Home
Jobs

9125 Terraform Jobs - Page 3

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Hyderābād

On-site

JOB DESCRIPTION

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorgan Chase within Consumer and Community Banking - Data Technology, you are an integral part of an agile team that works to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities
- Executes creative software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Develops secure, high-quality production code, and reviews and debugs code written by others
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve the overall operational stability of software applications and systems
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture
- Leads communities of practice across Software Engineering to drive awareness and use of new and leading-edge technologies
- Becomes a technical mentor in the team

Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 5+ years of applied experience
- Experience in software engineering, including hands-on expertise in ETL/data pipelines and data lake platforms such as Teradata and Snowflake
- Hands-on practical experience delivering system design, application development, testing, and operational stability
- Proficiency in AWS services, especially Aurora Postgres RDS
- Proficiency in automation and continuous delivery methods
- Proficient in all aspects of the Software Development Life Cycle
- Advanced understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
- Demonstrated proficiency in software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
- In-depth knowledge of the financial services industry and its IT systems

Preferred qualifications, capabilities, and skills
- Experience re-engineering and migrating on-premises data solutions to and for the cloud
- Experience with Infrastructure as Code (Terraform) for cloud-based data infrastructure
- Experience building on emerging cloud serverless managed services to minimize or eliminate the physical/virtual server footprint
- Advanced Java, plus Python (nice to have)

ABOUT US
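The listing above pairs ETL/data-pipeline expertise with automation. As a rough, hedged illustration of the kind of pipeline hygiene step such roles involve, here is a minimal Python transform; the record shape and field names are invented for the example, not taken from any JPMorgan system:

```python
# Minimal sketch of an ETL-style cleaning step. The record schema
# (id/amount) is hypothetical and purely illustrative.

def clean_records(rows):
    """Drop rows missing an id and normalise amounts to float."""
    cleaned = []
    for row in rows:
        if not row.get("id"):
            continue  # skip unusable rows instead of failing the whole batch
        cleaned.append({"id": row["id"], "amount": float(row.get("amount", 0))})
    return cleaned

raw = [{"id": "a1", "amount": "10.5"}, {"amount": "3"}, {"id": "b2"}]
print(clean_records(raw))  # [{'id': 'a1', 'amount': 10.5}, {'id': 'b2', 'amount': 0.0}]
```

In a real pipeline this step would typically run inside a scheduled job and write to a staging table rather than printing.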

Posted 4 hours ago

Apply

0 years

3 - 5 Lacs

Hyderābād

On-site

Senior Analyst – Infrastructure Support - Deloitte Support Services India Private Limited

Work you'll do
You will be responsible for performing technical analysis of issues and outages as they occur across Deloitte's Core IT Systems. This individual also performs research to troubleshoot and resolve the issue or, depending on complexity, escalates the issue to higher-level systems administrators and network engineers. Responsible for researching and documenting various mitigation strategies, and must maintain a current and thorough knowledge of customer technologies and their significance to customer operations. This individual must be able to prioritize remediation of issues in a 24x7 environment with critical uptime requirements. This role requires the individual to provide support during US business hours. You will collaborate closely with various teams, including system administrators, database administrators, and network engineers, to understand the business requirements and bring technical solutions to the leadership team. The position requires existing knowledge and experience combined with demonstrated excellence in taking ownership of problems, leading technical discussions, transferring knowledge, and innovation.

Illustrative Duties and Responsibilities:
- Monitor system and service performance, identifying issues or disruptions in real time
- Lead troubleshooting efforts for incidents, working to quickly resolve service interruptions
- Coordinate and communicate effectively during incidents to ensure timely updates to stakeholders
- Provide critical support to the firm's IT infrastructure, applications, and cloud resources on performance, availability, and security
- Provide timely response to all incidents, outages, and actionable alerts; categorize issues for escalation to appropriate technical teams
- Recognize, identify, and prioritize incidents in accordance with customer business requirements, organizational policies, and operational impact
- Build scripts that make data evaluation processes more flexible and scalable across data sets
- Develop scripts and dashboards to automate visualization and device monitoring
- Evaluate internal systems for efficiency, problems, and inaccuracies, developing and maintaining protocols for handling, processing, and cleaning data
- Perform post-incident analysis to identify the root causes of issues and contribute to implementing long-term solutions to prevent recurrence
- Document findings and propose changes to system design or processes to improve reliability
- Implement cost-effective solutions and ensure efficient resource utilization
- Knowledge of cloud automation and infrastructure-as-code tools
- Monitor performance metrics and generate reports
- Support multiple technical teams in 24x7 operational environments with high uptime requirements, providing support during US business hours
- Working knowledge of cloud and on-premises infrastructure alerts and issues
- Collaborate with support teams to troubleshoot and diagnose problems
- Collect and review performance reports for various systems, and report trends in hardware and application performance to assist senior technical personnel in predicting future issues or outages
- Monitor a wide variety of information and network systems
- Document in accordance with standard organizational policies and procedures
- Notify customers and third-party service providers of issues, outages, and remediation status
- Work with internal and external technical and service teams to create and/or update knowledge base articles and the CMDB
- Perform basic systems testing and operational tasks, including but not limited to event correlation and event aggregation
- Experience managing major incidents and contributing to reducing the team's average MTTR
- Raise problem tickets for monitoring and event management issues, such as incorrect or missing alerts, and provide inputs for root cause analysis
- Identify and address day-to-day continuous improvement activities
- Perform miscellaneous job-related duties as assigned by the team manager
- Assist with fulfillment of basic service requests

Education: Bachelor's degree in computer science, information technology, or a related field (or equivalent work experience)
Years of Experience: 3-5
Relevant Experience: Directly related experience, including working knowledge of Technology Operations and Incident/Major Incident or Problem Management

Technical Skills:
- A strong automation and scripting background
- A strong understanding of IT systems and infrastructure, with familiarity with the relevant technologies and tools used for incident management
- Excellent communication skills, both written and verbal, essential for coordinating efforts and providing updates during major incidents
- Strong analytical and problem-solving skills to assess incidents, identify root causes, and develop effective solutions
- Working knowledge of enterprise infrastructure, both on-premises and cloud-hosted
- Hands-on experience with ITSM modules, including ServiceNow
- Hands-on experience with application monitoring tools: SCOM, Azure Monitor, Dynatrace, OMS, Moogsoft, Nagios, New Relic, or any market-standard IT operations management tool is a plus
- Hands-on experience with web technologies (IIS, SharePoint, Apache, etc.) and awareness of basic database concepts (SQL or Oracle)
- Hands-on experience supporting and resolving cloud infrastructure incidents/events
- Knowledge of cloud automation and infrastructure-as-code tools (e.g., Terraform, AWS CloudFormation)
- Create and implement end-to-end automation solutions for various processes and systems using PowerShell and Python
- Identify opportunities for process automation to improve efficiency and productivity
- Foundational ability to analyze data and compare it with defined performance measures
- Ability to follow SOPs and documented workflows
- Foundational knowledge of current business technologies, frameworks, methods, and tools
- Diagnose and resolve automation-related issues
- Ability to assist with the identification of key issues or trends
- Ability to comprehend information risk concepts and principles
- Work closely with cross-functional teams, including software developers and quality assurance engineers
- Ability to communicate technical and security concepts to a non-technical audience, both in writing and verbally
- Working knowledge of the ITIL methodology and the SAFe Agile framework
- Understanding of information systems and cybersecurity
- Experience with Site Reliability Engineering (SRE) practices
- Familiarity with observability tools such as Prometheus, Grafana, New Relic, or similar
- Certifications in cloud technologies, ITIL, or service management are a plus

Certifications Preferred
- Microsoft Certified Systems Engineer (MCSE)
- Microsoft Certified Solutions Expert (MCSE)
- Azure certification
- AWS certification
- ITIL Foundation

Core Competencies
- Prior experience with programming and writing automated test scripts
- Ability to communicate effectively across all levels of management
- Highly collaborative personality with excellent written and verbal communication skills
- Ability to manage and prioritize multiple tasks under pressure
- Ability to work both independently and collaboratively with the team
- Self-directed and detail-oriented; moderate supervision with some latitude for independent judgment
- Flexible, calm, and professional demeanor in a fast-paced, high-stress environment
- Highly motivated self-starter with a strong desire to learn

Work Location: Hyderabad

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 304275
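The responsibilities above mention reducing the team's average MTTR. As a brief sketch of what that metric measures, here is a small Python calculation over hypothetical incident open/resolve timestamps (the data and function name are invented for illustration):

```python
# Illustrative MTTR (mean time to resolve) computation over sample
# incident records. Timestamps are hypothetical, not real incidents.
from datetime import datetime

def mttr_minutes(pairs):
    """Average resolution time, in minutes, over (opened, resolved) pairs."""
    fmt = "%Y-%m-%d %H:%M"
    durations = [
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
        for start, end in pairs
    ]
    return sum(durations) / len(durations)

incidents = [
    ("2024-01-01 10:00", "2024-01-01 11:30"),  # 90 minutes
    ("2024-01-02 09:00", "2024-01-02 09:30"),  # 30 minutes
]
print(mttr_minutes(incidents))  # 60.0
```

A real implementation would usually pull these timestamps from an ITSM tool such as ServiceNow rather than hard-coding them.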

Posted 4 hours ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

AWS DevSecOps Engineer – CL4

Role Overview:
As a DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:
- Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient, and secure pipelines with low operating costs, meeting platform/technology KPIs.
- Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices. Be responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.), and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities.
- Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like Blue-Green and Canary to minimize downtime and enable A/B testing. Always be hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies.
- Customer-Centric Engineering: Develop lean, yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices.
- Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
- Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation.
- Advanced Technical Proficiency: Possess intermediary knowledge of modern software engineering practices and principles, including Agile methodologies, DevSecOps, and Continuous Integration/Continuous Deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate an intermediate-level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning.
- Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility.
- Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs backed by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives.
- Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions.

The team:
US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Key Qualifications:
- A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required; experience is the most relevant factor.
- Strong software engineering foundation with a deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 5+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 5+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 5+ years of hands-on experience in security tools automation, SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP.
- 5+ years of hands-on experience using Infrastructure as Code (IaC) technologies such as Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- 2+ years of hands-on experience with cloud-native services such as data lakes, CDNs, API gateways, managed PaaS, and security services on multiple cloud providers (AWS, Azure, GCP) is preferred.
- Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices and of database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

How You Will Grow:
At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302803
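The role above calls out Blue-Green and Canary deployment techniques. One common building block of a canary rollout is deterministic traffic splitting, so that the same user consistently lands on the same version. A hedged Python sketch (the function name and routing scheme are illustrative, not a specific product's implementation):

```python
# Sketch of deterministic canary routing: hash each user id into one of
# 100 buckets and send a stable slice of users to the canary release.
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Return 'canary' for roughly canary_percent of users, stably per user."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

print(route("user-123", 0))    # always "stable" at 0%
print(route("user-123", 100))  # always "canary" at 100%
```

Because the hash is stable, ramping from 5% to 25% only adds users to the canary slice; nobody flips back and forth between versions mid-rollout.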

Posted 4 hours ago

Apply

0 years

0 Lacs

Telangana

On-site

At Bayer we're visionaries, driven to solve the world's toughest challenges and striving for a world where 'Health for all, Hunger for none' is no longer a dream, but a real possibility. We're doing it with energy, curiosity, and sheer dedication, always learning from the unique perspectives of those around us, expanding our thinking, growing our capabilities, and redefining 'impossible'. There are so many reasons to join us. If you're hungry to build a varied and meaningful career in a community of brilliant and diverse minds to make a real difference, there's only one choice.

Manager Service Delivery

POSITION PURPOSE:
Working in a team of infrastructure specialists and engineers, an infrastructure engineer supports and maintains infrastructure solutions and services as directed and according to architectural guidelines. Individuals in this role will:
- Ensure services are delivered and used as required
- Work with and support third parties to provide infrastructure services

YOUR TASKS AND RESPONSIBILITIES:

Infrastructure Fundamentals
- Build, configure, administer, and support infrastructure technologies and solutions. These can include computing, storage, networking, physical infrastructure, software, commercial off-the-shelf (COTS) products, and open source packages and solutions. They can also include virtual and cloud computing such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Modern Standards Approach
- Build proficiency in the most important principles of a modern standards approach and awareness of how these standards apply to the work undertaken
- Apply these principles under guidance

Ownership and Initiative
- Own an issue until a new owner has been found or the problem has been mitigated or resolved

Problem Management
- Investigate problems in systems, processes, and services, with an understanding of the level of a problem (e.g., strategic, tactical, or operational)
- Contribute to the implementation of remedies and preventative measures

Service Focus
- Take inputs from stakeholders and establish solutions that facilitate the achievement of business objectives

Systems Design
- Develop proficiency in the scripting tools and software that are essential in the design, build, management, and operation of infrastructure solutions and services
- Use scripting to automate common infrastructure management and operations tasks
- Translate logical designs into physical designs
- Produce detailed designs
- Effectively document all work using required standards, methods, and tools, including prototyping tools where appropriate
- Design systems characterized by managed levels of risk, manageable business and technical complexity, and meaningful impact
- Work with well-understood technology and identify patterns

Systems Integration
- Build and test simple interfaces between systems
- Work on more complex integration as part of a wider team

Technical Understanding
- Develop proficiency with core technical concepts related to the role and apply them with guidance

Testing
- Correctly execute test scripts under supervision
- Effectively incorporate testing into ways of working and delivered solutions and services

Troubleshooting and Problem Resolution
- Troubleshoot and identify problems across different technology capabilities

Site Reliability Engineering (SRE)
- Apply SRE principles to enhance the reliability, scalability, and performance of critical IT infrastructure and services
- Design and implement monitoring, alerting, and incident response strategies to ensure high availability and rapid recovery
- Collaborate with cross-functional teams to automate operational tasks and improve system observability

Operational Technology (OT) Expertise & Security
- Integrate Operational Technology (OT) systems with enterprise IT environments, ensuring secure and efficient data flow between industrial and business systems
- Support and maintain OT infrastructure including SCADA, PLCs, and industrial network protocols, with a focus on cybersecurity and compliance
- Drive continuous improvement initiatives across IT and OT domains to support digital transformation and smart manufacturing goals
- Identify information security risks and the controls that can be used to mitigate threats within solutions and services

WHO YOU ARE:

Required
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
- Proven experience in infrastructure engineering or a related role
- Experience with network administration, server management, and virtualization technologies
- Experience with cloud platforms (AWS, Azure, Google Cloud) and cloud infrastructure
- Proficiency in network protocols and technologies (TCP/IP, DNS, DHCP, VPN, etc.)
- Strong understanding of server and storage systems
- Experience with virtualization technologies
- Familiarity with scripting and automation
- Strong problem-solving skills and the ability to troubleshoot complex issues
- Ability to analyze system performance and identify optimization opportunities
- Capacity to understand and mitigate security risks
- Ability to work collaboratively in a team environment

Preferred
- Advanced certifications (e.g., CCNP, Microsoft Certified, AWS Certified, Azure Solutions Expert)
- Experience with DevOps practices and tools
- Familiarity with Infrastructure as Code tools (Terraform, Ansible, Chef, Puppet)
- Understanding of compliance

Ever feel burnt out by bureaucracy? Us too. That's why we're changing the way we work, for higher productivity, faster innovation, and better results. We call it Dynamic Shared Ownership (DSO). Learn more about what DSO will mean for you in your new role here: https://www.bayer.com/enfstrategyfstrategy

Bayer does not charge any fees whatsoever for the recruitment process. Please do not entertain any demand for payment by any individuals/entities in connection with recruitment with any Bayer Group entity worldwide under any pretext. Please don't rely on any unsolicited email from email addresses not ending with the domain name "bayer.com", or job advertisements referring you to an email address that does not end with "bayer.com". To check the authenticity of such emails or advertisements, you may approach us at HROP_INDIA@BAYER.COM.

YOUR APPLICATION
Bayer is an equal opportunity employer that strongly values fairness and respect at work. We welcome applications from all individuals, regardless of race, religion, gender, age, physical characteristics, disability, sexual orientation, etc. We are committed to treating all applicants fairly and avoiding discrimination.

Location: India : Telangana : Shameerpet
Division: Crop Science
Reference Code: 847905
Contact Us: +022-25311234
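The SRE duties above include designing monitoring and alerting. At its simplest, alerting is a comparison of current metric values against per-metric thresholds; the sketch below illustrates that idea in Python (metric names and thresholds are invented, and a real setup would use a tool like Prometheus rather than hand-rolled checks):

```python
# Toy threshold-alert evaluation: return the metrics currently breaching
# their configured limits. All numbers here are illustrative.
def evaluate(metrics, thresholds):
    """Return names of metrics whose value exceeds the configured threshold."""
    return [name for name, value in metrics.items()
            if value > thresholds.get(name, float("inf"))]

current = {"cpu_percent": 92, "disk_percent": 40}
limits = {"cpu_percent": 85, "disk_percent": 80}
print(evaluate(current, limits))  # ['cpu_percent']
```

Metrics without a configured threshold default to infinity, so they never alert; that choice keeps unknown metrics quiet rather than noisy.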

Posted 4 hours ago

Apply

10.0 years

0 Lacs

Hyderābād

On-site

Project description
Luxoft, a DXC Technology Company, is an established company focusing on consulting and implementation of complex projects in the financial industry. At the interface between technology and business, we convince with our know-how, well-founded methodology, and pleasure in success. As a reliable partner to our renowned customers, we support them in planning, designing, and implementing the desired innovations. Together with the customer, we deliver top performance! For one of our clients in the insurance segment, we are searching for a .NET Full Stack Developer.

Responsibilities
- Delivering assigned tasks within the delivery cycle of an application development project. Tasks may include installing new systems applications; updating applications; performing configuration and testing activities; and applications programming for assigned modules within a larger program.
- Working under supervision from the technical lead/project manager or a senior developer to accomplish assigned tasks, while contributing to the design of specific deliverables and assisting in the development of technical solutions.
- Design, development, and testing using .NET technologies.
- Helping maintain a rigorous software build and testing framework for continuously building and testing the developed software, and keeping track of failed builds or build issues.
- Preparing software technical documentation based on functional documentation and specifications, taking into account any specified functional and technical requirements.
- You will be part of a fast-growing and exciting division whose culture is entrepreneurial, professional, and rooted in teamwork and innovation. You will participate as part of a team and maintain good relationships with team members and customers. You are expected to work within an international environment, using a broad set of technologies and frameworks.

Skills

Must have
- At least 10 years of total proven hands-on experience with .NET technologies, of which at least 5+ years on full-stack development with C#, .NET, Angular (in-support versions), SQL, Java, and RESTful APIs
- Strong proficiency in the .NET framework and the C# programming language
- Familiarity with microservices architecture and its implementation
- Solid understanding of web development best practices, design patterns, and architecture
- IBM DB2 knowledge
- Experience with internal private cloud implementations via OpenStack and OpenShift platforms via IaC (Terraform)
- Enterprise content management architectures
- Basic knowledge of Linux

Nice to have
- Insurance industry experience
- Prism Doc for Java application knowledge

Other
Languages: English: C1 Advanced
Seniority: Lead
Hyderabad, IN, India
Req. VR-115132
C#/VB.NET
BCM Industry
17/06/2025

Posted 4 hours ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview * Core Technology Infrastructure (CTI), part of the Global Technology & Operations organization, consists of more than 6,600 employees worldwide. 
With a presence in more than 35 countries, CTI designs, builds and operates end-to-end technology infrastructure solutions and manages critical systems and platforms across the bank. CTI delivers industry-leading infrastructure products and services to the company’s employees, customers and clients around the world. Job Description* Terraform Software Developer – The candidate will be responsible for developing automation tools focused on Terraform Enterprise. Experience should include Terraform development and administration (back end of platform), system administration (primarily Linux), and integration with other automation tools like Horizon, Ansible Platform and GitHub. Understanding of SDLC processes and tools. Experience with cloud infrastructure as code, APIs, YAML, HCL, Python. The role also requires operational experience with monitoring of systems, incident, and problem management. Responsibilities * Experience using Terraform. Review Bitbucket feature files and branching strategy; maintain Bitbucket branches. Evaluate services of Azure & AWS and use Terraform to develop modules. Improve and optimize deployment challenges and help in delivering reliable solutions. Interact with technical leads and architects to discover solutions that help solve challenges faced by Product Engineering teams. Be part of an enriching team and solve real production engineering challenges. Improve knowledge in the areas of DevOps & Cloud Engineering by using enterprise tools and contributing to project success. Programming or scripting skills in Python/PowerShell. Any related cloud certification is nice to have. Ensure that all system deliverables meet quality objectives in functionality, performance, stability, security, accessibility, and data quality. Provide work breakdown and estimates for tasks on agreed scope and development milestones to meet overall project timelines. Experience with the Agile/Scrum methodology. Strong verbal and written communication skills.
Highly detail-oriented. Self-motivated, with the ability to work independently and as part of a team. Strong willingness & comfort taking on and challenging development approaches. Strong analytical and communication skills; ability to effectively work with both technical and non-technical resources. Must have strong debugging and troubleshooting skills. Able to implement and maintain Continuous Integration/Delivery (CI/CD) pipelines for the services. Able to implement and maintain automation required to improve code logistics from development to production. Assisting the team in instrumenting code for system availability. Maintaining and upgrading the deployment platforms as well as system infrastructure with Infrastructure-as-Code tools. Performing system administration and ad hoc duties. Requirements: Education* B.E. / B.Tech / M.E. / M.Tech / MCA Experience Range* 8+ years Foundational Skills* Terraform development experience Terraform Enterprise administration/operations Go language Java or .NET programming knowledge Python or shell scripting Database query development experience Desired Skills* AWS Change Management Horizon tools (Ansible, Jira, Confluence, Bitbucket) CI/CD tools (GitHub, Jenkins, Artifactory) GCP JIRA Agile methodology Python PowerShell HashiCorp Configuration Language (HCL) Infrastructure as Code (IaC) Cloud integration (Azure, AWS, GCP) Linux administration Site Reliability Engineering Work Timings* 10.30 AM to 7.30 PM Job Location* Chennai, Hyderabad, Mumbai
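The posting's theme of driving Terraform Enterprise from automation pipelines can be illustrated with a small sketch (not from the posting; the variable names are invented). Terraform automatically loads a file named terraform.tfvars.json, so automation code often renders pipeline-computed values to that JSON form rather than templating HCL directly:

```python
import json

def render_tfvars(variables):
    """Render a dict as Terraform-readable tfvars JSON.

    A CI job can write this string to terraform.tfvars.json before
    running `terraform plan`/`apply`, injecting pipeline-computed
    values without editing any .tf source files.
    """
    return json.dumps(variables, indent=2, sort_keys=True)

# Hypothetical values a pipeline might compute per environment.
tfvars = render_tfvars({
    "environment": "dev",
    "instance_count": 2,
    "tags": {"owner": "platform-team"},
})
```

The corresponding Terraform module would declare matching `variable` blocks for each key; the JSON form sidesteps HCL string-escaping issues when values come from another program.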

Posted 4 hours ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

Remote

Job Description Role Overview: A Data Engineer is responsible for designing, building, and maintaining robust data pipelines and infrastructure that facilitate the collection, storage, and processing of large datasets. They collaborate with data scientists and analysts to ensure data is accessible, reliable, and optimized for analysis. Key tasks include data integration, ETL (Extract, Transform, Load) processes, and managing databases and cloud-based systems. Data engineers play a crucial role in enabling data-driven decision-making and ensuring data quality across organizations. What will you do in this role: Develop comprehensive High-Level Technical Design and Data Mapping documents to meet specific business integration requirements. Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories. Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality. Define and implement robust Test Strategies and Test Plans, ensuring end-to-end accountability for middleware testing and evidence management. Collaborate with the Solutions Architecture and Business analyst teams to analyze system requirements and prototype innovative integration methods. Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all necessary actions to ensure the delivery of high-quality, scalable products. Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams. Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities. 
Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts. Analyze data requirements and translate them into technical specifications for ETL processes. Develop and maintain ETL workflows, ensuring optimal performance and error handling mechanisms are in place. Monitor and troubleshoot ETL processes to ensure timely and successful data delivery. Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies. Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer. What should you have: Bachelor’s degree in Information Technology, Computer Science or any technology stream 5+ years of working experience with enterprise data integration technologies – Informatica PowerCenter, Informatica Intelligent Data Management Cloud Services (CDI, CAI, Mass Ingest, Orchestration) Integration experience utilizing REST and custom API integration Experience with relational database technologies and cloud data stores from AWS, GCP & Azure Experience utilizing the AWS Well-Architected Framework, deployment & integration and data engineering. Preferred experience with CI/CD processes and related tools including Terraform, GitHub Actions, Artifactory, etc. Proven expertise in Python and shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency Extensive experience designing reusable integration patterns using cloud-native technologies Extensive experience with process orchestration and scheduling integration jobs in AutoSys and Airflow.
Experience in Agile development methodologies and release management techniques Excellent analytical and problem-solving skills Good understanding of data modeling and data architecture principles Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business, Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Management Process, Social Collaboration, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R353285
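The extract, cleanse/validate, and load responsibilities described in this posting follow a generic ETL pattern that can be sketched with the standard library alone. This is not Informatica-specific; the table and fields below are invented for the example:

```python
import csv
import io
import sqlite3

# Toy source data standing in for an extracted flat file.
RAW = "id,amount,currency\n1,10.50,usd\n2,7.25,USD\n3,,usd\n"

def extract(text):
    """Parse the raw feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cleansing/validation step: reject rows with missing amounts,
    normalise currency codes, and cast types."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # reject incomplete records
        out.append((int(r["id"]), float(r["amount"]), r["currency"].upper()))
    return out

def load(rows):
    """Load validated rows into a (here in-memory) target table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE payments (id INTEGER, amount REAL, currency TEXT)")
    con.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    return con

con = load(transform(extract(RAW)))
total = con.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

A production pipeline would add the monitoring, RCA, and data-fix hooks the posting mentions around each stage; the three-function shape stays the same.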

Posted 4 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderābād

On-site

We are seeking a skilled Data Engineer with strong experience in Azure Data Services, Databricks, SQL, and PySpark to join our data engineering team. The ideal candidate will be responsible for building robust and scalable data pipelines and solutions to support advanced analytics and business intelligence initiatives. Key Responsibilities:  Design and implement scalable and secure data pipelines using Azure Data Factory, Databricks, and Synapse Analytics.  Develop and maintain efficient ETL/ELT workflows into and within Databricks.  Write complex SQL queries for data extraction, transformation, and analysis.  Develop and optimize data transformation scripts using PySpark.  Ensure data quality, data governance, and performance optimization across all pipelines.  Collaborate with data architects, analysts, and business stakeholders to deliver reliable data solutions.  Perform data modelling and design for both structured and semi-structured data.  Monitor data pipelines and troubleshoot issues to ensure data integrity and timely delivery.  Contribute to best practices in cloud data architecture and engineering. Required Skills:  4–8 years of experience in data engineering or related fields.  Strong experience with Azure Data Services (ADF, Synapse, Databricks, Azure Storage).  Proficient with the Snowflake data warehouse – including data ingestion, Snowpipe, streams & tasks.  Advanced SQL skills, including performance tuning and complex query building.  Hands-on experience with PySpark for large-scale data processing and transformation.  Experience with ETL/ELT frameworks, orchestration, and scheduling.  Familiarity with data modelling concepts (dimensional/star schema).  Good understanding of data security, role-based access, and auditing in Snowflake and Azure. Preferred/Good to Have:  Experience with CI/CD pipelines and DevOps for data workflows.  Exposure to Power BI or similar BI tools.
 Familiarity with Git, Terraform, or infrastructure-as-code (IaC) in cloud environments.  Experience with Agile/Scrum methodologies Job Type: Full-time Work Location: In person
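One transformation this posting alludes to, de-duplicating incrementally ingested records so only the latest version per key survives, can be sketched with stdlib SQLite; the same ROW_NUMBER window pattern carries over to Spark SQL or Synapse. Table and column names here are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stock (sku TEXT, qty INTEGER, loaded_at TEXT)")
con.executemany(
    "INSERT INTO stock VALUES (?, ?, ?)",
    [
        ("A1", 5, "2024-01-01"),
        ("A1", 3, "2024-01-02"),  # later load supersedes the first
        ("B2", 9, "2024-01-01"),
    ],
)

# Keep only the latest record per SKU: rank loads newest-first within
# each key, then keep rank 1.
latest = con.execute(
    """
    SELECT sku, qty FROM (
        SELECT sku, qty,
               ROW_NUMBER() OVER (PARTITION BY sku ORDER BY loaded_at DESC) AS rn
        FROM stock
    ) WHERE rn = 1
    ORDER BY sku
    """
).fetchall()
```

In PySpark the equivalent is `row_number().over(Window.partitionBy("sku").orderBy(col("loaded_at").desc()))` followed by a filter on the rank column.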

Posted 4 hours ago

Apply

0 years

25 Lacs

India

On-site

Sr. Python Developer  Experience working with the Python SDKs for AWS, GCP and OCI is a plus.  Strong knowledge of Python development, with hands-on experience in API and ORM frameworks like Flask and SQLAlchemy  Experience with async and event-based task execution programming  Strong knowledge of Windows and Linux environments  Experience with automation tools like Ansible or Chef  Hands-on experience with at least one cloud provider.  Good at writing Terraform or cloud-native templates.  Knowledge of container technology  Hands-on experience with CI/CD Job Types: Full-time, Permanent Pay: ₹2,500,000.00 per year Location Type: In-person Schedule: Day shift Work Location: In person
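The async and event-based task execution requirement can be sketched with the standard asyncio library: fan out independent cloud-SDK calls (simulated here with sleeps; the resource names and delays are invented) and gather their results concurrently instead of awaiting each in sequence:

```python
import asyncio

async def provision(resource, delay):
    # Stand-in for an async cloud SDK call (e.g. creating a VM or bucket).
    await asyncio.sleep(delay)
    return f"{resource}:ok"

async def main():
    # Independent provisioning tasks run concurrently; gather returns
    # results in task order regardless of completion order.
    tasks = [
        provision("vm", 0.02),
        provision("bucket", 0.01),
        provision("db", 0.0),
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

Real SDKs expose the same shape through their async clients (aiobotocore for AWS, for example), so the orchestration code above stays recognisable.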

Posted 4 hours ago

Apply

10.0 years

7 - 20 Lacs

India

On-site

About MostEdge At MostEdge, we’re on a mission to accelerate commerce and build sustainable, trusted experiences. Our slogan — Protect Every Penny. Power Every Possibility. — reflects our commitment to operational excellence, data integrity, and real-time intelligence that help retailers run smarter, faster, and stronger. Our systems are mission-critical and designed for 99.99999% uptime, powering millions of transactions and inventory updates daily. We work at the intersection of AI, microservices, and retail commerce—and we win as a team. Role Overview We are looking for a Senior Database Administrator (DBA) to own the design, implementation, scaling, and performance of our data infrastructure. You will be responsible for mission-critical OLTP systems spanning MariaDB, MySQL, PostgreSQL, and MongoDB, deployed across AWS, GCP, and containerized Kubernetes clusters. This role plays a key part in ensuring data consistency, security, and speed across billions of rows and real-time operations. Scope & Accountability What You Will Own Manage and optimize multi-tenant, high-availability databases for real-time inventory, pricing, sales, and vendor data. Design and maintain scalable, partitioned database architectures across SQL and NoSQL systems. Monitor and tune query performance and ensure fast recovery, replication, and backup practices. Partner with developers, analysts, and DevOps teams on schema design, ETL pipelines, and microservices integration. Maintain security best practices, audit logging, encryption standards, and data retention compliance. What Success Looks Like 99.99999% uptime maintained across all environments. <100ms query response times for large-scale datasets. Zero unplanned data loss or corruption incidents. Developer teams experience zero bottlenecks from DB-related delays. Skills & Experience Must-Have 10+ years of experience managing OLTP systems at scale. Strong hands-on with MySQL, MariaDB, PostgreSQL, and MongoDB.
Proven expertise in replication, clustering, indexing, and sharding. Experience with Kubernetes-based deployments, Kafka queues, and Dockerized apps. Familiarity with AWS S3 storage, GCP services, and hybrid cloud data replication. Experience in startup environments with fast-moving agile teams. Track record of creating clear documentation and managing tasks via JIRA. Nice-to-Have Experience with AI/ML data pipelines, vector databases, or embedding stores. Exposure to infrastructure as code (e.g., Terraform, Helm). Familiarity with LangChain, FastAPI, or modern LLM-driven architectures. How You Reflect Our Values Lead with Purpose: You enable smarter, faster systems that empower our retail customers. Build Trust: You create safe, accurate, and recoverable environments. Own the Outcome: You take responsibility for uptime, audits, and incident resolution. Win Together: You collaborate seamlessly across product, ops, and engineering. Keep It Simple: You design intuitive schemas, efficient queries, and clear alerts. Why Join MostEdge? Work on high-impact systems powering real-time retail intelligence. Collaborate with a passionate, values-driven team across AI, engineering, and operations. Build at scale—with autonomy, ownership, and cutting-edge tech. Job Types: Full-time, Permanent Pay: ₹727,996.91 - ₹2,032,140.73 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Evening shift Morning shift US shift Supplemental Pay: Performance bonus Yearly bonus Work Location: In person Expected Start Date: 31/07/2025
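As a rough illustration of the sharding expertise this role calls for, one common routing scheme hashes a business key with a stable digest and maps it onto a fixed shard list. The shard names are invented; a real deployment would also need a resharding/rebalancing strategy (e.g. consistent hashing) when the shard count changes:

```python
import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(key: str) -> str:
    """Route a tenant/record key to a shard by stable hashing.

    A fixed digest (rather than Python's built-in hash(), which is
    salted per process) keeps the mapping identical across processes
    and restarts, which matters for later lookups.
    """
    digest = hashlib.md5(key.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]
```

The same function runs in every application instance, so any service can locate a tenant's data without a central routing table.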

Posted 4 hours ago

Apply

40.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Overview KLA is a global leader in diversified electronics for the semiconductor manufacturing ecosystem. Virtually every electronic device in the world is produced using our technologies. No laptop, smartphone, wearable device, voice-controlled gadget, flexible screen, VR device or smart car would have made it into your hands without us. KLA invents systems and solutions for the manufacturing of wafers and reticles, integrated circuits, packaging, printed circuit boards and flat panel displays. The innovative ideas and devices that are advancing humanity all begin with inspiration, research and development. KLA focuses more than average on innovation and we invest 15% of sales back into R&D. Our expert teams of physicists, engineers, data scientists and problem-solvers work together with the world’s leading technology providers to accelerate the delivery of tomorrow’s electronic devices. Life here is exciting and our teams thrive on tackling really hard problems. There is never a dull moment with us. Group/Division Enabling the movement toward advanced chip design, KLA's Measurement, Analytics and Control group (MACH) is looking for the best and brightest research scientists, software engineers, application development engineers and senior product technology process engineers to join our team. The MACH team's mission is to collaborate with our customers to innovate technologies and solutions that detect and control highly complex process variations—at their source—rather than compensate for them at later stages of the manufacturing process. With over 40 years of semiconductor process control experience, chipmakers around the globe rely on KLA to ensure that their fabs ramp next-generation devices to volume production quickly and cost-effectively. 
Our MACH team develops leading-edge solutions for patterning process analytics and control technologies, thereby providing customers with critical insight at the feature level, field level and cross-wafer analysis. Our teams also develop advanced modeling simulation, data analytics and process control modeling technologies. As a member of the MACH team, you’ll be joining the most sophisticated and successful process-control company in the semiconductor industry--working across functions to solve the most complex technical problems in the digital age. Job Description/Preferred Qualifications Required Qualifications: Designing and implementing physical and virtual server infrastructures In-depth knowledge of one or more flavors of Linux: RedHat, CentOS, Rocky, Ubuntu Experience with systemd, iSCSI, multipathing, and Linux HA Experience creating Visio diagrams to document deployments Experience racking and cabling in a datacenter environment Ability to code and develop shell and Python scripts or experience using Ansible/Terraform Strong understanding of TCP/IP fundamentals and knowledge of protocols such as DNS, DHCP, HTTP, LDAP and SMTP. Experience with storage appliances Preferred Qualifications: Knowledge of Docker and Kubernetes deployments Experience with VMware or KVM virtualization environments Knowledge of network infrastructure technologies, such as firewalls, switches, and routers Knowledge of troubleshooting network and storage issues. Knowledge of cloud (AWS / Azure) IaaS, EC2, EKS, AKS, AVD, etc. Skills and Abilities: Team Orientation & Interpersonal – Highly motivated teammate with ability to develop and maintain collaborative relationships with all levels within and external to the organization. Organization & Time Management – Able to plan, schedule, organize, and follow up on tasks related to the job to achieve goals within or ahead of established time frames.
Multi-task - Ability to expeditiously organize, coordinate, manage, prioritize, and perform multiple tasks simultaneously to swiftly assess a situation, determine a logical course of action, and apply the appropriate response. Adaptability to Change – Able to be flexible and supportive, and able to assimilate change positively and proactively in a rapid-growth environment. Minimum Qualifications Doctorate (Academic) Degree and 0 years related work experience; Master's Level Degree and related work experience of 3 years; Bachelor's Level Degree and related work experience of 5 years We offer a competitive, family friendly total rewards package. We design our programs to reflect our commitment to an inclusive environment, while ensuring we provide benefits that meet the diverse needs of our employees. KLA is proud to be an equal opportunity employer. Be aware of potentially fraudulent job postings or suspicious recruiting activity by persons that are currently posing as KLA employees. KLA never asks for any financial compensation to be considered for an interview, to become an employee, or for equipment. Further, KLA does not work with any recruiters or third parties who charge such fees either directly or on behalf of KLA. Please ensure that you have searched KLA’s Careers website for legitimate job postings. KLA follows a recruiting process that involves multiple interviews in person or on video conferencing with our hiring managers. If you are concerned that a communication, an interview, an offer of employment, or that an employee is not legitimate, please send an email to talent.acquisition@kla.com to confirm the person you are communicating with is an employee. We take your privacy very seriously and confidentially handle your information.

Posted 4 hours ago

Apply

2.0 years

1 - 9 Lacs

Hyderābād

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer II at JPMorgan Chase within the Consumer and Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives. Job responsibilities Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture Contributes to software engineering communities of practice and events that explore new and emerging technologies Adds to team culture of diversity, equity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 2+ years of applied experience Expertise and good hands-on experience with Kubernetes, Terraform and AWS.
Full SDLC lifecycle for software deployment - release management and SDLC, including experience with Jenkins as well as Spinnaker pipeline deployments Proficient with DevOps practices and CI/CD pipelines Advanced in one or more programming language(s) - Python, Java, Groovy Third-Party Vendor Data Management and Lifecycle, and Engagement for Trouble Tickets using DevOps Process Must adhere to weekly support rotation schedules including weekends (standard DevOps cadence) Preferred qualifications, capabilities, and skills Familiarity with modern front-end technologies Exposure to cloud technologies

Posted 4 hours ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India . Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery. Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments. Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering. Preferred qualifications: Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar. Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations. Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management. Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors. Experience in building prototypes or applications. Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems. About the job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. 
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 4 hours ago

Apply

2.0 years

4 - 7 Lacs

Hyderābād

On-site

Working in Application Support means you'll use both creative and critical thinking skills to maintain application systems that are crucial to the daily operations of the firm. As an Application Support at JPMorgan Chase within the Employee Platform, you will use both creative and critical thinking skills to maintain application systems that are crucial to the daily operations of the firm. You'll work collaboratively in teams on a wide range of projects based on your primary area of focus: design or programming. While learning to fix application and data issues as they arise, you'll also gain exposure to software development, testing, deployment, maintenance, and improvement, in addition to production lifecycle methodologies and risk guidelines. Finally, you'll have the opportunity to develop professionally — and to grow your career in any direction you choose. Job responsibilities Participates in triaging, examining, diagnosing, and resolving incidents and work with others to solve problems at their root. Participate in weekend support rota to ensure adequate business support coverage during core hours and weekend (rota basis) as part of a global team. Assist in the monitoring of production environments for anomalies and address issues utilizing standard observability tools. Identify issues for escalation and communication and provide solutions to the business and technology stakeholders. Participates in root cause calls and drives actions to resolution with a keen focus on preventing incidents. Recognizes the manual activity within your role and proactively works towards eliminating it through either system engineering or updating application code. Required qualifications, capabilities, and skills Formal training or certification on Application Support concepts and 2+ years of experience or equivalent expertise troubleshooting, resolving, and maintaining information technology services. Experience in observability and monitoring tools and techniques.
Experience with one or more general-purpose programming languages (Python or C#) and/or automation scripting (PowerShell) Familiar with tools such as Splunk, ServiceNow, Dynatrace, etc. Experience with CI/CD tools like Jenkins, Bitbucket, GitLab, Terraform Eagerness to participate in learning opportunities to enhance one’s effectiveness in executing day-to-day project activities. Preferred qualifications, capabilities, and skills Experience and understanding of Genetec Security Desk Understanding of cloud infrastructure
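A toy sketch of the monitoring side of this role: computing an error rate from log lines and comparing it to an alert threshold, the sort of query a tool like Splunk or Dynatrace runs at scale. The log lines and the threshold value are invented for the example:

```python
def error_rate(lines):
    """Fraction of log lines at ERROR level, a basic health signal
    an observability dashboard would chart and alert on."""
    if not lines:
        return 0.0
    errors = sum(1 for line in lines if " ERROR " in line)
    return errors / len(lines)

LOGS = [
    "2024-01-01T10:00:00 INFO  request handled",
    "2024-01-01T10:00:01 ERROR upstream timeout",
    "2024-01-01T10:00:02 INFO  request handled",
    "2024-01-01T10:00:03 ERROR upstream timeout",
]

rate = error_rate(LOGS)
alert = rate > 0.25  # hypothetical alerting threshold
```

In practice the same check would run over a sliding time window and feed a paging system rather than a boolean, but the ratio-versus-threshold shape is the core of most error-rate alerts.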

Posted 4 hours ago

Apply

5.0 years

0 Lacs

Gurgaon

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success. About the role We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful, and who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will be able to support the visualizations team. Roles and Responsibilities Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source.
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Be efficient in ETL/ELT development using Azure cloud services and Snowflake, including testing and operation/support processes (RCA of production issues, code/data fix strategy, monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high quality and efficient solutions. Build cross-platform data strategy to aggregate multiple sources and process development datasets. Be proactive in stakeholder communication; mentor/guide junior resources through regular KT/reverse KT, help them identify production bugs/issues if needed, and provide resolution recommendations. Job Requirements Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment. 5+ years of experience with setting up and operating data pipelines using Python or SQL 5+ years of advanced SQL programming: PL/SQL, T-SQL 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads.
5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills & excellent written and verbal communications skills. Self-starter and motivated with ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools. Knowledge Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools. Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting.
ADF, Databricks and Azure certification is a plus. Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake #LI-DS1
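To illustrate the kind of ETL work this role centres on, here is a minimal sketch of an incremental (watermark-based) extract-transform-load in plain Python. The in-memory source and warehouse, the business rule, and all names are illustrative stand-ins, not Circle K's actual stack:

```python
from datetime import datetime

# Toy "source system" rows, each carrying an updated_at timestamp.
SOURCE = [
    {"id": 1, "amount": 100, "updated_at": datetime(2024, 1, 1)},
    {"id": 2, "amount": 250, "updated_at": datetime(2024, 1, 3)},
    {"id": 3, "amount": -50, "updated_at": datetime(2024, 1, 5)},
]

def extract(source, watermark):
    """Pull only rows changed since the last successful load."""
    return [row for row in source if row["updated_at"] > watermark]

def transform(rows):
    """Apply a simple (invented) business rule: drop negative amounts."""
    return [row for row in rows if row["amount"] >= 0]

def load(target, rows):
    """Upsert rows into the in-memory 'warehouse' keyed by id."""
    for row in rows:
        target[row["id"]] = row

def run_pipeline(source, target, watermark):
    load(target, transform(extract(source, watermark)))
    # Advance the watermark to the newest timestamp seen in the source.
    return max((r["updated_at"] for r in source), default=watermark)
```

In a real Azure/Snowflake pipeline the extract would read from a source system via ADF or a connector and the load would upsert into Snowflake, but the watermark pattern is the same.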

Posted 4 hours ago

Apply

3.0 years

0 Lacs

Gurgaon

On-site

Job Description Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role in partnering with our Technical Development stakeholders to enable analytics for long-term success. About the role We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed, determined, and motivated to make their team successful: a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes.
Roles and Responsibilities Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, Data lakes among other structured and unstructured storage options Determine solutions that are best suited to develop a pipeline for a particular data source Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development Efficient in ELT/ETL development using Azure cloud services and Snowflake, including Testing and operational support (RCA, Monitoring, Maintenance) Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability) Stay current with and adopt new tools and applications to ensure high quality and efficient solutions Build cross-platform data strategy to aggregate multiple sources and process development datasets Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendation Job Requirements Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment 3+ years of experience with setting up and operating data pipelines using Python or SQL 3+
years of advanced SQL Programming: PL/SQL, T-SQL 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads 3+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions 3+ years of experience in defining and enabling data quality standards for auditing, and monitoring Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts Understanding of REST and good API design Experience working with Apache Iceberg, Delta tables and distributed computing frameworks Strong collaboration, teamwork skills, excellent written and verbal communications skills Self-starter and motivated with ability to work in a fast-paced development environment Agile experience highly desirable Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools Preferred Skills Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management) Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools Strong 
Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance) Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting ADF, Databricks and Azure certification is a plus Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake #LI-DS1
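The "Data Reconciliation and Audit" skill mentioned above can be sketched as a simple post-load check. This is an illustrative pure-Python version comparing row counts, a control total, and key coverage; a real implementation would typically run the same checks as SQL against source and target systems:

```python
def reconcile(source_rows, target_rows, key="id", measure="amount"):
    """Audit a load by comparing source and target.

    Returns a dict of findings; an empty 'issues' list means the
    load reconciles cleanly. Field names here are illustrative.
    """
    issues = []
    # Check 1: row counts must match.
    if len(source_rows) != len(target_rows):
        issues.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    # Check 2: a control total over a numeric measure must match.
    src_total = sum(r[measure] for r in source_rows)
    tgt_total = sum(r[measure] for r in target_rows)
    if src_total != tgt_total:
        issues.append(f"control total mismatch: {src_total} vs {tgt_total}")
    # Check 3: every source key must be present in the target.
    missing = {r[key] for r in source_rows} - {r[key] for r in target_rows}
    if missing:
        issues.append(f"keys missing in target: {sorted(missing)}")
    return {"issues": issues, "reconciled": not issues}
```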

Posted 4 hours ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: DevOps Engineer Location: Chennai (full-time, at office) Years of Experience: 4-8 years Job Summary: We are seeking a skilled DevOps engineer with knowledge of automation, continuous integration, deployment, and delivery processes. The ideal candidate should be a self-starter with hands-on production experience and excellent communication skills. Key Responsibilities:
● Infrastructure as Code: apply first principles to cloud infrastructure, system design, and application deployments.
● CI/CD pipelines: design, implement, troubleshoot, and maintain CI/CD pipelines.
● System administration: administer systems with solid networking and security fundamentals.
● Proficiency in coding: hands-on experience in programming languages, with the ability to write, review, and troubleshoot code for infrastructure.
● Monitoring and observability: track the performance and health of services and configure alerts with interactive dashboards for reporting.
● Security: follow security best practices, with familiarity with audits, compliance, and regulation.
● Communication skills: discuss and collaborate clearly and effectively across cross-functional teams.
● Documentation: document work using Agile methodologies, Jira, and Git.
Qualification:
● Education: Bachelor's degree in CS, IT, or a related field (or equivalent work experience).
● Skills*: Infrastructure: Docker, Kubernetes, ArgoCD, Helm, Chronos, GitOps. Automation: Ansible, Puppet, Chef, Salt, Terraform, OpenTofu. CI/CD: Jenkins, CircleCI, ArgoCD, GitLab, GitHub Actions. Cloud platforms: Amazon Web Services (AWS), Azure, Google Cloud. Operating Systems: Windows, *nix distributions (Fedora, Red Hat, Ubuntu, Debian), *BSD, Mac OS X. Monitoring and observability: Prometheus, Grafana, Elasticsearch, Nagios. Databases: MySQL, PostgreSQL, MongoDB, Qdrant, Redis. Programming Languages: Python, Bash, JavaScript, TypeScript, Golang. Documentation: Atlassian Jira, Confluence, Git. (* Proficient in one or more tools in each category.)
Additional Requirements: • Include GitHub or GitLab profile link in the resume. • Only candidates with a Computer Science or Information Technology engineering background will be considered. • Primary Operating System should be Linux (Ubuntu or any distribution) or macOS.
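One small pattern behind the CI/CD responsibilities above is polling a service's health endpoint with exponential backoff before routing traffic to it. A tool-agnostic sketch, with the probe and sleep injected as callables so the logic is not tied to any particular platform:

```python
def wait_until_healthy(probe, max_attempts=5, base_delay=1.0, sleep=None):
    """Poll a deployment health check with exponential backoff.

    `probe` is any callable returning True once the service is healthy;
    `sleep` is injectable so the logic can be tested without real waits.
    Returns the attempt number on which the service became healthy.
    """
    sleep = sleep or (lambda seconds: None)
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if probe():
            return attempt
        sleep(delay)
        delay *= 2  # double the wait between successive probes
    raise TimeoutError(f"service not healthy after {max_attempts} attempts")
```

In a pipeline step this would wrap an HTTP check against the freshly deployed environment, failing the stage if the service never comes up.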

Posted 4 hours ago

Apply

8.0 years

4 - 8 Lacs

Gurgaon

On-site

- 8+ years’ experience in Java/J2EE and 2+ years on any Cloud Platform; Bachelor’s in IT, CS, Math, Physics, or related field. - Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss. - Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React. - Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management. The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle. Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You’ll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. 
Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities As an experienced technology professional, you will be responsible for: - Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs - Providing technical guidance and troubleshooting support throughout project delivery - Collaborating with stakeholders to gather requirements and propose effective migration strategies - Acting as a trusted advisor to customers on industry trends and emerging technologies - Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About the team Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer). Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR). Strong communication skills, able to explain technical concepts to both technical and non-technical audiences. Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
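Since the role leans on AWS Lambda and Python, here is a minimal, hypothetical handler in the (event, context) shape that Lambda invokes, returning an API Gateway proxy-style response. The greeting logic and field names are invented purely for illustration:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda handler behind API Gateway: echoes a greeting.

    `event["body"]` arrives as a JSON string in the proxy integration;
    the return value is a proxy-style response dict.
    """
    try:
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }
    except json.JSONDecodeError:
        # Malformed request bodies map to a 400 rather than a crash.
        return {"statusCode": 400, "body": json.dumps({"error": "invalid JSON"})}
```

The same function can be exercised locally with a plain dict event, which is how unit tests for Lambda handlers are usually written.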

Posted 4 hours ago

Apply

4.0 - 6.0 years

0 Lacs

Gurgaon

On-site

Locations: Bengaluru | Gurgaon Who We Are Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation-inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures—and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive. What You'll Do As a part of BCG's X team, you will work closely with consulting teams on a diverse range of advanced analytics and engineering topics. You will have the opportunity to leverage analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domain) through providing analytical and engineering subject matter expertise. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines, systems, and solutions that empower our clients to make informed business decisions. You will collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet our clients' needs. YOU'RE GOOD AT Delivering original analysis and insights to case teams, typically owning all or part of an analytics module whilst integrating with a case team.
Design, develop, and maintain efficient and robust data pipelines for extracting, transforming, and loading data from various sources to data warehouses, data lakes, and other storage solutions. Building data-intensive solutions that are highly available, scalable, reliable, secure, and cost-effective using programming languages like Python and PySpark. Deep knowledge of Big Data querying and analysis tools, such as PySpark, Hive, Snowflake and Databricks. Broad expertise in at least one Cloud platform like AWS/GCP/Azure.* Working knowledge of automation and deployment tools such as Airflow, Jenkins, GitHub Actions, etc., as well as infrastructure-as-code technologies like Terraform and CloudFormation. Good understanding of DevOps, CI/CD pipelines, orchestration, and containerization tools like Docker and Kubernetes. Basic understanding of Machine Learning methodologies and pipelines. Communicating analytical insights through sophisticated synthesis and packaging of results (including PPT slides and charts) with consultants, collecting, synthesizing, analyzing case team learning & inputs into new best practices and methodologies. Communication Skills: Strong communication skills, enabling effective collaboration with both technical and non-technical team members. Thinking Analytically You should be strong in analytical solutioning with hands-on experience in advanced analytics delivery, through the entire life cycle of analytics. Strong analytics skills with the ability to develop and codify knowledge and provide analytical advice where required. What You'll Bring Bachelor's / Master's degree in computer science engineering/technology At least 4-6 years within relevant domain of Data Engineering across industries and work experience providing analytics solutions in a commercial setting. Consulting experience will be considered a plus.
Proficient understanding of distributed computing principles including management of Spark clusters, with all included services - various implementations of Spark preferred. Basic hands-on experience with Data Engineering tasks like productizing data pipelines, building CI/CD pipelines, and code orchestration using tools like Airflow, DevOps, etc. Good to have: Software engineering concepts and best practices, like API design and development, testing frameworks, packaging, etc. Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of web development technologies. Understanding of different stages of machine learning system design and development Who You'll Work With You will work with the case team and/or client technical POCs and the broader X team. Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
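Orchestration tools like Airflow, mentioned above, model a pipeline as a DAG of tasks and run them in dependency order. The core idea can be sketched with Python's standard-library `graphlib`; the task names below are invented for illustration:

```python
from graphlib import TopologicalSorter

# A toy pipeline DAG in the spirit of an Airflow graph: each task maps
# to the set of upstream tasks that must finish before it can run.
PIPELINE = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

def execution_order(dag):
    """Return one valid task execution order for the dependency graph."""
    return list(TopologicalSorter(dag).static_order())
```

A scheduler adds retries, parallelism, and backfills on top, but topological ordering is the invariant every run must respect.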

Posted 4 hours ago

Apply

0 years

0 Lacs

India

Remote


About Us Our leading SaaS-based Global Growth Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we’re dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere. Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated. The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work. At G-P, we assist organizations in building exceptional global teams in days, not months—streamlining the hiring, onboarding, and management process to unlock growth potential for all. About The Role As a Principal AI Engineer, you will design, develop, and deploy AI solutions that address complex business challenges. This role requires advanced expertise in artificial intelligence, including machine learning and natural language processing, and the ability to implement these technologies in production-grade systems. Key Responsibilities Develop innovative, scalable AI solutions for real business problems. Drive the full lifecycle of projects from conception to deployment, ensuring alignment with business objectives. Own highly open-ended projects end-to-end, from the analysis of business requirements to the deployment of solutions. Expect to dedicate about 20% of your time to understanding problems and collaborating with stakeholders. Manage complex data sets, design efficient data processing pipelines, and work on robust models. Expect to spend approximately 80% of your time on data and ML engineering tasks related to developing AI systems. 
Work closely with other AI engineers, product managers, and stakeholders to ensure that AI solutions meet business needs and enhance user satisfaction. Write clear, concise, and comprehensive technical documentation for all projects and systems developed. Stay updated on the latest developments in the field. Explore and prototype new technologies and approaches to address specific challenges faced by the business. Develop and maintain high-quality machine learning services. Prioritize robust engineering practices and user-centric development. Able to work independently and influence at different levels of the organization. Highly motivated and results-driven. Required Skills And Qualifications Master’s degree in Computer Science, Machine Learning, Statistics, Engineering, Mathematics, or a related field Deep understanding and practical experience in machine learning and natural language processing, especially LLMs Strong foundational knowledge in statistical modeling, probability, and linear algebra Extensive practical experience with curating datasets, training models, analyzing post-deployment data, and developing robust metrics to ensure model reliability Experience developing and maintaining machine learning services for real-world applications at scale Strong Python programming skills High standards for code craftsmanship (maintainable, testable, production-ready code) Proficiency with Docker Knowledge of system design and cloud infrastructure for secure and scalable AI solutions. Proficiency with AWS Proven track record in driving AI projects with strong technical leadership.
Excellent communication skills when engaging with both technical and non-technical stakeholders Nice To Have Qualifications Experience with natural language processing for legal applications Proficiency with Terraform React and Node.js experience If you're ready to make an impact in a high-paced startup environment, with a team that embraces innovation and hard work, G-P is the place for you. Be ready to hustle and put in the extra hours when needed to drive our mission forward. We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks. G-P. Global Made Possible. G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status. G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at careers@g-p.com.
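The requirement above to develop "robust metrics to ensure model reliability" often starts with basics like precision and recall computed over post-deployment predictions. A self-contained sketch of those two metrics:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for binary predictions, e.g. as inputs to a
    post-deployment model-quality dashboard.

    precision = TP / (TP + FP): of what we flagged, how much was right.
    recall    = TP / (TP + FN): of what was truly positive, how much we caught.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Tracking these over time (rather than once at training) is what turns them into a reliability signal for a deployed model.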

Posted 4 hours ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Role: Python + microservices Experience range: 8-10 years Location: Current location must be Bangalore NOTE: Candidates interested in the walk-in drive in Bangalore must apply Job description: Preferred Qualifications: Experience with cloud platforms is a plus. Familiarity with Python frameworks (Flask, FastAPI, Django). Understanding of DevOps practices and tools (Terraform, Jenkins). Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver). Requirements: Proven experience as a Python developer, specifically in developing microservices. Strong understanding of containerization and orchestration (Docker, Kubernetes). Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services. Familiarity with RESTful APIs and microservices architecture. Knowledge of database technologies (SQL and NoSQL) and data modelling. Proficiency in version control systems (Git). Experience with CI/CD tools and practices. Strong problem-solving skills and the ability to work independently and collaboratively. Excellent communication skills, both verbal and written.
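For the microservices focus above, the heart of a REST resource can be sketched framework-free: the same create/read/delete semantics a Flask or FastAPI service would expose as POST/GET/DELETE routes. All names and status-code pairings here are illustrative:

```python
import itertools

class UserStore:
    """In-memory stand-in for a microservice's /users resource."""

    def __init__(self):
        self._users = {}
        self._ids = itertools.count(1)

    def create(self, payload):
        """POST /users: allocate an id and store the record."""
        user_id = next(self._ids)
        self._users[user_id] = {"id": user_id, **payload}
        return 201, self._users[user_id]

    def get(self, user_id):
        """GET /users/{id}: fetch one record or report 404."""
        user = self._users.get(user_id)
        return (200, user) if user else (404, {"error": "not found"})

    def delete(self, user_id):
        """DELETE /users/{id}: remove the record if it exists."""
        if self._users.pop(user_id, None) is None:
            return 404, {"error": "not found"}
        return 204, None
```

In a real service this class would sit behind route handlers, with the dict swapped for a database client; the handler layer stays thin.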

Posted 4 hours ago

Apply

8.0 years

20 - 28 Lacs

Gurgaon

On-site

Job Title: DevOps Engineer Location: Gurgaon (Work From Office) Job Type: Full-Time Role Experience Level: 8-12 Years Job Summary: We are looking for a skilled and proactive DevOps Engineer to join our technology team. The ideal candidate will be responsible for managing the infrastructure, automating workflows, and ensuring smooth deployment and integration of code across various environments. You will work closely with developers, QA teams, and system administrators to improve CI/CD pipelines, scalability, reliability, and security. Key Responsibilities: Design, build, and maintain efficient CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Automate provisioning, deployment, monitoring, and scaling of infrastructure. Manage and monitor cloud services (AWS, Azure, GCP) and on-premises environments. Configure and manage container orchestration (Docker, Kubernetes). Implement infrastructure as code using tools like Terraform, CloudFormation, or Ansible. Ensure high availability, performance, and security of production systems. Monitor logs, metrics, and application performance; implement alerting and incident response. Collaborate with development and QA teams to streamline release processes. Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or related field. Proven experience in a DevOps or Systems Engineering role. Proficiency with Linux-based infrastructure. Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP). Strong scripting skills (Bash, Python, PowerShell, etc.). Experience with configuration management and IaC tools (e.g., Terraform, Ansible). Familiarity with containerization and orchestration tools (Docker, Kubernetes). Understanding of networking, security, DNS, load balancing, and firewalls. Preferred Qualifications: Certification in AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, ELK Stack, Datadog, etc. Exposure to Agile/Scrum methodologies.
Knowledge of security best practices in DevOps environments. Job Type: Full-time Pay: ₹2,000,000.00 - ₹2,800,000.00 per year Work Location: In person Speak with the employer +91 9319571799
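The "implement alerting and incident response" responsibility above usually reduces to rules like "alert when the error rate over a sliding window crosses a threshold". A minimal sketch of such a rule; the window size and threshold are arbitrary examples, and a real setup would express this in a tool like Prometheus rather than application code:

```python
from collections import deque

class ErrorRateAlert:
    """Fire when the error rate over a sliding window of requests
    reaches a threshold: the shape of a typical monitoring rule."""

    def __init__(self, window=10, threshold=0.5):
        # deque(maxlen=...) silently evicts the oldest sample when full.
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def record(self, is_error):
        """Record one request outcome; return True if the alert fires."""
        self.samples.append(1 if is_error else 0)
        return self.error_rate() >= self.threshold

    def error_rate(self):
        return sum(self.samples) / len(self.samples)
```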

Posted 4 hours ago

Apply

5.0 years

5 - 8 Lacs

Gurgaon

On-site

About the team: The cloud platform teams design, implement, support and improve the cloud technology stack to ensure systems security, reliability & availability. We design, deploy & maintain advanced log analysis & monitoring systems; We lead automated, agile-based release management to our 24x7 online application stack; We maintain and develop our CI/CD pipelines & deployment automation tools for our release processes. We implement IaC using Terraform as well as managing more IaaS-based infrastructure; We manage and support all PaaS platforms in use in our business! Who we are looking for: We are looking for a highly competent, reliable, self-starting IT generalist with experience in a Windows Server administration or SRE role with web application support. You must have strong infrastructure knowledge with great analysis & problem-solving skills - perhaps you've also done some scripting or automation work in a previous role or in a part-time capacity? Come talk to us! Responsibilities - Resolve complex technical issues in infrastructure, applications, platforms & back-office systems Manage and monitor Azure cloud resources, performance, security, and costs using various tools and frameworks Provide third-line support for issues from the front-line incident managers Deploy software releases to our Azure-based systems using a squad methodology Be able to clearly think through, communicate & participate in the wider ITS sessions Be part of an on-call rota as needed We are looking for someone who has: 5 years' experience supporting Azure cloud infrastructure. 3+ years' experience supporting web application technologies At least 2 years' experience using Octopus Deploy Excellent knowledge of Azure technologies and the Azure stack This role requires knowledge of all of the following: TCP/IP, DNS, DHCP, SSL, IIS, Windows Server OS High proficiency in PowerShell & Bash Strong IT admin skills, networking and troubleshooting skills.
Excellent verbal & written communication skills A can-do attitude & works with minimal oversight to high standards The ability to prioritise & work in a fast-paced, high-volume, agile environment Knowledge of Terraform Knowledge of Hyland Alfresco and HIDP Better if you have: Experience in automating and streamlining a software development lifecycle (SDLC), configuration management, etc. Experience using Google Cloud Platform Experience working in a regulated financial entity Experience working with agile methodologies such as Scrum or Kanban Insight: candidates with a minimum of 3 years' Azure experience and roughly 5 years' experience overall can be considered; Octopus Deploy and scripting experience would also be a bonus.
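A deployment pattern relevant to the Octopus Deploy and Azure release work described above is the blue/green (slot-swap) decision: promote the idle slot only if every staging health check passes. A tool-agnostic sketch with the checks injected as callables; the slot names mirror the common blue/green convention rather than any specific platform API:

```python
def promote(staging_checks, current_slot="blue"):
    """Decide which slot should serve traffic after a release.

    `staging_checks` is a list of zero-argument callables, each returning
    True when its health check on the idle (staging) slot passes. The idle
    slot is promoted only if all checks pass; otherwise traffic stays on
    the current slot and the release is effectively rolled back.
    """
    idle = "green" if current_slot == "blue" else "blue"
    if all(check() for check in staging_checks):
        return idle
    return current_slot
```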

Posted 4 hours ago

Apply

4.0 years

0 Lacs

Delhi

On-site

Job requisition ID: 80366 Date: Jun 16, 2025 Location: Delhi CEC Designation: Consultant

Job Description
Location: Gurgaon

About your role: The position is for a Java Development Specialist. The role involves development using core Java skills (OOP, Collections, Multi-Threading), SQL, Spring Core, Spring MVC, Hibernate, etc. Knowledge of working in an Agile team with DevOps principles would be an additional advantage. The role also involves intensive interaction with the business and other technology groups, so strong communication skills and the ability to work under pressure are an absolute must. The candidate is expected to display professional ethics in their approach to work and exhibit a high level of ownership within a demanding working environment.

Essential skills:
- Minimum 4 years of experience with web services and REST APIs
- Minimum 2 years of experience with any one cloud: AWS/Azure/Cloud Foundry/Heroku/GCP
- UML, design patterns, data structures, clean coding
- Experience with CI/CD, TDD, DevOps; CI/CD tools: Jenkins/UrbanCode/SonarQube/Bamboo
- AWS Lambda, Step Functions, DynamoDB, API Gateway, Cognito, S3, SNS, VPC, IAM, EC2, ECS, etc.
- Hands-on with coding and debugging; should be able to write high-quality code optimized for performance
- Good analytical and problem-solving skills; should be good with algorithms
- Spring MVC, Spring Boot, Spring Batch, Spring Security
- Git, Maven/Gradle
- Hibernate (or JPA)

Key responsibilities:
- Work on Java/PaaS applications
- Own and deliver technically sound solutions for the 'Integration Layer' product
- Work and develop on Java/FIL PaaS/AWS applications
- Interact with senior architects and other consultants to understand and review the technical solution and direction
- Communicate with business analysts to discuss various business requirements
- Proactively refactor code and solutions; be aggressive about tech-debt identification and reduction
- Develop, maintain, and troubleshoot issues, and take a leading role in the ongoing support and enhancement of the applications
- Help maintain the standards, procedures, and best practices in the team, and help the team follow them
- Prioritise pipeline requirements with stakeholders

Experience and qualification:
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university
- 4 to 6 years of total experience in application development with Java and related frameworks

Skills (nice to have):
- Spring Batch, Spring Integration
- PL/SQL, Unix
- IaC (Infrastructure as Code): Terraform/SAM/CloudFormation
- JMS, IBM MQ
- Layer7/Apigee
- Docker/Kubernetes
- Microsoft Teams development experience
- Linux basics
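The skills list above names AWS Lambda and API Gateway among the serverless essentials. Although this is a Java role, the quickest way to sketch the proxy-integration contract is a minimal Python handler (the greeting logic and `name` query parameter are illustrative assumptions; only the event/response shape follows the API Gateway proxy convention):

```python
import json

def lambda_handler(event, context):
    """Minimal API Gateway proxy-style Lambda handler.

    With proxy integration, query parameters arrive under
    'queryStringParameters' and the handler must return a dict
    containing 'statusCode' and a JSON string 'body'.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function of `(event, context)`, it can be unit-tested locally with a hand-built event dict before any deployment tooling is involved.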

Posted 4 hours ago

Apply

5.0 - 8.0 years

0 Lacs

Bhubaneshwar

On-site

Position: Senior Security Engineer (NV58FCT RM 3325)

Job Description:
- 5-8 years of experience in security engineering, preferably with a focus on cloud-based systems
- Strong understanding of cloud infrastructure (AWS/GCP/Azure), including IAM, VPC, security groups, key management, etc.
- Hands-on experience with security tools (e.g., AWS Security Hub, Azure Defender, Prisma Cloud, CrowdStrike, Burp Suite, Nessus, or equivalent)
- Familiarity with containerization and orchestration security (Docker, Kubernetes)
- Proficiency in scripting (Python, Bash, etc.) and infrastructure automation (Terraform, CloudFormation, etc.)
- In-depth knowledge of encryption, authentication, authorization, and secure communications
- Experience interfacing with clients and translating security requirements into actionable solutions

Preferred qualifications:
- Certifications such as CISSP, CISM, CCSP, OSCP, or cloud-specific certs (e.g., AWS Security Specialty)
- Experience with zero-trust architecture and DevSecOps practices
- Knowledge of secure mobile or IoT platforms is a plus

Soft skills:
- Strong communication and interpersonal skills to engage with clients and internal teams
- Analytical mindset with attention to detail and a proactive attitude toward risk mitigation
- Ability to prioritize and manage multiple tasks in a fast-paced environment
- Document architectures, processes, and procedures, ensuring clear communication across the team

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Bhubaneshwar / Noida
Experience: 5-8 Years
Notice period: 0-30 days
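The role above combines cloud security-group knowledge with Python scripting. As an illustrative sketch only, here is the kind of audit helper such a role might write: the dict shape mirrors the `IpPermissions` entries returned by the EC2 `describe_security_groups` API, but the `SENSITIVE_PORTS` choice and function names are assumptions, and real use would feed it live boto3 output rather than the stubbed data shown in the usage note.

```python
SENSITIVE_PORTS = {22, 3389}  # SSH and RDP

def open_to_world(permission: dict) -> bool:
    """True if an EC2-style IpPermissions entry exposes a sensitive port to 0.0.0.0/0."""
    ranges = [r.get("CidrIp") for r in permission.get("IpRanges", [])]
    if "0.0.0.0/0" not in ranges:
        return False
    # Protocol "-1" (all traffic) rules omit ports; treat them as the full range.
    lo = permission.get("FromPort", 0)
    hi = permission.get("ToPort", 65535)
    return any(lo <= p <= hi for p in SENSITIVE_PORTS)

def audit(groups: list[dict]) -> list[str]:
    """Return the GroupIds that have at least one world-open sensitive rule."""
    return [
        g["GroupId"]
        for g in groups
        if any(open_to_world(p) for p in g.get("IpPermissions", []))
    ]
```

Keeping the rule check as a pure function over plain dicts means it can be exercised against fixture data in tests, with the API call layered on separately.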

Posted 4 hours ago

Apply