3.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Python API Developer with AI/ML Exposure – Senior

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity
We are currently seeking a seasoned API developer with AI/ML exposure to join our team of professionals. As part of the data science team, you will collaborate with AI/ML engineers to integrate machine learning models and solutions into the solution architecture. This requires understanding the core principles of AI/ML development, enthusiasm for learning new information, and staying current with the latest trends in this sphere.

Key Responsibilities:
- Develop and maintain scalable, robust, and high-performance Python web applications and services with AI/ML models at their core
- Ensure high performance and responsiveness to front-end requests
- Cover the code with a comprehensive test suite
- Accurately estimate and plan your work
- Write clear and concise technical documentation

Skills and Qualifications Needed:
- 3+ years of experience with Python, including both sync and async programming [must have]
- Proficient in building APIs and web applications using FastAPI and Django (see the brief sketch below)
- Strong knowledge of SQL and NoSQL database systems
- Familiar with Kafka for real-time data streaming and messaging
- Hands-on experience with Docker for containerization and deployment
- Skilled in implementing background task processing with Celery
- Solid understanding of the HTTP protocol and RESTful API design principles
- Comfortable working in Linux/UNIX environments, including shell scripting and system tools
- Passionate about writing reliable code with a focus on unit and integration testing
- English proficiency: intermediate level or higher (written and spoken)
- A degree in computer science or a similar field
- Flexible, self-motivated approach with a strong commitment to problem resolution
- Excellent written and oral communication skills, with the ability to deliver complex information clearly and effectively to a range of audiences
- Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support

Preferred Skills:
- Experience with microservices architecture
- Exposure to Machine Learning (ML) and Large Language Models (LLMs)
- Familiarity with AWS Cloud services
- Experience designing and managing ETL workflows
- Working knowledge of Terraform
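As a rough illustration of the stack this role describes, here is a minimal, hedged sketch of an async FastAPI endpoint that hands model scoring to a Celery worker. The broker URL, model logic, and endpoint names are placeholders invented for the example, not details from the posting.

```python
# Minimal sketch (illustrative only): an async FastAPI endpoint that queues
# model scoring work to Celery. Broker URL, task logic, and model are placeholders.
from celery import Celery
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
celery_app = Celery("tasks", broker="redis://localhost:6379/0",
                    backend="redis://localhost:6379/1")


class ScoreRequest(BaseModel):
    features: list[float]


@celery_app.task
def score(features: list[float]) -> float:
    # Placeholder for a real ML model call (e.g. a loaded scikit-learn pipeline).
    return sum(features) / max(len(features), 1)


@app.post("/score")
async def submit_score(req: ScoreRequest):
    # Offload the heavy work to a background worker; return the task id immediately.
    task = score.delay(req.features)
    return {"task_id": task.id}


@app.get("/score/{task_id}")
async def get_score(task_id: str):
    result = celery_app.AsyncResult(task_id)
    return {"ready": result.ready(), "value": result.result if result.ready() else None}
```

Run the API with `uvicorn` and the worker with `celery -A tasks worker`; the split keeps the async endpoints responsive while long-running inference happens in the background.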
What Working At EY Offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction and consulting services, we're using the finance products, expertise and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we'll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
2.0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Summary
To truly excel and drive innovation in an ever-evolving technological landscape, every member of PwC is expected to be a purpose-driven and values-led leader at every level. Our PwC Professional framework serves as a cornerstone for professional growth, offering a clear set of expectations across all functions, geographies, and career paths. It provides transparency on the technical, leadership, and business skills required for success, both now and in the future.

As an Associate – Full Stack Developer, you will work as part of a high-performance team, helping to design, develop, and deploy cutting-edge applications that drive business transformation. You will play a pivotal role in architecting scalable solutions, building robust APIs, enhancing UI/UX experiences, and ensuring database optimization.

Core responsibilities at this level include:
- Developing and maintaining full-stack applications using ASP.NET Core APIs, React, and SQL Server with a strong focus on scalability and performance.
- Leading by example, mentoring junior developers, and ensuring high-quality code delivery through code reviews and best practices.
- Demonstrating analytical thinking and problem-solving skills to address complex technical challenges and translate business requirements into effective technical solutions.
- Utilizing modern development frameworks, tools, and methodologies to extract insights, optimize performance, and ensure application security.
- Collaborating across teams to align development with business goals, ensuring smooth integration with existing platforms.
- Building and consuming RESTful APIs and microservices, leveraging industry best practices for asynchronous processing, cloud deployment, and containerization.
- Reviewing and optimizing application performance, debugging critical issues, and implementing scalable database solutions using SQL Server stored procedures and query optimization techniques.
- Embracing agile methodologies by actively participating in sprint planning, backlog grooming, and retrospectives to drive continuous improvement.
- Using effective and structured communication to collaborate with cross-functional teams, stakeholders, and clients, ensuring seamless knowledge sharing and project alignment.
- Adhering to PwC's Code of Ethics & Business Conduct, ensuring security, compliance, and integrity in all development efforts.

By joining PwC, you will become part of a technology-driven and people-focused culture that fosters growth, learning, and innovation. If you thrive on solving complex problems and have a passion for software development, this is the place for you!

Minimum Degree Required (BQ): Bachelor's Degree
Degree Preferred: Bachelor's Degree
Required Field(s) of Study (BQ): Computer Science, Data Analytics, Accounting
Preferred Field(s) of Study:
Minimum Year(s) of Experience (BQ): US: 2 years of experience
Certification(s) Preferred:
Required Knowledge/Skills (BQ):
Preferred Knowledge/Skills:

Purpose Statement
The primary purpose of this position is to design, develop, and maintain scalable full-stack applications that meet business and client needs. This role requires expertise in both frontend and backend technologies, including ASP.NET Core APIs, React, SQL Server, and microservices architecture. The candidate will be responsible for developing robust, high-performance applications, ensuring code quality, security, and efficiency.
The Associate – Full Stack Developer will collaborate with cross-functional teams to build user-centric applications, optimize database performance, implement secure coding practices, and ensure seamless integration of software components. The role also involves mentoring junior developers, participating in code reviews, and aligning development activities with business objectives.

Essential Functions
- Develop and maintain scalable web applications using ASP.NET Core, C#, and VB.Net.
- Build interactive and responsive user interfaces (UI) using React, JavaScript, HTML, CSS, and jQuery.
- Implement RESTful APIs and web services to ensure smooth backend-to-frontend communication.
- Follow best practices in asynchronous processing (TPL) and multi-threading to optimize performance.
- Work with SQL Server; design and optimize stored procedures, tables, views, and queries.
- Ensure efficient data retrieval and processing for large-scale applications.
- Automate data imports, transformations, and ETL processes for improved efficiency.
- Design and develop microservices-based architecture for modular and scalable applications.
- Implement service-oriented architecture (SOA) principles to improve maintainability.
- Utilize Git for version control, managing repositories, and ensuring code collaboration.
- Conduct code reviews, debugging, and performance testing to maintain high-quality standards.
- Follow secure coding practices, ensuring compliance with security regulations.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
- Work closely with business analysts, UI/UX designers, and DevOps teams to align development goals.
- Communicate technical concepts effectively with non-technical stakeholders.
- Perform unit testing, integration testing, and system testing to ensure software reliability.
- Leverage CI/CD pipelines for smooth deployment and automation of software releases.
- Work with cloud services (AWS/Azure) to deploy and manage applications in a cloud-based environment.
- Stay up to date with emerging technologies, frameworks, and industry best practices.
- Identify areas of improvement in existing systems and suggest innovative solutions.
- Participate in trainings, workshops, and technology discussions to enhance skill sets.
- Ensure adherence to security best practices, including data encryption, authentication, and authorization.
- Follow HIPAA, GDPR, and other relevant privacy regulations to protect sensitive data.
- Implement role-based access control (RBAC) and secure API authentication mechanisms.
- Manage multiple ongoing projects and tasks concurrently, ensuring deadlines are met.
- Work with stakeholders to gather requirements, provide technical recommendations, and support decision-making.
- Contribute to documentation, process improvements, and internal knowledge sharing initiatives.

Qualifications
Education / Experience:
- BE/BTech, ME/MTech, MBA, MCA or related field, or an equivalent amount of related work experience is required.
- 4+ years of experience in full-stack application development using ASP.NET Core, C#, and React.js.
- Strong expertise in SQL Server, including stored procedures, indexing, query optimization, and database architecture.
- Experience in front-end technologies such as React.js, JavaScript (ES6+), HTML, CSS, and jQuery.
- Proven experience in developing RESTful APIs, microservices, and service-oriented architectures (SOA).
- Hands-on experience with Git and version control systems to manage source code and collaborate effectively.
- Familiarity with Agile methodologies (Scrum, Kanban) and DevOps practices to ensure efficient software development cycles.
- Experience with cloud services (AWS, Azure, or GCP) is a plus.

Technical Skills
- Backend Development: ASP.NET Core, C#, VB.Net, Web API development, microservices.
- Frontend Development: React.js, JavaScript, HTML5, CSS3, jQuery, Bootstrap.
- Database Management: SQL Server, query optimization, indexing, stored procedures, transactions.
- Version Control & CI/CD: GitHub, GitLab, Jenkins, Azure DevOps, CI/CD pipelines.
- Performance Optimization: asynchronous processing (TPL), multithreading, message queues.
- Security & Compliance: OAuth, JWT, API security, role-based access control (RBAC), encryption.
- Testing & Debugging: unit testing, integration testing, automated testing frameworks (e.g., NUnit, Moq).
- DevOps & Cloud Technologies: Docker, Kubernetes, Terraform, cloud deployment (AWS/Azure).

Preferred Experience
- Knowledge of healthcare and/or insurance industry applications is a plus.
- Experience in ETL processes, data integration, and workflow automation.
- Familiarity with BI tools like Power BI, Tableau, or SSRS for reporting and visualization.
- Basic understanding of accounting and financial reporting concepts is advantageous.

Soft Skills & Abilities
- Strong problem-solving skills with the ability to analyze complex technical issues and develop effective solutions.
- Self-motivated, proactive, and capable of working independently with minimal supervision.
- Ability to manage multiple projects under strict deadlines and prioritize work efficiently.
- Excellent written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Highly organized, detail-oriented, and capable of working in a collaborative team environment.
- Adaptability to new technologies and willingness to engage in continuous learning and skill development.

Additional Information
- Shift timings: flexible to work night shifts (US time zone)
- Experience level: 3-5 years
- Mode of working: work from office
- Line of Service: Advisory
- Designation: Associate
Posted 5 days ago
7.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and mentor junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools
Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor (illustrated in the sketch below)
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
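To give a flavour of the log-analysis automation this role mentions, here is a small, hedged Python sketch that runs a CloudWatch Logs Insights query with boto3. The log group name, region, and query string are placeholders invented for illustration, not details from the posting.

```python
# Illustrative sketch only: query AWS CloudWatch Logs Insights for recent errors.
# The log group, region, and query string below are made-up placeholders.
import time
from datetime import datetime, timedelta, timezone

import boto3

logs = boto3.client("logs", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

query_id = logs.start_query(
    logGroupName="/example/app",          # placeholder log group
    startTime=int(start.timestamp()),
    endTime=int(end.timestamp()),
    queryString="fields @timestamp, @message | filter @message like /ERROR/ | limit 20",
)["queryId"]

# Poll until the query finishes, then print matching log lines.
while True:
    response = logs.get_query_results(queryId=query_id)
    if response["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in response.get("results", []):
    print({field["field"]: field["value"] for field in row})
```

A script like this is the kind of building block that ends up in runbooks and automation for recurring incident triage.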
Posted 5 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
- Experience in AWS SageMaker development, pipelines, and real-time and batch transform jobs (see the sketch below)
- Expertise in AWS and Terraform/CloudFormation for IaC
- Experience with AWS networking concepts
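For context on the batch transform jobs this listing mentions, here is a minimal, hedged boto3 sketch that launches one. The job name, model name, S3 URIs, and instance type are placeholders, not details from the posting, and assume a model has already been registered in SageMaker.

```python
# Illustrative sketch only: start a SageMaker batch transform job with boto3.
# Model name, S3 URIs, and instance type are placeholders.
import boto3

sagemaker = boto3.client("sagemaker", region_name="us-east-1")

sagemaker.create_transform_job(
    TransformJobName="example-batch-scoring",        # placeholder job name
    ModelName="example-model",                       # assumes a model already registered
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/input/",
            }
        },
        "ContentType": "text/csv",
        "SplitType": "Line",
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/output/"},
    TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
)

# Check job status (simplified; production code would poll with retries and timeouts).
status = sagemaker.describe_transform_job(TransformJobName="example-batch-scoring")
print(status["TransformJobStatus"])
```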
Posted 5 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
ADP is hiring a Site Reliability Engineer. At ADP, we're building the next generation of technologies. Our mission is simple: create powerful solutions that are efficient, intuitive, beautiful, and responsive. As a Site Reliability Engineer, you are responsible for availability, performance, efficiency, change management, monitoring, emergency response, and capacity planning. You will deliver automation that makes the MNC systems and platforms more reliable and efficient, resulting in an improved client experience.

What You'll Do
- Engage in and improve the whole lifecycle of services, from inception and design through deployment, operation and refinement.
- Support services before they go live through activities such as system design consulting, developing software platforms and frameworks, capacity planning and launch reviews.
- Maintain services once they are live by measuring and monitoring availability, latency and overall system health.
- Work together with other staff engaged in similar functions as needed.
- Build software to improve DevOps, ITOps, and support processes.

Qualifications You'll Need
Education: Bachelor's degree (mandatory), preferably in Computer Science or Information Technology.
Experience:
- Overall 3+ years in DevOps.
- Experience working under a Scrum methodology.
- Ability to analyze and resolve problems through effective customer interface and communication.
- Ability to prioritize workload.
- Deep knowledge of version control.
- CI/CD implementation expertise.
- Good knowledge of cloud-native applications (AWS).
- Experience with infrastructure as code (preferably CloudFormation and/or Ansible/Terraform).
- Familiarity with programming languages like Python and PowerShell.
- Knowledge of Windows technologies, networking, and security.

A little about ADP: We are a comprehensive global provider of cloud-based human capital management (HCM) solutions that unite HR, payroll, talent, time, tax and benefits administration, and a leader in business outsourcing services, analytics, and compliance expertise. We believe our people make all the difference in cultivating a down-to-earth culture that embraces our core values, welcomes ideas, encourages innovation, and values belonging. We've received recognition for our work by many esteemed organizations; learn more at ADP Awards and Recognition.

Diversity, Equity, Inclusion & Equal Employment Opportunity at ADP: ADP is committed to an inclusive, diverse and equitable workplace, and is further committed to providing equal employment opportunities regardless of any protected characteristic, including: race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, protected veteran status or disability. Hiring decisions are based upon ADP's operating needs and applicant merit including, but not limited to, qualifications, experience, ability, availability, cooperation, and job performance.

Ethics at ADP: ADP has a long, proud history of conducting business with the highest ethical standards and full compliance with all applicable laws. We also expect our people to uphold our values with the highest level of integrity and behave in a manner that fosters an honest and respectful workplace. Click https://jobs.adp.com/life-at-adp/ to learn more about ADP's culture and our full set of values.
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a highly experienced Azure Engineer with a strong foundation in Python scripting, test-driven development (TDD) using PyTest, and end-to-end cloud automation. A key requirement for this role is hands-on experience with Zerto, specifically in the context of cloud migrations and disaster recovery planning. The ideal candidate will also be well versed in Infrastructure as Code (IaC) using Terraform and Ansible, and have deep operational knowledge of Microsoft Azure services across compute, networking, containers, and monitoring.

Roles And Responsibilities
Azure Infrastructure Engineering: Architect, deploy, and manage robust Azure environments using services including:
- Networking: VNet, Subnet, Private Endpoints, VPN Gateway, ExpressRoute, Route Tables, Azure Firewall
- Compute & Containers: Azure VMs, Azure Kubernetes Service (AKS), Azure Container Apps, Azure Container Registry (ACR)
- Platform Services: Azure Web Apps, Azure Functions, Azure Automation
- Monitoring & Logging: Azure Monitor, Application Insights
Python Automation & Testing: Develop scalable, testable Python scripts for cloud automation and integrations. Implement test-driven development (TDD) using PyTest to validate automation workflows, infrastructure logic, and monitoring pipelines (see the brief sketch below).
Infrastructure as Code (IaC): Automate infrastructure provisioning using Terraform and Ansible. Build reusable, parameterized modules aligned with best practices for repeatable, secure deployments.
Zerto Implementation & DR Strategy: Lead Zerto-based migration and disaster recovery implementations between on-premises and Azure. Optimize replication, orchestration, and failover strategies using Zerto in hybrid or multi-cloud environments.
CI/CD & DevOps Integration: Integrate IaC and automation into Git-based pipelines. Design and support efficient CI/CD workflows that promote velocity, compliance, and observability.

Mandatory Skills
- Deep hands-on expertise with Microsoft Azure cloud services
- Proficiency in Python with real-world experience in test-driven development using PyTest
- Strong experience with Zerto for cloud migration, backup, and DR orchestration
- Infrastructure automation using Terraform and Ansible
- Solid understanding of Git, version control workflows, and DevOps tooling
- Strong grasp of Azure networking, compute, and container-based architectures

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or equivalent
- Microsoft Azure certifications (e.g., AZ-104, AZ-400, AZ-305)
- Familiarity with Agile methodologies and enterprise IT operations
- Experience with cloud security, RBAC, policies, and compliance frameworks
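A brief, hedged illustration of the PyTest-style TDD this posting asks for: a tiny pure helper for building Azure resource tags plus the tests that would be written first. The function name and tagging rules are invented for the example, not taken from the role.

```python
# Illustrative sketch only: test-driven development with PyTest for a small
# automation helper. The helper and its tagging rules are made up for the example.
import pytest


def build_resource_tags(environment: str, owner: str) -> dict[str, str]:
    """Return standard tags for an Azure resource; reject unknown environments."""
    allowed = {"dev", "test", "prod"}
    if environment not in allowed:
        raise ValueError(f"environment must be one of {sorted(allowed)}")
    return {"environment": environment, "owner": owner, "managed-by": "automation"}


# Tests written first (TDD): they pin down the expected behaviour before the
# implementation is filled in.
def test_tags_include_environment_and_owner():
    tags = build_resource_tags("dev", "platform-team")
    assert tags["environment"] == "dev"
    assert tags["owner"] == "platform-team"


def test_unknown_environment_is_rejected():
    with pytest.raises(ValueError):
        build_resource_tags("staging", "platform-team")
```

Running `pytest` against a file like this is the usual red-green loop: the tests fail first, then the helper is implemented until they pass.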
Posted 5 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Company: Monocept (www.monocept.com) is an India-based software development company formed with the objective of solving complex problems and delivering engineering excellence through excellent software applications. Monocept stands for singularity and focus, represented by the number one, while the problem is represented by the variable X, which conveys that we are constantly looking to find and solve problems. The name also defines our internal culture, where every employee focuses on finding and solving for X. From our offices spread across Hyderabad, Gurugram, Mumbai and New York, we're solving some of the most complex problems for our insurance, media, and e-commerce clients. We offer an excellent open work culture and career advancement opportunities to all employees in a dynamic, growing environment.

Job Summary
We are looking for a Tech Lead/Senior Tech Lead – Java to drive the architecture, design, and development of scalable, high-performance applications. The ideal candidate will have expertise in Java, Spring Boot, Microservices, and AWS and be capable of leading a team of engineers in building enterprise-grade solutions.

Key Responsibilities
- Lead the design and development of complex, scalable, and high-performance Java applications.
- Architect and implement microservices-based solutions using Spring Boot.
- Optimize and enhance existing applications for performance, scalability, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Work closely with cross-functional teams, including Product Management, DevOps, and QA, to deliver high-quality software.
- Ensure best practices in coding, testing, security, and deployment.
- Design and implement cloud-native applications using AWS services such as EC2, Lambda, S3, RDS, API Gateway, and Kubernetes.
- Troubleshoot and resolve technical issues and system bottlenecks.
- Stay up to date with the latest technologies and drive innovation within the team.

Required Skills & Qualifications
- 8+ years of experience in Java development.
- Strong expertise in Spring Boot, Spring Cloud, and Microservices architecture.
- Hands-on experience with RESTful APIs, event-driven architecture, and messaging systems (Kafka, RabbitMQ, etc.).
- Deep understanding of database technologies such as MySQL, PostgreSQL, or NoSQL (MongoDB, DynamoDB, etc.).
- Experience with CI/CD pipelines and DevOps tools (Jenkins, Docker, Kubernetes, Terraform, etc.).
- Proficiency in AWS cloud services and infrastructure.
- Strong knowledge of security best practices, performance tuning, and monitoring.
- Excellent problem-solving skills and ability to work in an Agile environment.
- Strong communication and leadership skills.

If you're a passionate and results-driven technical leader looking to work on cutting-edge technologies, we'd love to hear from you!
Posted 5 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Who we are: Welcome to Archipelago, where we're redefining the landscape of commercial property insurance with our AI-powered data network. We believe in the power of accurate information to drive meaningful business decisions, and we offer solutions to generate that accurate information as easily and efficiently as possible. By connecting brokers, owners, and insurers, we empower our customers to navigate the complexities of the property insurance and risk management processes with confidence. Archipelago was founded in 2018 and serves many of the world's largest property brokers and their clients, representing over 500 of the world's largest and most dynamic commercial property portfolios, to improve their data and better represent their risks. Archipelago has achieved Series B funding from industry-leading investment partners including Scale, Canaan Partners, Ignition Partners, Prologis Ventures, Stone Point Capital, and Zigg Capital.

Join us at Archipelago and be part of a team dedicated to transforming the commercial property insurance industry. We're seeking individuals with a passion for innovation, a commitment to excellence, and a drive to further elevate and empower our customers. If you're ready to make a meaningful impact and be part of a dynamic, forward-thinking company, we invite you to explore our job opportunities and join us on our journey to keep data accurate and workflows seamless.

NOTE: Preferred candidates are located in Noida, India.

Who you are: Archipelago is seeking a DevOps Engineer based in India to join our growing team. We are looking for an experienced DevOps/infrastructure engineer comfortable designing, developing, and maintaining AWS cloud infrastructure for a SaaS business in the insurance tech space. As a five-year-old company we are still very actively evolving our infrastructure architecture, so the role requires a significant amount of architecture work and collaboration with the engineering team (currently about 20 people and rapidly growing).

Qualifications:
- Strong knowledge of the AWS stack, especially serverless deployments (ECS, Lambdas, EKS)
- Proficient with Terraform
- Advanced ability to craft clear and concise documentation
- Understanding of deployment orchestration using Docker, ECS, or Kubernetes
- Proficiency with shell scripting and one or more scripting/automation languages (e.g. Python, Ansible, etc.)
- Ability to design and manage CI/CD pipelines (e.g. GitHub Actions, AWS CodeBuild)
- Ability to work as part of a team but also focus and execute on your deliverables

Bonus Points:
- Experience operating React, Go, Python, and Postgres stacks
- Experience building and running applications for enterprise regulated customers

Benefits: We offer excellent benefits regardless of where you are in your career. We believe that providing our employees with the means to lead healthy, balanced lives results in the best possible work performance. *All benefits are subject to change at management's discretion.
Posted 5 days ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and mentor junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 7+ years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools
Must haves
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: AWS and Azure relevant certifications

Good to have
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 5 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Open Position
Join us as a Cloud Engineer at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as a Cloud Engineer you should have experience with:
- Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, and access management
- Deep expertise in one or more cloud platforms: AWS, Azure, GCP
- Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
- Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
- Proficiency in scripting languages (Python, Bash, PowerShell)
- Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML (see the brief sketch below)
- Strong understanding of DevOps principles applied to ML workflows

Key responsibilities may include:
- Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
- Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
- Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
- Ensure version control, repeatability, and compliance across all infrastructure components.
- Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
- Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.

- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.
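Since the role centres on MLOps pipelines and names MLflow among possible tools, here is a minimal, hedged sketch of experiment tracking with MLflow. The experiment name, parameters, and metric value are placeholders invented for illustration, not anything specified by the posting.

```python
# Illustrative sketch only: basic MLflow experiment tracking for a training run.
# Experiment name, params, and the metric value are placeholders.
import mlflow

mlflow.set_experiment("example-churn-model")  # placeholder experiment name

with mlflow.start_run(run_name="baseline"):
    # Log the hyperparameters the run was trained with.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    # ... model training would happen here ...
    validation_auc = 0.87  # placeholder result

    # Log the evaluation metric so runs can be compared in the MLflow UI.
    mlflow.log_metric("validation_auc", validation_auc)
```

Wrapping every training job in a run like this is what makes later steps, such as model registration and automated promotion, repeatable.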
Posted 5 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management.

Key Responsibilities

Infrastructure Management
- Design and manage Azure-based infrastructure for scalable and resilient applications.
- Implement and manage Azure Container Apps to support microservices-based architecture.

CI/CD Pipelines
- Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools.
- Automate deployment workflows to ensure quick and reliable application delivery.

Version Control and Collaboration
- Manage GitHub repositories, branching strategies, and pull request workflows.
- Ensure repository compliance and enforce best practices for source control.

Platform Automation
- Develop scripts and tooling to automate repetitive tasks and improve efficiency.
- Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning.

Monitoring and Optimization
- Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights.
- Analyze performance metrics and implement optimizations for cost and efficiency improvements.

Collaboration and Support
- Work closely with development, DevOps, and QA teams to streamline deployment processes.
- Troubleshoot and resolve issues in production and non-production environments.

GitHub Management
- Manage GitHub repositories, including permissions, branch policies, and pull request workflows.
- Implement GitHub Actions for automated testing, builds, and deployments.
- Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot).
- Design and implement branching strategies to support collaborative software development.
- Maintain GitHub templates for issues, pull requests, and contributing guidelines.
- Monitor repository usage, optimize workflows, and ensure scalability of GitHub services.

Operational Support
- Maintain pipeline health and resolve incidents related to deployment and infrastructure.
- Address defects, validate certificates, and ensure platform consistency.
- Resolve issues with offline services, manage private runners, and apply security patches.
- Monitor page performance using tools like Lighthouse.
- Manage server maintenance, repository infrastructure, and access control.

Pipeline Development
- Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting.
- Implement branching and versioning management strategies.
- Identify pipeline failures and develop automated recovery mechanisms.
- Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris).

Testing Integration
- Implement automated testing, feedback loops, and quality gates.
- Manage SonarQube configurations, rulesets, and runner maintenance.
- Maintain SonarQube EE deployment in Azure Container Apps.
- Configure and integrate security tools like Dependabot and Snyk with Jira.

Work Collaboration Integration
- Integrate JIRA for automatic ticket generation, story validation, and release management.
- Configure Teams for API management, channels, and chat management.
- Set up email alerting mechanisms.
- Support IFS/CR process integration.
Required Skills & Qualifications
- Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights)
- CI/CD Tools: GitHub Actions, Terraform, Bicep
- Version Control: GitHub repository management, branching strategies, pull request workflows
- Security & Compliance: GitHub Advanced Security, Dependabot, Snyk
- Automation & Scripting: Terraform, Bicep, shell scripting
- Monitoring & Performance: Azure Monitor, Lighthouse
- Testing & Quality Assurance: SonarQube, automated testing
- Collaboration Tools: JIRA, Teams, email alerting

Preferred Qualifications
- Experience in microservices architecture and containerized applications
- Strong understanding of DevOps methodologies and best practices
- Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues

Skills: devops, dependabot, azure container apps, github actions, application insights, azure, jira, pull request workflows, branching strategies, github advanced security, teams, email alerting, shell scripting, ci, github repository management, terraform, docker, cd, snyk, sonarqube, bicep, azure monitor
Posted 5 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management.

Key Responsibilities

Infrastructure Management
- Design and manage Azure-based infrastructure for scalable and resilient applications.
- Implement and manage Azure Container Apps to support microservices-based architecture.

CI/CD Pipelines
- Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools.
- Automate deployment workflows to ensure quick and reliable application delivery.

Version Control and Collaboration
- Manage GitHub repositories, branching strategies, and pull request workflows.
- Ensure repository compliance and enforce best practices for source control.

Platform Automation
- Develop scripts and tooling to automate repetitive tasks and improve efficiency.
- Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning.

Monitoring and Optimization
- Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights.
- Analyze performance metrics and implement optimizations for cost and efficiency improvements.

Collaboration and Support
- Work closely with development, DevOps, and QA teams to streamline deployment processes.
- Troubleshoot and resolve issues in production and non-production environments.

GitHub Management
- Manage GitHub repositories, including permissions, branch policies, and pull request workflows.
- Implement GitHub Actions for automated testing, builds, and deployments.
- Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot).
- Design and implement branching strategies to support collaborative software development.
- Maintain GitHub templates for issues, pull requests, and contributing guidelines.
- Monitor repository usage, optimize workflows, and ensure scalability of GitHub services.

Operational Support
- Maintain pipeline health and resolve incidents related to deployment and infrastructure.
- Address defects, validate certificates, and ensure platform consistency.
- Resolve issues with offline services, manage private runners, and apply security patches.
- Monitor page performance using tools like Lighthouse.
- Manage server maintenance, repository infrastructure, and access control.

Pipeline Development
- Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting.
- Implement branching and versioning management strategies.
- Identify pipeline failures and develop automated recovery mechanisms.
- Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris).

Testing Integration
- Implement automated testing, feedback loops, and quality gates.
- Manage SonarQube configurations, rulesets, and runner maintenance.
- Maintain SonarQube EE deployment in Azure Container Apps.
- Configure and integrate security tools like Dependabot and Snyk with Jira.

Work Collaboration Integration
- Integrate JIRA for automatic ticket generation, story validation, and release management.
- Configure Teams for API management, channels, and chat management.
- Set up email alerting mechanisms.
- Support IFS/CR process integration.
Required Skills & Qualifications
- Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights)
- CI/CD Tools: GitHub Actions, Terraform, Bicep
- Version Control: GitHub repository management, branching strategies, pull request workflows
- Security & Compliance: GitHub Advanced Security, Dependabot, Snyk
- Automation & Scripting: Terraform, Bicep, shell scripting
- Monitoring & Performance: Azure Monitor, Lighthouse
- Testing & Quality Assurance: SonarQube, automated testing
- Collaboration Tools: JIRA, Teams, email alerting

Preferred Qualifications
- Experience in microservices architecture and containerized applications
- Strong understanding of DevOps methodologies and best practices
- Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues

Skills: devops, dependabot, azure container apps, github actions, application insights, azure, jira, pull request workflows, branching strategies, github advanced security, teams, email alerting, shell scripting, ci, github repository management, terraform, docker, cd, snyk, sonarqube, bicep, azure monitor
Posted 5 days ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, the Silicon Slopes Hall of Fame (Software Company, 2022), and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. Our product isn't just desirable; it's industry essential. At Entrata, we passionately refine living experiences and uphold collective excellence.

Job Summary
Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply!

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments.
- Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security.
- Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP.
- Monitor and enhance system observability using Prometheus, Grafana, and the ELK Stack to ensure proactive issue detection and resolution.
- Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows.
- Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner.
- Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives.
- Optimize AWS cloud resource utilization and cost while maintaining high availability and performance.
- Establish and maintain disaster recovery and high-availability strategies to ensure system resilience.
- Improve incident response and on-call processes by following SRE principles and automating issue resolution.
- Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations.
- Stay up to date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies.
- Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure.
- Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle.
- Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams.

Minimum Qualifications
- 3+ years of experience as a DevOps Engineer or in a similar role.
- Strong proficiency in Kubernetes, Docker, and AWS.
- Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD).
- Solid scripting and automation skills with Bash, Python, PHP, or Ansible.
- Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack.
- Understanding of DevSecOps principles, security best practices, and vulnerability management.
- Strong problem-solving skills and ability to troubleshoot cloud infrastructure and deployment issues effectively.

Preferred Qualifications
- Experience with GitOps methodologies using ArgoCD or Flux.
- Familiarity with SRE principles and managing incident response for high-availability applications.
- Knowledge of serverless architectures and AWS cost optimization strategies.
- Hands-on experience with compliance and governance automation for cloud security.
- Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation.
- Strong communication skills and ability to mentor junior engineers on DevOps best practices.

If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we'd love to hear from you!
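As a small, hedged illustration of the observability work this posting describes, here is a sketch that exposes custom Prometheus metrics from a Python service using the prometheus_client library. The metric names, labels, and port are placeholders invented for the example, not Entrata specifics.

```python
# Illustrative sketch only: expose custom Prometheus metrics from a Python service.
# Metric names and the port are placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("example_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("example_request_latency_seconds", "Request latency in seconds")


def handle_request() -> None:
    # Simulate some work and record how long it took.
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(status="ok").inc()


if __name__ == "__main__":
    # Prometheus can now scrape metrics from http://localhost:8000/metrics.
    start_http_server(8000)
    while True:
        handle_request()
```

Metrics exposed this way feed directly into the Prometheus and Grafana dashboards mentioned in the responsibilities.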
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
OpenShift/Kubernetes administrator with 6-10 years of experience in the OpenShift Container Platform and Kubernetes (K8s) distributions such as EKS, AKS, and GKE. The role primarily focuses on administering and supporting the OpenShift Container Platform ecosystem. This includes managing container tenant provisioning, isolation, and capacity. You will work directly with infrastructure-as-code-based automation to manage the capacity of the overall platform and deliver new capacity and capabilities as necessary. A strong background in Linux administration, virtualization, networking, and security is required to fulfil this role successfully and to provide first-class level 3 support. Additionally, understanding application development lifecycles, as well as practical experience with continuous integration and continuous deployment tools as part of the container lifecycle, will be useful. Good experience with DevOps tools such as Jenkins, GitLab, and Airflow is needed, along with development of system lifecycle processes to manage and operate the infrastructure. In-depth knowledge of automation tools such as Terraform, Ansible, and StackStorm is expected, along with good knowledge of open APIs and integration with an orchestration tool (CMP).
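To illustrate the kind of tenant and capacity checks this role automates, here is a minimal, hedged sketch using the official Kubernetes Python client to list namespaces and their pod counts. It assumes a valid kubeconfig on the machine running it and is not specific to any platform named in the posting.

```python
# Illustrative sketch only: list namespaces and pod counts with the Kubernetes
# Python client. Assumes a valid kubeconfig; works against OpenShift, EKS, AKS,
# or GKE clusters alike.
from kubernetes import client, config


def pod_counts_per_namespace() -> dict[str, int]:
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    counts = {}
    for ns in v1.list_namespace().items:
        name = ns.metadata.name
        pods = v1.list_namespaced_pod(namespace=name)
        counts[name] = len(pods.items)
    return counts


if __name__ == "__main__":
    for namespace, count in sorted(pod_counts_per_namespace().items()):
        print(f"{namespace}: {count} pods")
```

A report like this is the starting point for the tenant capacity management described above; real automation would add quotas, limits, and alerting on top.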
Posted 5 days ago
16.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Design, build and maintain CI/CD pipelines for various applications and services
- Collaborate with cross-functional teams to identify and resolve infrastructure and deployment issues
- Implement and maintain cloud-based infrastructure using Azure
- Ensure the security of the infrastructure and applications by implementing security best practices and tools
- Automate infrastructure and deployment processes using tools like Ansible, Terraform, or CloudFormation
- Apply a basic, structured, standard approach to work
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- B.Tech or MCA or MSc or MTech (16+ years of formal education; correspondence courses are not relevant)
- Experience in continuous integration, delivery, and deployment (CI/CD)
- Experience in container orchestration using Kubernetes or Docker Swarm
- Experience in implementing security best practices and tools
- Solid experience in DevOps
- Solid understanding of cloud-based infrastructure using Azure

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 5 days ago
3.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Project description As a DevOps Operational Support engineer for this project, you will have the opportunity to contribute to the data management architecture of industry-leading software. You will work closely with cross-functional teams and regional experts to design, implement, and support solutions with a focus on data security and global availability to facilitate data-driven decisions for our customers. This is your chance to work on a stable, long-term project with a global client, focusing on digital transformation and change management. Why Join Us - Exciting Opportunities: Work in squads under our customer's direction, utilizing Agile methodologies and Scrum. - Innovative Application: Contribute to an application that guides and documents the sales order process, aids in market analysis, and ensures competitive pricing. - Workflow Governance: Be part of a team that integrates digital and human approvals, ensuring seamless integration with a broader ecosystem of applications. - Global Exposure: Collaborate with reputed global clients, delivering top-notch solutions. - Career Growth: Join high-caliber project teams with front-end, back-end, and database developers, offering ample opportunities to learn, grow, and advance your career. If you have strong technical skills, effective communication abilities, and a commitment to security, we want you on our team! Ready to make an impact? Apply now and be part of our journey to success! Responsibilities 1. Solve Operational Challenges: Work with global teams to find creative solutions for customers across our software catalog. 2. Customer Deployments: Plan, provision, and configure enterprise-level solutions for customers on a global scale. 3. Monitoring and Troubleshooting: Monitor customer environments to proactively identify and resolve issues while providing support for incidents. 4. Automation: Leverage and maintain automation pipelines to handle all stages of the software lifecycle. 5. Documentation: Write and maintain documentation for processes, configurations, and procedures. 6. Meet SRE & MTTR Goals: Lead the team in troubleshooting environment failures within SRE MTTR goals. 7. Collaborate and Define: Work closely with stakeholders to define project requirements and deliverables and understand their needs and challenges. 8. Implement Best Practices: Ensure the highest standards in coding and security, with a strong emphasis on protecting systems and data. 9. Strategize and Plan: Take an active role in defect triage, strategy, and architecture planning. 10. Maintain Performance: Ensure database performance and resolve development problems. 11. Deliver Quality: Translate requirements into high-quality solutions, adhering to Agile methodologies. 12. Review and Validate: Conduct detailed design reviews to ensure alignment with approved architecture. 13. Collaborate: Work with application development teams throughout development, deployment, and support phases. Skills Must have Technical Skills: Database technologies: RDBMS (Postgres preferred), NoSQL (Cassandra preferred); Software languages: Java, Python, NodeJS, Angular; Cloud Platforms: AWS; Cloud Managed Services: Messaging, Serverless Computing, Blob Storage; Provisioning: Terraform, Helm; Containerization: Docker, Kubernetes preferred; Version Control: Git. Qualification and Soft Skills: Bachelor's degree in Computer Science, Software Engineering, or a related field. Customer-driven and result-oriented focus. Excellent problem-solving and troubleshooting skills. Ability to work independently and as part of a team.
Strong communication and collaboration skills. Strong desire to stay up to date with the latest trends and technologies in the field. Nice to have: Cloud Technologies: RDS, Azure; knowledge in the E&P domain (Geology, Geophysics, Well, Seismic or Production data types); GIS experience is desirable. Other: Languages: English C2 Proficient. Seniority: Regular.
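As a hedged illustration of the monitoring responsibility in this posting (not taken from the project itself): a small Python probe that checks service health endpoints and exits non-zero on failure, so a scheduler or pipeline stage can raise an incident. The endpoint URLs and the /health convention are placeholders invented for the example.

```python
# Minimal health-probe sketch (assumption: services expose an HTTP /health
# endpoint; the URLs below are placeholders, not from the posting).
import urllib.request

ENDPOINTS = {
    "orders-api": "https://orders.example.internal/health",
    "pricing-api": "https://pricing.example.internal/health",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except OSError as exc:  # covers URLError, HTTPError, timeouts
        print(f"[ALERT] {name}: unreachable ({exc})")
        return False
    if 200 <= status < 300:
        print(f"[OK] {name}")
        return True
    print(f"[ALERT] {name}: HTTP {status}")
    return False

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    # A non-zero exit code lets a scheduler or pipeline flag the failure.
    raise SystemExit(0 if all(results) else 1)
```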
Posted 5 days ago
4.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
As a key member of our dynamic team, you will play a vital role in crafting exceptional software experiences. Your responsibilities will encompass the design and implementation of innovative features, fine-tuning and sustaining existing code for optimal performance, and guaranteeing top-notch quality through rigorous testing and debugging. Collaboration is at the heart of what we do, and you’ll be working closely with fellow developers, designers, and product managers to ensure our software aligns seamlessly with user expectations. As a Cloud Platform Developer specializing in Infrastructure as Code, you will: Design and automate deployable architectures for IBM Cloud resources using Infrastructure as Code (IaC) tools such as Terraform, Ansible, Go, HCL or similar technologies. Develop reusable automation modules and templates to enable consistent, scalable, and secure cloud deployments. Collaborate with architects and offering managers to translate solution designs into automated infrastructure blueprints. Ensure compliance and security by embedding best practices and governance policies into infrastructure code. Participate in design reviews and technical discussions, presenting infrastructure solutions and automation strategies to engineering and architecture teams. Own and drive infrastructure automation projects, adapting to varying scopes and timelines based on business needs. Write and maintain test cases (unit, integration, and functional) to validate infrastructure deployments and ensure reliability. Continuously improve deployment pipelines and contribute to the evolution of cloud platform engineering practices. Document deployable architectures and automation modules in a clear, concise, and user-friendly manner to help internal teams and consumers effectively adopt and integrate them. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 4+ years of experience in software development or engineering, with a strong foundation in data structures and algorithms. 2+ years of hands-on experience designing and developing cloud-native architectures and working with IBM Cloud or other major cloud platforms (AWS, Azure, GCP). Proven expertise in Infrastructure as Code (IaC) using tools such as Terraform, Ansible, or similar automation frameworks. 3+ years of experience in Golang or a related programming language, with a solid understanding of RESTful API design, microservices, and ORM concepts. Experience developing and maintaining REST APIs using Golang and/or Python. Strong understanding of containerization and orchestration technologies, with 2+ years of experience using Docker and Kubernetes. Proficiency in using version control systems, preferably Git. Demonstrated ability to troubleshoot, debug, and optimize infrastructure and application code. Excellent verbal and written communication skills, with the ability to document deployable architectures and automation modules clearly for internal and external consumption. Experience working in agile development environments, collaborating across cross-functional teams. Preferred technical and professional experience Experience with message queuing systems such as Kafka or RabbitMQ for building scalable, event-driven architectures. Familiarity with relational databases, preferably PostgreSQL, and caching solutions like Redis. Exposure to CI/CD pipelines and tools such as Jenkins, GitHub Actions, or Tekton for automating build and deployment workflows.
Hands-on experience with test automation frameworks to ensure infrastructure and application reliability. Proficiency in HTML, JavaScript, or other front-end technologies is a plus for working with UI-related infrastructure components. Strong background in Infrastructure as Code (IaC) using Terraform, Ansible, or similar tools. Experience deploying and managing applications in a cloud-native environment, with a focus on scalability, availability, and performance. Familiarity with cloud-native monitoring and alerting tools such as Prometheus, Grafana, or Elasticsearch for observability and operational insights.
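One possible shape for the "test cases to validate infrastructure deployments" item above, sketched with the standard Terraform CLI driven from Python. The module directory and the no-destroy policy are assumptions for the example, not an IBM Cloud requirement.

```python
# Hedged sketch: run "terraform plan", convert it to JSON, and fail the test
# if the plan would destroy any resource. Paths are hypothetical.
import json
import subprocess

WORKING_DIR = "./modules/network"  # hypothetical module path

def plan_as_json(working_dir: str) -> dict:
    subprocess.run(["terraform", "init", "-input=false"], cwd=working_dir, check=True)
    subprocess.run(
        ["terraform", "plan", "-out=plan.tfplan", "-input=false"],
        cwd=working_dir, check=True,
    )
    shown = subprocess.run(
        ["terraform", "show", "-json", "plan.tfplan"],
        cwd=working_dir, check=True, capture_output=True, text=True,
    )
    return json.loads(shown.stdout)

def test_plan_destroys_nothing() -> None:
    plan = plan_as_json(WORKING_DIR)
    destroyed = [
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if "delete" in rc["change"]["actions"]
    ]
    assert not destroyed, f"plan would destroy: {destroyed}"
```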
Posted 5 days ago
6.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
As a Senior DevOps Engineer, you will be responsible for enhancing and integrating DevOps practices into our development and operational processes. You will work collaboratively with software development, quality assurance, and IT operations teams to implement CI/CD pipelines, automate workflows, and improve the deployment processes to ensure high-quality software delivery. Key Responsibilities Design and implement CI/CD pipelines for automation of build, test, and deployment processes. Collaborate with development and operations teams to improve existing DevOps practices and workflows. Deploy and manage container orchestration platforms such as Kubernetes and Docker. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation. Participate in incident response and root cause analysis activities. Establish best practices for DevOps processes, security, and compliance. Qualifications and Experience Bachelor's degree with DevOps certification. 7+ years of experience in a DevOps or related role. Proficiency in cloud platforms such as AWS, Azure, or Google Cloud. Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI. Development (Java, Python, etc.) - Advanced; Kubernetes usage and administration - Advanced; AI - Intermediate; CI/CD development - Advanced. Strong collaboration and communication skills.
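For intuition only: the build, test, and deploy sequence a CI/CD pipeline automates, expressed as a plain Python driver. Real pipelines live in Jenkins or GitLab CI definitions; the image name, test command, and deployment name below are placeholders, not from the posting.

```python
# Conceptual pipeline sketch: each stage is a placeholder shell command; the
# pipeline stops at the first failing stage, as a CI server would.
import subprocess
import sys

STAGES = [
    ("build", ["docker", "build", "-t", "myapp:candidate", "."]),
    ("test", ["python", "-m", "pytest", "-q"]),
    ("deploy", ["kubectl", "rollout", "restart", "deployment/myapp"]),
]

def run_pipeline() -> int:
    for name, cmd in STAGES:
        print(f"--- stage: {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"stage '{name}' failed; stopping pipeline")
            return result.returncode
    print("pipeline succeeded")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```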
Posted 5 days ago
3.0 - 8.0 years
6 - 11 Lacs
Bengaluru
Work from Office
About the team - Engineering at HashiCorp, an IBM Company (HashiCorp) On the HashiCorp engineering team, we build the Infrastructure Cloud which allows enterprises to take a unified approach to Infrastructure and Security Lifecycle Management: Infrastructure Lifecycle Management: Build / Deploy / Manage. Terraform allows you to use infrastructure as code to provision and manage any infrastructure across your organization. Packer standardizes image workflows across cloud providers, allowing teams to build, govern and manage any image for any cloud. Waypoint makes infrastructure easily accessible at scale, enabling platform teams to deliver golden patterns and workflows with an internal developer platform. Nomad brings modern application scheduling to any type of software, allowing you to manage containers, binaries and VMs efficiently in the cloud, on-premises and across edge environments. Security Lifecycle Management: Protect / Inspect / Connect. Vault provides organizations with identity-based security to automatically authenticate and authorize access to secrets and other sensitive data. Boundary standardizes secure remote access across dynamic environments, allowing organizations to connect users and manage access with identity-based security controls. Consul standardizes service networking, allowing you to discover and securely connect any service across any runtime with identity-based service networking. We deliver the Infrastructure Cloud through an enterprise-grade unified SaaS platform, HCP, as well as to enterprises through self-managed/on-premises options. Across product engineering and platform engineering teams, we are looking for great engineers to come join us in developing the Infrastructure Cloud! What you’ll do (responsibilities) We’re looking for Mid-Level Engineers with a deep backend focus to join our team. In this role, you can expect to: Design, prototype and implement features and tools while ensuring stability and usability Collaborate closely with Product Design and Product Management partners, as well as engineers on your team and others Follow through on assigned tasks to build and ship medium-sized features, managing task expectations as needed. Engage in team discussions around diagnosis, planning, and workflow improvements based on product requirements. Apply independent judgment within team practices to determine appropriate actions and solutions. Address unforeseen challenges, making recommendations to keep tasks on track. Debug and resolve medium-level bugs in products or solutions to maintain quality. Review technical contributions for quality and consistency, collaborating with stakeholders to resolve issues and recommend technical or architectural changes. Suggest improvements to current processes and propose solutions to enhance the efficiency of architectural components and design. Participate in on-call rotations, pairing, and team planning to support product needs.
Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise You have at least 3+ years of experience as an engineer You have professional experience developing with modern programming languages and frameworks, and are interested in working in Golang and Ruby specifically You have experience working with distributed systems, particularly on a cloud provider such as AWS, Azure or GCP, with a focus on scalability, resilience and security. Experience in reviewing & refactoring code & making suggestions that improve the codebase and product Writing tests for more complex or edge cases Demonstrated ability to build trust and foster relationships across teams and stakeholders, with a focus on valuing diverse perspectives and proficiently managing expectations Cloud-native mindset and solid understanding of DevOps principles in a cloud environment Emerging experience in mentoring team members, helping to enhance their problem-solving, critical thinking, and planning skills. Proven decision-making abilities with an intentional, data-driven approach to solving complex technical challenges and delivering results Strong customer focus and systems-thinking mindset, with a commitment to personal accountability, self-awareness, and continuous improvement in support of high-quality outcomes Preferred technical and professional experience You have experience using HashiCorp products (Terraform, Packer, Waypoint, Nomad, Vault, Boundary, Consul). You have prior experience working in cloud platform engineering teams
Posted 5 days ago
2.0 - 5.0 years
6 - 11 Lacs
Bengaluru
Work from Office
About the team - Engineering at HashiCorp On the HashiCorp engineering team, we build the Infrastructure Cloud which allows enterprises to take a unified approach to Infrastructure and Security Lifecycle Management: Infrastructure Lifecycle Management: Build / Deploy / Manage. Terraform allows you to use infrastructure as code to provision and manage any infrastructure across your organization. Packer standardizes image workflows across cloud providers, allowing teams to build, govern and manage any image for any cloud. Waypoint makes infrastructure easily accessible at scale, enabling platform teams to deliver golden patterns and workflows with an internal developer platform. Nomad brings modern application scheduling to any type of software, allowing you to manage containers, binaries and VMs efficiently in the cloud, on-premises and across edge environments. Security Lifecycle Management: Protect / Inspect / Connect. Vault provides organizations with identity-based security to automatically authenticate and authorize access to secrets and other sensitive data. Boundary standardizes secure remote access across dynamic environments, allowing organizations to connect users and manage access with identity-based security controls. Consul standardizes service networking, allowing you to discover and securely connect any service across any runtime with identity-based service networking. We deliver the Infrastructure Cloud through an enterprise-grade unified SaaS platform, HCP, as well as to enterprises through self-managed/on-premises options. Across product engineering and platform engineering teams, we are looking for great engineers to come join us in developing the Infrastructure Cloud! What you’ll do (responsibilities) We’re looking for Senior Engineers with a deep backend focus to join our team. In this role, you can expect to: Design, prototype and implement features and tools while ensuring stability and usability Collaborate closely with Product Design and Product Management partners, as well as engineers on your team and others Act as a subject matter expert on quality development with an emphasis on Golang development Lead and execute large-scale projects, ensuring the reliable delivery of key features from design through full implementation and troubleshooting. Drive end-to-end project lifecycle, including architecture design, implementation, and issue resolution, with a focus on quality and efficiency. Evaluate project tradeoffs and propose solutions, proactively removing blockers and keeping stakeholders informed on progress, issues, and milestones. Collaborate with internal teams, customers, and external stakeholders to design solutions that align with requirements and customer needs. Advocate for strategic technical roadmap initiatives that enhance the system’s overall effectiveness across teams and the organization.
Debug and resolve complex issues to improve the quality and stability of products or solutions Review and assess code for quality, design patterns, and optimization opportunities, ensuring best practices are followed Mentor and guide software engineers, sharing technical knowledge and promoting best practices in development processes Facilitate collaborative team activities, such as code pairing and group troubleshooting, to foster a productive and cohesive team environment Support reliable production environments, including participating in an oncall rotation Strive for quality through maintainable code and comprehensive testing from development to deployment Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Typically, you should have at least 3 or more years of experience as an engineer You have professional experience developing with modern programming languages and frameworks, and are interested in working in Golang and Ruby specifically You have experience working with distributed systems, particularly on a cloud provider such as AWS, Azure or GCP, with a focus on scalability, resilience and security. Emerging ability to direct work and influence others, with a strategic approach to problem-solving and decision-making in a collaborative environment Demonstrated business acumen and customer focus, with a readiness for change and adaptability in dynamic situations Cloud-native mindset and solid understanding of DevOps principles in a cloud environment Familiarity with cloud monitoring tools to implement robust observability practices that prioritize metrics, logging and tracing for high reliability and performance. Intentional focus on stakeholder management and effective communication, fostering trust and relationship-building across diverse teams Integrated skills in critical thinking and data-driven analysis, promoting a growth mindset and continuous improvement to support high-quality outcomes Preferred technical and professional experience You have experience using HashiCorp products (Terraform, Packer, Waypoint, Nomad, Vault, Boundary, Consul). You have prior experience working in cloud platform engineering teams
Posted 5 days ago
4.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
DevOps Required Skills (must have; candidates should meet all of the standards below to qualify for this role): Strong knowledge of Python (developer), Git, Docker, Kubernetes. Desired Skills (good to have as value add to this role): Agile practice; Ansible/Terraform, YAML; CI/CD knowledge on Unix/Linux systems and Unix scripting; exposure to network stack; exposure to cloud (AWS/Azure).
Posted 5 days ago
2.0 - 7.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Design and develop innovative, company- and industry-impacting services using open source and commercial technologies at scale Designing and architecting enterprise solutions to complex problems Presenting technical solutions and designs to the engineering team Adhere to compliance requirements and secure engineering best practices Collaboration and review of technical designs with architecture and offering management Taking ownership and keen involvement in projects that vary in size and scope depending on requirements Writing and executing unit, functional, and integration test cases Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Demonstrated analytical skills and data structures/algorithms fundamentals Demonstrated verbal and written communication skills Demonstrated skills with troubleshooting, debugging, maintaining and improving existing software 2+ years of overall experience in development or engineering 2+ years of experience in cloud architecture and developing cloud-native applications 2+ years of experience with Node.js / TypeScript / Python or a related programming language 3+ years of experience with RESTful API design, microservices, ORM concepts 2+ years of experience with Docker, Kubernetes and Terraform 2+ years of experience with DevSecOps tooling 2+ years of experience with UI E2E tools and experience with accessibility Experience working with any version control system (Git preferred) Preferred technical and professional experience Experience with Node.js / TypeScript / Python Experience with CI/CD pipelines Experience with automation of security processes, ensuring efficient and seamless integration with application teams.
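As a rough sketch of the RESTful API and microservice skills listed above (framework chosen only for illustration; the posting equally allows Node.js or TypeScript): a minimal FastAPI service with a single resource. The Order model, routes, and module name are invented for the example.

```python
# Minimal RESTful microservice sketch; in-memory storage only, for illustration.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    id: int
    item: str
    quantity: int

ORDERS: dict[int, Order] = {}

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}")
def read_order(order_id: int) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with, for example: uvicorn orders_service:app --reload
```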
Posted 5 days ago
5.0 - 10.0 years
11 - 15 Lacs
Bengaluru
Work from Office
- Manage, optimize and resolve issues observed in the VPC environment through detailed debugging - Maintain accurate timelines of events and actions in our incident records, for future reference and improvement - Manage and optimize underlay network infrastructure including routing, switching, and physical connectivity in data centers - Collaborate with cloud architects to troubleshoot and resolve networking issues across the Cloud Infrastructure - Monitor network performance and proactively resolve issues using tools like Splunk, AppNeta or equivalent - Document procedures and call out anomalies when observed during run-book executions - Also improve run-books as and when issues are discovered - Participate in on-call rotations and incident response as needed Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise - Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent experience). - 5+ years of experience in enterprise networking, with a strong focus on both cloud and data center networking environments - Proficiency in overlay technologies: VXLAN, NVGRE, VPC - Strong understanding of underlay technologies: BGP, OSPF, MPLS, Ethernet, Spine-Leaf architectures - Relevant certifications (e.g., CCNP, CCIE, AWS Advanced Networking) are a plus Preferred technical and professional experience - Exposure to container networking (Kubernetes) - Familiarity with network automation tools (e.g., Ansible, Terraform, Python scripting)
Posted 5 days ago
10.0 - 15.0 years
14 - 18 Lacs
Bengaluru
Work from Office
We're seeking a results-driven and collaborative Software Development Manager to lead the design and development of the IBM Consulting Advantage Platform. As a management leader, you'll collaborate with peers and stakeholders to ensure business continuity. You'll also be responsible for building and leading an impactful team of Developers & QA engineers, focusing on software development, productivity improvements and fostering a culture of continuous learning and improvement. In this role, you will be responsible for: Lead a team of engineers to meet release dates along with committed deliverables on-time and with quality Balance priorities and work assignments across team members following agile processes to meet delivery schedules Interface with product management and offering managers to understand customer requirements and business prioritization Drive development activities, monitor progress, collaborate to align dependencies, remove blockers for team members and manage risks Develop and implement effective strategies for software development, testing, and deployment Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 10+ years of professional experience; 5+ years as team lead/manager Excellent organizational skills including attention to detail, time management, and multi-tasking skills Hands-on experience building microservices & REST APIs using Java and other related technologies Experience with front-end development programming languages and design frameworks Strong project management, organizational, problem-solving, communication, and collaboration skills Preferred technical and professional experience Hands-on experience with SpringBoot, ReactJS, NodeJS etc. Experience in working on a production SaaS application with SOC2 certification Knowledge of containerisation technologies such as Kubernetes & Docker, and CI/CD pipelines such as Tekton, ArgoCD etc.
Posted 5 days ago
3.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
In this Site Reliability Engineer role, you will work closely with the entire IBM Cloud organization to maintain and operationally improve the IBM Cloud infrastructure. You will focus on the following key responsibilities: Ability to respond promptly to production issues and alerts 24x7 Execute changes in the production environment through automation Implement and automate infrastructure solutions that support IBM Cloud products and services to reduce toil. Partner with other SRE teams and program managers to deliver mission-critical services to IBM Cloud Build new tools to improve automated resolution of production issues Monitor and respond promptly to production alerts, and execute changes in production through automation Support the compliance and security integrity of the environment Continually improve systems and processes regarding automation and monitoring. Required education Bachelor's Degree Required technical and professional expertise Excellent written and verbal communication skills. Minimum 3+ years of experience handling large production system environments Must be extremely comfortable using and navigating within a Linux environment Ability to do low-level debugging and problem analysis by examining logs and running Unix commands Must be efficient in writing and debugging scripts 3-5+ years of experience in Virtualization Technologies and Automation / Configuration Management Automation and configuration management tools/solutions: Ansible, Python, bash, Terraform, GoLang etc. (at least one) Virtualization technologies: Citrix Xen Hypervisor (preferred), KVM (also preferred), libvirt, VMware vSphere, etc. (at least one) Monitoring technologies: Zabbix, Sysdig, Grafana, Nagios, Splunk, etc. (at least one) Working knowledge of container technologies: Kubernetes, Docker, etc. Flexibility to work in shifts to handle production systems Preferred technical and professional experience Good experience in public cloud platforms and Kubernetes clusters, strong Linux skills for managing services across a microservices platform, and good SRE knowledge in Cloud Compute, Storage and Network services.
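A hedged example of the "problem analysis by examining logs" and scripting skills mentioned above: a short Python script that counts ERROR lines per component in a log file, so an on-call engineer can spot the noisiest service quickly. The log path and the "<timestamp> <LEVEL> <component>: <message>" line format are assumptions, not IBM Cloud specifics.

```python
# Toil-reduction sketch: summarize ERROR lines by component in a log file.
import re
import sys
from collections import Counter

LINE = re.compile(r"^\S+\s+(?P<level>ERROR|WARN)\s+(?P<component>[\w.-]+):")

def summarize(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = LINE.match(line)
            if match and match.group("level") == "ERROR":
                counts[match.group("component")] += 1
    return counts

if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "/var/log/app/service.log"
    for component, count in summarize(log_path).most_common(10):
        print(f"{count:6d}  {component}")
```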
Posted 5 days ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
India's major technology hubs, known for their strong tech presence, have a particularly high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
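To make the Terraform-plus-scripting combination concrete, here is a minimal, illustrative Python wrapper around the standard init, plan, and apply commands. The directory layout is an assumption, and production pipelines add state locking, review gates, and secret handling around these steps.

```python
# Illustrative only: driving the standard Terraform workflow (init -> plan ->
# apply) from Python, the kind of glue scripting mentioned above.
import subprocess

def terraform(*args: str, cwd: str = ".") -> None:
    subprocess.run(["terraform", *args], cwd=cwd, check=True)

def plan_and_apply(cwd: str = "./envs/dev") -> None:  # hypothetical layout
    terraform("init", "-input=false", cwd=cwd)
    terraform("plan", "-out=plan.tfplan", "-input=false", cwd=cwd)
    # Applying a saved plan file runs exactly what was reviewed in the plan step.
    terraform("apply", "plan.tfplan", cwd=cwd)

if __name__ == "__main__":
    plan_and_apply()
```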
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!