
17543 Terraform Jobs - Page 40

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Location: HYDERABAD OFFICE INDIA

Job Description: We're looking for a Platform Engineer to join our Data & Analytics team. We are searching for self-motivated candidates who will play a vital role in enabling self-serve usage of Databricks Unity Catalog features at P&G, at a scale of 200+ applications.

Responsibilities:
- Conducting analysis and experiments within the Databricks ecosystem.
- Implementing and maintaining data governance best practices, focusing on data security and access controls.
- Collaborating with business and semi-technical collaborators to understand requirements and develop solutions using Azure and Databricks.
- Working closely with Data Engineers to understand their technical needs related to Unity Catalog and propose effective solutions for data processing.
- Keeping up with the latest advancements in Databricks and data engineering technologies, and testing new Databricks features.
- Participating in data architecture and design discussions, sharing insights and recommendations.
- Testing architect-defined patterns and providing feedback based on implementation experiences.
- Managing Databricks Unity Catalog objects using Terraform (a sketch follows this listing).
- Building solutions in Terraform and Java (APIs) to deploy Delta Sharing and Lakehouse Federation at scale.
- Developing scalable solutions, guidelines, principles, and standard methodologies for multiple clients.

Job Qualifications:
- At least 3 years of hands-on experience working with Databricks.
- Experience implementing projects and solutions in the cloud (Azure preferred).
- Bachelor's degree or equivalent experience in Computer Science, Data Engineering, or a related field.
- Experience in Data Engineering: data ingestion, modeling, and pipeline development.
- Familiarity with data engineering best practices, including query optimization.
- Proficiency in SQL and Python.
- Experience with Terraform.
- Familiarity with Big Data/ETL processes (Apache Spark).
- Knowledge of crafting and implementing REST APIs.
- Experience with CI/CD practices and Git.
- Strong teamwork and interpersonal skills.

About Us: We produce globally recognized brands and we grow the best business leaders in the industry. With a portfolio of trusted brands as diverse as ours, it is paramount our leaders are able to lead with courage the vast array of brands, categories and functions. We serve consumers around the world with one of the strongest portfolios of trusted, quality, leadership brands, including Always®, Ariel®, Gillette®, Head & Shoulders®, Herbal Essences®, Oral-B®, Pampers®, Pantene®, Tampax® and more. Our community includes operations in approximately 70 countries worldwide. Visit http://www.pg.com to learn more. We are an equal opportunity employer and value diversity at our company. We do not discriminate against individuals on the basis of race, color, gender, age, national origin, religion, sexual orientation, gender identity or expression, marital status, citizenship, disability, HIV/AIDS status, or any other legally protected factor. "At P&G, the hiring journey is personalized every step of the way, thereby ensuring equal opportunities for all, with a strong foundation of Ethics & Corporate Responsibility guiding everything we do. All available job opportunities are posted either on our website, pgcareers.com, or on our official social media pages, for the convenience of prospective candidates, and do not require them to pay any fees towards their application."

Job Schedule: Full time
Job Number: R000134919
Job Segmentation: Experienced Professionals
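For context on what managing Unity Catalog objects with Terraform typically looks like, here is a minimal, hypothetical sketch using the databricks/databricks provider; the catalog, schema, and group names are illustrative assumptions, not details from the posting:

```
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

# Hypothetical Unity Catalog hierarchy: catalog -> schema -> grants.
resource "databricks_catalog" "analytics" {
  name    = "analytics"            # illustrative name
  comment = "Managed via Terraform"
}

resource "databricks_schema" "raw" {
  catalog_name = databricks_catalog.analytics.name
  name         = "raw"
}

# Grant read access to an assumed data-consumer group.
resource "databricks_grants" "analytics_read" {
  catalog = databricks_catalog.analytics.name
  grant {
    principal  = "data-consumers"  # assumed group name
    privileges = ["USE_CATALOG", "SELECT"]
  }
}
```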

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

GitLab is an open-core software company that develops the most comprehensive AI-powered DevSecOps Platform, used by more than 100,000 organizations. Our mission is to enable everyone to contribute to and co-create the software that powers our world. When everyone can contribute, consumers become contributors, significantly accelerating human progress. Our platform unites teams and organizations, breaking down barriers and redefining what's possible in software development. Thanks to products like Duo Enterprise and Duo Agent Platform, customers get AI benefits at every stage of the SDLC. The same principles built into our products are reflected in how our team works: we embrace AI as a core productivity multiplier, with all team members expected to incorporate AI into their daily workflows to drive efficiency, innovation, and impact. GitLab is where careers accelerate, innovation flourishes, and every voice is valued. Our high-performance culture is driven by our values and continuous knowledge exchange, enabling our team members to reach their full potential while collaborating with industry leaders to solve complex problems. Co-create the future with us as we build technology that transforms how the world develops software.

Backend Engineer, GitLab Delivery - Operate

An Overview Of This Role: As a Backend Engineer, your work within the GitLab Operate team will focus on delivering and supporting GitLab for self-managed customers. This role centers on the infrastructure, tooling, and automation that power GitLab deployments via Omnibus, GitLab Helm Charts, the GitLab Environment Toolkit (GET), and the GitLab Operator. The GitLab Operate team serves as a critical bridge between GitLab engineering and our self-managed customers, ensuring our products are easily deployable, secure, and scalable across a range of environments. You'll work on production-grade tooling and collaborate with engineering teams to ensure GitLab's features are consistently delivered and operable across supported platforms.

Some interesting links about the team and role: our primary projects, our work with the community, and our demo videos.

What You'll Do:
- Omnibus GitLab: Maintain and evolve the GitLab Omnibus package to ensure it reliably integrates all GitLab components and can be deployed in self-managed environments.
- Kubernetes Charts: Contribute to the development and maintenance of GitLab Helm Charts, enabling scalable and production-ready GitLab deployments on Kubernetes (see the sketch after this listing).
- GitLab Environment Toolkit (GET): Enhance and support the toolkit used to deploy validated GitLab reference architectures for enterprise and internal use cases.
- GitLab Operator: Support the GitLab Operator project to enable Kubernetes-native lifecycle management for GitLab deployments.
- Installation and Upgrade Experience: Ensure a consistent and reliable experience for installing, upgrading, and operating GitLab across all supported platforms.
- Security Collaboration: Partner with Security to address vulnerabilities in the deployment stack and ensure secure defaults and configurations.
- Automation & CI/CD: Build and maintain automation pipelines for validating and testing deployment tools across Omnibus, Charts, GET, and the Operator.
- Cross-Team Integration: Work closely with Distribution Engineers, SREs, Release Managers, and Development teams to ensure smooth integration of new features into our deployment methods.
- Documentation & Enablement: Create and maintain user-focused documentation that enables self-managed customers to confidently deploy and operate GitLab.
- Reliability: Ensure all supported deployment methods are well-tested and meet GitLab's standards for quality, reliability, and performance.

What You'll Bring:
- Production experience working with Kubernetes and Helm
- Professional proficiency in Ruby and Go, and strong Bash scripting skills
- Experience with Terraform and infrastructure-as-code workflows
- Practical experience working with databases, especially PostgreSQL
- Understanding of secure, scalable, and supportable deployment practices
- Experience collaborating in large codebases and across distributed teams
- Ability to write clear, user-facing documentation and implementation guides
- Experience with major cloud providers (e.g., GCP, AWS, Azure)
- Knowledge of service scaling and rollout strategies
- Knowledge of observability tools (Prometheus, Grafana, etc.)

About The Team: The Operate team is part of GitLab Delivery and focuses on delivering GitLab to self-managed users through supported and validated tooling. This includes maintaining and evolving the GitLab Omnibus package, Helm Charts, GitLab Operator, and the GitLab Environment Toolkit (GET). We partner with SRE, Release, Security, and Development teams to ensure GitLab is easily deployable, supportable, and production-ready in diverse environments. Our work directly supports GitLab's commitment to reliability and flexibility for self-managed installations, from small business through enterprise scale.

How GitLab Will Support You:
- Benefits to support your health, finances, and well-being
- All-remote, asynchronous work environment
- Flexible Paid Time Off
- Team Member Resource Groups
- Equity Compensation & Employee Stock Purchase Plan
- Growth and development budget
- Parental leave
- Home office support

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

Country Hiring Guidelines: GitLab hires new team members in countries around the world. All of our roles are remote; however, some roles may carry specific location-based eligibility requirements. Our Talent Acquisition team can help answer any questions about location after starting the recruiting process.

Privacy Policy: Please review our Recruitment Privacy Policy. Your privacy is important to us.

GitLab is proud to be an equal opportunity workplace and is an affirmative action employer. GitLab's policies and practices relating to recruitment, employment, career development and advancement, promotion, and retirement are based solely on merit, regardless of race, color, religion, ancestry, sex (including pregnancy, lactation, sexual orientation, gender identity, or gender expression), national origin, age, citizenship, marital status, mental or physical disability, genetic information (including family medical history), discharge status from the military, protected veteran status (which includes disabled veterans, recently separated veterans, active duty wartime or campaign badge veterans, and Armed Forces service medal veterans), or any other basis protected by law. GitLab will not tolerate discrimination or harassment based on any of these characteristics. See also GitLab's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know during the recruiting process.
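As a rough sketch of how the GitLab Helm chart is typically consumed, here it is driven from Terraform's Helm provider; the kubeconfig path, namespace, and domain are placeholders, and a production install involves considerably more configuration than shown:

```
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumed local kubeconfig
  }
}

# Minimal GitLab chart install; real deployments need far more values.
resource "helm_release" "gitlab" {
  name             = "gitlab"
  repository       = "https://charts.gitlab.io"
  chart            = "gitlab"
  namespace        = "gitlab"
  create_namespace = true

  set {
    name  = "global.hosts.domain"
    value = "example.com" # placeholder domain
  }
}
```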

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Gurugram

Work from Office

Creditas Solutions is looking for a DevOps Engineer to join our dynamic team and embark on a rewarding career journey.

Responsibilities:
- Manage cloud infrastructure and deployments on Azure
- Automate CI/CD pipelines for software releases
- Monitor and optimize system performance
- Ensure security and compliance in cloud environments

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description: CloudOps Engineer

Who we are: Acqueon's conversational engagement software lets customer-centric brands orchestrate campaigns and proactively engage with consumers using voice, messaging, and email channels. Acqueon leverages a rich data platform, statistical and predictive models, and intelligent workflows to let enterprises maximize the potential of every customer conversation. Acqueon is trusted by 200 clients across industries to increase sales, drive proactive service, improve collections, and develop loyalty. At our core, Acqueon is a customer-centric company with a burning desire (backed by a suite of awesome, AI-powered technology) to help businesses provide friction-free, delightful, and referral-worthy customer experiences.

Position Overview: We are seeking a highly skilled CloudOps Engineer with expertise in Amazon Web Services (AWS) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining cloud infrastructure and SaaS applications, ensuring high availability, scalability, and security. You will work collaboratively with development, operations, and security teams to automate deployment processes, optimize system performance, and drive operational excellence.

As a Cloud Engineer at Acqueon you will need to:
- Ensure the highest uptime for customers in our SaaS environment.
- Provision customer tenants and manage the SaaS platform across the staging and production environments.
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots (see the sketch after this listing).
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- 2-5 years of experience in Cloud Operations, Infrastructure Management, or DevOps Engineering.
- Deep expertise in AWS services (EC2, S3, RDS, VPC, Lambda, IAM, CloudFormation, etc.).
- Strong experience with Terraform for infrastructure provisioning and automation.
- Proficiency in scripting with Python, Bash, or PowerShell for cloud automation.
- Hands-on experience with monitoring and logging tools (AWS CloudWatch, Prometheus, Datadog, ELK Stack, etc.).
- Strong understanding of networking concepts, security best practices, IAM policies, and role-based access control (RBAC).
- Experience troubleshooting SaaS application performance, system reliability, and cloud-based service disruptions.
- Familiarity with containerization technologies (Docker, Kubernetes, AWS ECS, or EKS).
- Willingness to work in a 24/7 operational environment with rotational shifts.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
- Experience with hybrid cloud environments and on-premises-to-cloud migrations.
- Familiarity with other cloud platforms like Azure or GCP.
- Knowledge of database management (e.g., RDS, DynamoDB) and caching solutions (e.g., Redis, ElastiCache).

This is an excellent opportunity for those seeking to continue to build upon their existing skills. The right individual will be self-motivated and a creative problem solver. You should possess the ability to seek out the correct information efficiently through individual efforts and with the team. By joining the Acqueon team, you can enjoy the benefits of working for one of the industry's fastest growing and most highly respected technology companies. If you, or someone you know, would be a great fit for us, we would love to hear from you today! Use the form to apply today or submit your resume.
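As a rough illustration of the backup-and-recovery and IaC duties above, here is a minimal, hypothetical Terraform sketch that versions and encrypts an S3 bucket used for backups; the region and bucket name are invented for the example:

```
provider "aws" {
  region = "us-east-1" # assumed region
}

# Hypothetical backup bucket with versioning and encryption enabled.
resource "aws_s3_bucket" "backups" {
  bucket = "example-acme-backups" # illustrative name
}

resource "aws_s3_bucket_versioning" "backups" {
  bucket = aws_s3_bucket.backups.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "backups" {
  bucket = aws_s3_bucket.backups.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```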

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities
- Design, build, and maintain scalable software development platforms.
- Lead technical discussions with clients and internal teams.
- Mentor junior engineers through pairing and collaborative sessions.
- Embrace dynamic roles and adapt to evolving tech stacks.
- Drive Agile/Lean practices across projects.
- Document solutions and promote knowledge sharing.
- Stay updated on emerging technologies and industry trends.

Requirements
- Experience managing large-scale infrastructure systems.
- Proficiency in at least one programming language.
- Strong understanding of software delivery principles.
- Technical agility with the ability to adapt across stacks and tools.
- Excellent communication and collaboration skills.

Technical Expertise
- Experience building infrastructure or business platforms (e.g., notification services).
- Strong knowledge of Linux, cloud, and software-defined networking.
- Hands-on experience with configuration management tools like Ansible and Chef.
- Deep understanding of CI/CD practices and tools like Jenkins, GitHub Actions, and GitLab CI.
- Experience with GitOps tools like ArgoCD and FluxCD is a plus.
- Familiarity with cloud-native tools (Prometheus, OpenTelemetry, Envoy).
- Experience with container orchestration (e.g., Kubernetes, Nomad).
- Proficiency in infrastructure-as-code (Terraform, Pulumi, CloudFormation, AWS CDK).
- Exposure to at least one major cloud platform (AWS, GCP, or Azure).
- Understanding of observability and monitoring best practices.
- Knowledge of distributed systems and experience with SQL/NoSQL databases.
- Awareness of cloud security principles.

This job was posted by Akanksha Sharma from Infraspec.

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Join Inito's DevOps team, playing a crucial role in building, maintaining, and scaling our cloud infrastructure and operational excellence. This role offers a unique opportunity to contribute across development and operations, streamlining processes, enhancing system reliability, and strengthening our security posture. You will work closely with engineering, data science, and other cross-functional teams in a fast-paced, growth-oriented environment.

Responsibilities
- Assist in managing and maintaining cloud infrastructure on AWS, GCP, and on-premise compute (including bare-metal servers).
- Support and improve CI/CD pipelines, contributing to automated deployment processes.
- Contribute to automation efforts through scripting, reducing manual toil, and improving efficiency.
- Monitor system health and logs, assisting in troubleshooting and resolving operational issues (see the monitoring sketch after this listing).
- Develop a deep understanding of application behavior, including memory and disk usage patterns, database interactions, and overall resource consumption, to ensure performance and stability.
- Participate in incident response and post-mortem analysis, contributing to faster resolution and preventing recurrence.
- Support the implementation of, and adherence to, cloud security best practices (e.g., IAM, network policies).
- Assist in maintaining and evolving Infrastructure as Code (IaC) solutions.

Requirements
- Cloud Platforms: At least 2 years of hands-on experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), including core compute, storage, networking, and database services (e.g., EC2, S3, VPC, RDS, GCE, GCS, Cloud SQL).
- On-Premise Infrastructure: Setup, automation, and management.
- Operating Systems: Proficiency in Linux environments and shell scripting (Bash).
- Scripting/Programming: Foundational knowledge and practical experience with Python for automation.
- Containerization: Familiarity with Docker concepts and practical usage. Basic understanding of container orchestration concepts (e.g., Kubernetes).
- CI/CD: Understanding of Continuous Integration/Continuous Delivery principles and experience with at least one CI/CD tool (e.g., Jenkins, GitLab CI, CircleCI, GitHub Actions). Familiarity with build and release automation concepts.
- Version Control: Solid experience with Git for code management.
- Monitoring: Experience with basic monitoring and alerting tools (e.g., AWS CloudWatch, Grafana). Familiarity with log management concepts.
- Networking: Basic understanding of networking fundamentals (DNS, load balancers, VPCs).
- Infrastructure as Code (IaC): Basic understanding of Infrastructure as Code principles.

Good-to-Have Skills & Qualifications
- Cloud Platforms: Hands-on experience with both AWS and GCP.
- Hybrid & On-Premise Cloud Architectures: Hands-on experience with VMware vSphere, Oracle OCI, or any on-premises infrastructure platform.
- Infrastructure as Code (IaC): Hands-on experience with Terraform or AWS CloudFormation.
- Container Orchestration: Hands-on experience with Kubernetes (EKS, GKE).
- Databases: Familiarity with PostgreSQL and Redis administration and optimization.
- Security Practices: Exposure to security practices like SAST/SCA or familiarity with IAM best practices beyond basics. Awareness of secrets management concepts (e.g., HashiCorp Vault, AWS Secrets Manager) and vulnerability management processes.
- Observability Stacks: Experience with centralized logging (e.g., ELK Stack, Loki) or distributed tracing (e.g., Jaeger, Zipkin, Tempo).
- Serverless: Familiarity with serverless technologies (e.g., AWS Lambda, Google Cloud Functions).
- On-call/Incident Management Tools: Familiarity with on-call rotation and incident management tools (e.g., PagerDuty).
- DevOps Culture: A strong passion for automation, continuous improvement, and knowledge sharing.
- Configuration Management: Experience with tools like Ansible for automating software provisioning, configuration management, and application deployment, especially in on-premise environments.

Soft Skills
- Strong verbal and written communication skills, with an ability to collaborate effectively across technical and non-technical teams.
- Excellent problem-solving abilities and a proactive, inquisitive mindset.
- Eagerness to learn new technologies and adapt to evolving environments.
- Ability to work independently and contribute effectively as part of a cross-functional team.

This job was posted by Ronald J from Inito.
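To give a concrete flavor of the monitoring-and-alerting work described above, here is a minimal, hypothetical Terraform sketch of a CloudWatch CPU alarm; the SNS topic name, instance ID, and threshold are illustrative assumptions:

```
# Assumed SNS topic that on-call alerts are delivered to.
resource "aws_sns_topic" "alerts" {
  name = "ops-alerts" # illustrative name
}

# Alarm when average CPU on an assumed instance stays above 80%
# for two consecutive 5-minute periods.
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "high-cpu-utilization"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"
  dimensions = {
    InstanceId = "i-0123456789abcdef0" # placeholder instance ID
  }
  alarm_actions = [aws_sns_topic.alerts.arn]
}
```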

Posted 1 week ago

Apply

5.0 - 7.0 years

14 - 17 Lacs

Pune

Work from Office

Critical Skills to Possess:
- Strong experience with Azure Cloud
- Working knowledge of Kubernetes, Jenkins, and Terraform
- Working knowledge of the Packer and Flux tools
- Good exposure to Linux and Windows administration
- Strong knowledge of YAML scripts
- Good exposure to Helm
- Knowledge of setting up pipelines in Jenkins
- Experience managing CI/CD automation, and knowledge of Bitbucket/Jenkins integration
- Interpersonal communication skills, to interface with customers, peers, and management

Preferred Qualifications: Bachelor's degree in computer science or a related field (or equivalent work experience)

Roles and Responsibilities:
- Design, implement, and maintain the organization's continuous integration and delivery (CI/CD) pipelines to automate software build, test, and deployment processes.
- Collaborate with development teams to understand their requirements and provide technical guidance on building scalable and reliable infrastructure.
- Develop and maintain infrastructure as code (IaC) using tools like Ansible, Puppet, or Terraform to enable automated provisioning and configuration management (see the sketch after this listing).
- Manage and monitor cloud-based infrastructure (such as AWS, Azure, or Google Cloud) to ensure high availability, scalability, and performance of applications.
- Implement and maintain monitoring and logging systems to proactively identify and resolve performance bottlenecks and security vulnerabilities.
- Troubleshoot issues related to application deployment, performance, and reliability, working closely with development and operations teams to ensure timely resolution.
- Implement and enforce security best practices for infrastructure and applications, including access control, data encryption, and vulnerability scanning.
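As a minimal illustration of Azure-focused IaC, here is a hypothetical Terraform sketch that stores state remotely in an assumed Azure Storage account and provisions a resource group; all names and the region are placeholders:

```
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"     # assumed pre-existing group
    storage_account_name = "exampletfstate" # placeholder account
    container_name       = "tfstate"
    key                  = "platform.tfstate"
  }
}

provider "azurerm" {
  features {}
}

# Resource group that other resources (AKS, VMs, etc.) would live in.
resource "azurerm_resource_group" "platform" {
  name     = "platform-rg"   # illustrative name
  location = "Central India" # assumed region
}
```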

Posted 1 week ago

Apply

5.0 - 7.0 years

5 - 5 Lacs

Mumbai, Chennai, Gurugram

Work from Office

We are seeking a skilled Site Reliability Engineer to support the administration of Azure Kubernetes Service (AKS) clusters running critical, always-on middleware that processes thousands of transactions per second (TPS). The ideal candidate will operate with a mindset aligned to achieving 99.999% (five-nines) availability.

Key Responsibilities:
- Own and manage AKS cluster deployments, cutovers, base image updates, and daily operational tasks.
- Test and implement Infrastructure as Code (IaC) changes using best practices (see the sketch after this listing).
- Apply software engineering principles to IT operations for maintaining scalable and reliable production environments.
- Write and maintain IaC as well as automation code for: monitoring and alerting, log analysis, disaster recovery testing, incident response, and documentation-as-code.

Mandatory Skills:
- Strong experience with Terraform
- In-depth knowledge of Azure Cloud
- Proficiency in Kubernetes cluster creation and lifecycle management (deployment-only experience is not sufficient)
- Hands-on experience with CI/CD tools (GitHub Actions preferred)
- Bash and Python scripting skills

Desirable Skills:
- Exposure to Azure Databricks and Azure Data Factory
- Experience with secret management using HashiCorp Vault
- Familiarity with monitoring tools (any)

Required Skills: Azure, Kubernetes, Terraform, DevOps
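For reference, creating (not merely deploying to) an AKS cluster with Terraform typically looks something like this minimal, hypothetical sketch; the names, region, node count, and VM size are illustrative assumptions:

```
# Assumes an existing resource group named "platform-rg".
resource "azurerm_kubernetes_cluster" "middleware" {
  name                = "middleware-aks" # illustrative name
  location            = "Central India"  # assumed region
  resource_group_name = "platform-rg"
  dns_prefix          = "middleware"

  default_node_pool {
    name       = "system"
    node_count = 3
    vm_size    = "Standard_D4s_v5" # placeholder size
  }

  identity {
    type = "SystemAssigned"
  }
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.middleware.kube_config_raw
  sensitive = true
}
```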

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: AWS Architecture
Good-to-have skills: Python (Programming Language)
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. We are looking for an AWS Data Architect to lead the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services, along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions, with Python and PySpark (see the sketch after this listing).
- Work closely with data engineers, analysts, and business stakeholders to define data architecture strategy.
- Define and enforce data modeling, metadata, security, and governance best practices.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize performance, cost, and scalability of data processing workflows.

Professional & Technical Skills:
- Must-have: Proficiency in AWS Architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS Services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform
- Languages: Python, PySpark, SQL
- Big Data: Apache Spark, Hive, Delta Lake
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines
- Security & Governance: AWS Lake Formation, Glue Catalog, encryption, RBAC
- Visualization: Exposure to BI tools like QuickSight, Tableau, or Power BI is a plus

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS Architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
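By way of illustration, provisioning a PySpark-based Glue job with Terraform might look like this minimal, hypothetical sketch; the job name, IAM role ARN, and script location are placeholders:

```
# Hypothetical Glue ETL job running a PySpark script stored in S3.
resource "aws_glue_job" "daily_etl" {
  name     = "daily-etl"                               # illustrative name
  role_arn = "arn:aws:iam::123456789012:role/glue-etl" # placeholder role

  command {
    name            = "glueetl"
    script_location = "s3://example-bucket/scripts/etl.py" # placeholder path
    python_version  = "3"
  }

  glue_version      = "4.0"
  worker_type       = "G.1X"
  number_of_workers = 2
}
```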

Posted 1 week ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a passionate and skilled Full Stack Developer with strong experience in React.js, Node.js, and AWS Lambda to build a custom enterprise platform that interfaces with a suite of SDLC tools. This platform will streamline tool administration, automate provisioning and deprovisioning of access, manage licenses, and offer centralized dashboards for governance and monitoring.

Required Skills & Qualifications:
- 4-6 years of hands-on experience as a Full Stack Developer
- Proficient in React.js and component-based front-end architecture
- Strong backend experience with Node.js and RESTful API development
- Solid experience with AWS Lambda, API Gateway, DynamoDB, S3, etc. (see the sketch after this listing)
- Prior experience integrating and automating workflows for SDLC tools like JIRA, Jenkins, GitLab, Bitbucket, GitHub, SonarQube, etc.
- Understanding of OAuth2, SSO, and API key-based authentication
- Familiarity with CI/CD pipelines, microservices, and event-driven architectures
- Strong knowledge of Git and modern development practices
- Good problem-solving skills and the ability to work independently

Nice to Have:
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation)
- Experience with AWS EventBridge, Step Functions, or other serverless orchestration tools
- Knowledge of enterprise-grade authentication (LDAP, SAML, Okta)
- Familiarity with monitoring/logging tools like CloudWatch, ELK, or DataDog

Key Responsibilities:
- Design and develop intuitive front-end interfaces using React.js, ensuring seamless user experiences.
- Build robust backend services using Node.js and AWS Lambda, with integrations to external APIs (e.g., JIRA, Jenkins, GitLab, GitHub, SonarQube).
- Create secure, scalable REST APIs and event-driven services for tool license management and user access automation.
- Develop and integrate custom workflows for license allocation and de-allocation, vendor resource onboarding, and admin task automation (e.g., account creation, project configuration).
- Implement custom dashboards and reporting interfaces for usage, access, and compliance metrics.
- Collaborate with DevOps and Security teams to enforce secure API and cloud deployment practices.
- Write clean, maintainable code and participate in code reviews and design discussions.
- Troubleshoot issues and deliver fixes in a fast-paced enterprise environment.
- Document workflows, APIs, and architectural decisions.
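For orientation, deploying a Node.js Lambda with Terraform often looks like this minimal, hypothetical sketch; the function name, execution role, artifact path, and JIRA endpoint are all illustrative assumptions:

```
# Hypothetical Node.js Lambda packaged as a local zip artifact.
resource "aws_lambda_function" "license_sync" {
  function_name = "license-sync"                               # illustrative name
  role          = "arn:aws:iam::123456789012:role/lambda-exec" # placeholder role
  runtime       = "nodejs20.x"
  handler       = "index.handler"
  filename      = "build/license-sync.zip" # placeholder build artifact
  timeout       = 30

  environment {
    variables = {
      JIRA_BASE_URL = "https://example.atlassian.net" # assumed endpoint
    }
  }
}
```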

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job: We are seeking an experienced Backend Developer to take part in the development of highly scalable applications on AWS cloud-native architecture. The ideal candidate will be part of a high-performing team with a strong background in Node.js, serverless programming, and Infrastructure as Code (IaC) using Terraform. You will be responsible for translating business requirements into robust technical solutions, ensuring high-quality code, and fostering a culture of technical excellence within the team.

Job Title: Sr Technical Lead
Location: All Birlasoft locations
Educational Background: Bachelor's degree in Computer Science, Information Technology, or a related field.

Key Responsibilities:
- Lead the design, development, and implementation of highly scalable and resilient backend applications using Node.js, TypeScript, and Express.js.
- Architect and build serverless solutions on AWS, leveraging services like AWS Lambda, API Gateway, and other cloud-native technologies.
- Utilize Terraform extensively for defining, provisioning, and managing AWS infrastructure as code, ensuring repeatable and consistent deployments.
- Collaborate closely with product managers, solution architects, and other engineering teams to capture detailed requirements and translate them into actionable technical tasks.
- Identify and proactively resolve technical dependencies and roadblocks.
- Design and implement efficient data models and integrate with NoSQL databases, specifically DynamoDB, ensuring optimal performance and scalability (see the sketch after this listing).
- Implement secure authentication and authorization mechanisms, including Single Sign-On (SSO) and integration with Firebase for user management.
- Ensure adherence to security best practices, coding standards, and architectural guidelines throughout the development lifecycle.
- Use unit testing and test-driven development (TDD) methodologies to ensure code quality, reliability, and maintainability.
- Conduct code reviews, provide constructive feedback, and mentor junior and mid-level developers to elevate the team's technical capabilities.
- Contribute to the continuous improvement of our development processes, tools, and best practices.
- Stay abreast of emerging technologies and industry trends, particularly in the AWS cloud and Node.js ecosystem, and evaluate their applicability to our projects.

Required Technical Skills:
- Node.js & JavaScript: Expert-level proficiency in Node.js, JavaScript (ES6+), and TypeScript.
- Frameworks: Strong experience with Express.js for building robust APIs.
- Serverless Programming: In-depth knowledge and hands-on experience with AWS Lambda and serverless architecture. Experience designing and developing microservices architectures. Knowledge of Terraform for deployment of Lambda functions.
- AWS Cloud Native: Extensive experience designing and implementing solutions leveraging various AWS services (e.g., API Gateway, S3, SQS, SNS, CloudWatch, IAM).
- Databases: Strong integration experience with DynamoDB, including data modeling and query optimization.
- Authentication: Hands-on experience with Single Sign-On (SSO) implementation and Firebase integration.
- Testing: Solid understanding and practical experience with unit testing frameworks (e.g., Jest, Mocha) and test automation.

Desired Skills & Experience:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a closely related discipline.
- Experience with CI/CD pipelines for automated deployment of serverless applications.
- Familiarity with containerization technologies (e.g., Docker) is a plus.
- Strong understanding of security principles and best practices in cloud environments.
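As a small, hypothetical illustration of the DynamoDB-plus-Terraform work mentioned above (the table name and key schema are invented for the example):

```
# Hypothetical single-table design with a composite primary key.
resource "aws_dynamodb_table" "orders" {
  name         = "orders"          # illustrative name
  billing_mode = "PAY_PER_REQUEST" # on-demand capacity
  hash_key     = "pk"
  range_key    = "sk"

  attribute {
    name = "pk"
    type = "S"
  }

  attribute {
    name = "sk"
    type = "S"
  }
}
```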

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The ideal candidate must have:
- Strong IAM implementation experience in SailPoint.
- Strong experience with security mechanisms such as OAuth, JWT, SAML, OpenID Connect, and LDAP.
- Significant experience configuring Okta Workforce Identity Cloud, especially: authentication policies, workflows, and integration with on-premise directories (AD & LDAP) through an agent.
- Experience managing or using CI/CD pipelines to deploy new Okta configuration with GitLab CI and the Okta Terraform provider (see the sketch after this listing).
- Implementation experience in IAM using the Java frameworks Spring / Spring Boot.
- Technical skills in Java 8 and above, preferably Java 17 / 21.
- Experience with API security for REST and SOAP web services when implementing IAM in an API gateway.
- Exposure to application deployment in a cloud environment (GCP / AWS).

Nice To Have:
- Strong experience with Active Directory.
- Hands-on experience with API testing tools like Postman, SoapUI, cURL, etc.
- Knowledge of tools like Jenkins/GitLab CI, Terraform, Maven/Gradle, JIRA, and Confluence for automation and continuous deployment of IAM features.
- Understanding of cloud IAM features, preferably GCP.
- Hands-on full-stack development experience.
- Ability to debug front-end and back-end code when troubleshooting IAM implementations.
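For context, managing Okta configuration through the Okta Terraform provider looks roughly like this minimal, hypothetical sketch; the org name, app label, and redirect URI are placeholders:

```
terraform {
  required_providers {
    okta = {
      source = "okta/okta"
    }
  }
}

provider "okta" {
  org_name = "example-org" # placeholder Okta org
  base_url = "okta.com"
  # api_token supplied via the OKTA_API_TOKEN environment variable
}

# Hypothetical OIDC web application managed as code.
resource "okta_app_oauth" "portal" {
  label          = "employee-portal" # illustrative label
  type           = "web"
  grant_types    = ["authorization_code"]
  redirect_uris  = ["https://portal.example.com/callback"] # placeholder
  response_types = ["code"]
}
```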

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

iiPay is a highly successful global payroll services business, providing fully managed payroll services to a wide range of international businesses. Our service is underpinned by our market-leading global payments management system, delivering outstanding client experience and service levels. iiPay is looking for a DevOps Engineer who wants to be part of this rapidly expanding business, to bring technical expertise to the design, building, and maintenance of our core payroll service delivery platform.

Role overview: The DevOps Engineer will work within the product development team to provide application infrastructure design and guidance. The DevOps Engineer will design and maintain the infrastructure and tooling of deployment pipelines, improve monitoring and performance, and ensure that the technical infrastructure and applications provide consistent service to meet SLAs. The ideal candidate will be a continuous learner, intellectually curious, and stay up to date on new technologies, industry trends, and DevOps practices that could lead to further innovation. The DevOps Engineer will work closely with the whole development team to define the platform architecture required to support our technology objectives, and will be instrumental in ensuring that we realize the vision: to deliver an innovative, state-of-the-art, workflow-driven global payroll solution that ensures accurate and timely payroll for clients and employees and is intuitive, easy to implement, and easy to operate.

Key Objectives and Responsibilities: The successful candidate requires experience, skills, and a proven track record in the following areas:
- Extensive DevOps Experience: Proven track record in designing, implementing, and managing CI/CD pipelines, infrastructure automation, and deployment processes.
- Security Expertise: Strong understanding of security best practices, including vulnerability management, secure coding principles, access controls, and compliance standards.
- Platform Scalability & Performance: Ability to design and maintain scalable infrastructure that supports high availability and optimal performance under varying loads.
- Cross-Technology Integration: Experience working with complex web-based SaaS platforms composed of multiple interrelated components developed in Java and C#.
- Key Technical Skills: Proficiency in PowerShell scripting for automation tasks; hands-on experience with TeamCity for continuous integration; skill in using Octopus Deploy for automated deployment workflows; Nginx and general web server configuration skills.
- System Reliability & Monitoring: Implement robust monitoring, alerting systems, and incident response strategies to ensure platform stability.
- Infrastructure as Code (IaC): Proficiency with tools such as Terraform, Ansible, or CloudFormation for automated provisioning and configuration management.
- Cloud Platform Knowledge: Hands-on experience with cloud providers like AWS, Azure, or Google Cloud Platform for deploying and managing services.
- Collaboration Skills: Work closely with development teams to streamline workflows between software engineering and operations, ensuring smooth integration of new features.
- Problem Solving & Troubleshooting: Strong analytical skills to diagnose issues across distributed systems involving Java/C# components efficiently.

Qualifications: Proven industry experience in this role is essential.

What we are looking for in you: The successful applicant will ideally have experience in payroll, financial, or human capital management software development. They should have the ability to become a system expert and have experience of managing and prioritising workloads. They should have strong analytical and problem-solving skills, excellent communication abilities, both verbal and written, and possess a keen attention to detail. They will be required to work in a global environment, with clients that have an expectation of service excellence.

iiPay is an equal opportunity employer that does not tolerate discrimination on any basis. We actively encourage applications from diverse backgrounds, perspectives, and skills. We are committed to providing an environment of inclusiveness and respect where everyone can excel. Please be aware that this role cannot be offered on a contract basis and is offered only on a permanent, full-time basis.

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Gurugram

Work from Office

DevOps Engineer
Experience: 4-10 years, up to 25 LPA
Location: Gurgaon
Shift: 1-10 PM
Key Skills:
1. Terraform
2. AWS (S3, EKS)
3. Kafka
4. Scripting experience (Python)
Call / WhatsApp Anisha: 636208823 / 8522016118 / 7780152057

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you'll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you'll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.

Responsibilities
- Lead a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python; experience in Django, FastAPI, and task orchestration systems.
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.

Requirements
- 5+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
- Deep expertise in Python/Node.js, including API development, performance optimization, and testing.
- Experience with event-driven architecture and brokers such as Kafka or RabbitMQ.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.

This job was posted by Parvinder Singh from Masters India.

Posted 1 week ago

Apply

2.0 - 7.0 years

9 - 13 Lacs

Noida

Work from Office

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibility: As a FHIR Solutions Engineer, you will play a pivotal role in designing, developing, and implementing FHIR-based solutions. You will leverage your extensive experience and expertise in FHIR to enhance our clinical systems, ensuring seamless interoperability and data exchange. Day-to-day duties will include design and development of FHIR-based clinical knowledge solutions, EMR integration, performing knowledge transfer, and participation in the team's agile process.

Required Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field
- 2+ years of experience in healthcare technology with a focus on FHIR-based solutions
- 2+ years of TypeScript/JavaScript experience
- 2+ years of experience with NPM/Node.js
- 2+ years of experience in Bash shell scripting
- 2+ years of experience with GitHub/Git
- 2+ years of intermediate-level experience with Azure infrastructure and architecture
- 2+ years of experience in the healthcare industry
- 1.5+ years of FHIR experience
- 1+ years of experience with GitHub Actions/workflows
- 1.5+ years of experience with the FHIR specification and operations
- Solid understanding of clinical data standards and interoperability
- Intermediate-level proficiency with Terraform and cloud architecture
- Proven track record of successfully implementing FHIR-based solutions in large healthcare organizations
- Proven excellent problem-solving skills and attention to detail
- Proven solid communication and collaboration skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

9.0 - 14.0 years

13 - 18 Lacs

Noida

Work from Office

Primary Responsibilities:
- Design and implement cloud-based solutions across multiple cloud platforms (Azure, AWS, Google Cloud) using industry best practices
- Develop and maintain cloud architecture standards and guidelines
- Collaborate with IT teams and business partners to understand business requirements and translate them into cloud solutions
- Ensure the security, scalability, and reliability of cloud infrastructure
- Monitor and optimize cloud performance and costs
- Conduct regular cloud security assessments and audits
- Troubleshoot and resolve cloud solution operational issues
- Provide technical leadership and guidance to the cloud support team
- Stay up to date with the latest cloud technologies and trends
- Develop disaster recovery and business continuity plans for cloud infrastructure
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 9+ years of experience in cloud architecture and cloud services
- Experience with cloud security, networking, and storage
- Experience identifying and remediating security vulnerabilities in cloud infrastructure
- Solid knowledge of cloud platforms such as AWS, Azure, or Google Cloud
- Proficiency in scripting and automation (Python, PowerShell, Bash, etc.)
- Proficiency in infrastructure as code (IaC) tools such as Ansible, Terraform, or CloudFormation
- Proven ability to manage multiple priorities and projects
- Proven excellent problem-solving and analytical skills
- Solid communication and collaboration skills

Preferred Qualifications:
- Certifications such as AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect, or Google Cloud Professional Cloud Architect
- Experience with cloud and data center observability tools (Splunk, Application Insights, Azure Monitor, CloudWatch, etc.)
- Experience with Agile project management methodologies
- Experience with containerization and orchestration tools such as Docker and Kubernetes
- Experience with traditional data center and enterprise technologies
- Understanding of multi-cloud and hybrid environments
- Knowledge of FinOps principles for cloud cost optimization
- Knowledge of DevOps and DevSecOps practices and tools (GitHub Actions, etc.)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

8.0 - 13.0 years

17 - 22 Lacs

Noida

Work from Office

Primary Responsibilities:
- Conduct thorough reviews of requirements and systems analysis documents to ensure completeness
- Participate in the creation and documentation of designs, adhering to established design patterns and standards
- Independently perform coding and testing tasks, while also assisting and mentoring other engineers as needed
- Adhere to coding standards and best practices
- Address and resolve defects promptly
- Promote and write high-quality code, facilitating automation processes as required
- Collaborate with the deployment lead and peer developers to complete various project deployment activities
- Ensure proper use of source control systems
- Deliver technical artifacts for each project
- Identify opportunities to fine-tune and optimize applications
- Mentor team members on standard tools, processes, automation, and general DevOps practices
- AI Expectations: Will utilize AI tools and applications to perform tasks, make decisions, and enhance their work and overall cloud migration velocity. Expected to use AI-powered software for data analysis, coding, and overall productivity
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree in computer science or a related field
- 8+ years of experience working with JavaScript, TypeScript, Node.js, and Go
- 4+ years of experience working with Python and Terraform
- Experience working with microservices architecture
- Extensive experience with CI/CD and DevOps initiatives
- Experience working in an Agile environment
- Development experience with SDKs and APIs from at least one cloud service provider: AWS, GCP, Azure
- In-depth knowledge of professional software engineering and best practices for the full SDLC, including coding standards, source control management, build processes, and testing
- Expertise in building and maintaining platform-level services on top of Kubernetes
- Proficiency in provisioning various infrastructure services in AWS, Azure, and GCP
- Skilled in provisioning MongoDB Atlas resources using APIs
- Familiarity with the GitHub toolset, including GitHub Actions
- Proven excellent debugging and troubleshooting skills
- Proven exceptional communication and presentation skills
- Demonstrated positive attitude and self-motivation

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 20 Lacs

Bengaluru

Hybrid

Role & responsibilities

Primary skills: AWS, GCP, cloud networking, cloud security configuration
Secondary skills: DevOps (CI/CD pipeline development, containers, Kubernetes), Terraform

- 7+ years of overall experience in datacentre, cloud, and network
- 5+ years of hands-on experience in AWS and GCP cloud
- 3+ years of experience in containers, Kubernetes, and microservices
- 3+ years of experience in Terraform
- 3+ years of experience in advanced networking in public cloud
- Terraform certification preferred
- Cloud engineering or security certification preferred, e.g., AWS Certified Solutions Architect (Professional), AWS Certified Advanced Networking - Specialty, AWS Certified Security, GCP Cloud Architect, GCP Network Engineer, GCP Cloud Security Engineer, or similar
- Engage with multiple cloud and networking stakeholders and understand the requirements for a complex enterprise cloud environment
- Provide cloud and network security expertise and guidance to the cloud programs, including the Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Cloud Application Architecture subprograms
- Collaborate with enterprise architects and SMEs to deliver complete security architecture solutions
- Lead cloud network security initiatives with designs and patterns, and develop and deliver scalable, secure Terraform modules (see the sketch after this listing)
- Look for opportunities to automate network security configurations and implementations
- Monitor and optimize the patterns and modules; an understanding of classical or cloud-native design patterns is required
- Knowledge of security configuration management, container security, endpoint security, and secrets management as they are applied to cloud applications
- Knowledge of network architecture, proxy infrastructure, and programs to support network access and enablement
- Experience with multiple information security domains, such as infrastructure vulnerability, data loss prevention, end-user security, network security, internet security, identity & access management, etc.
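To illustrate what a small, security-focused Terraform module might contain, here is a minimal, hypothetical sketch of a locked-down security group; the VPC ID and CIDR ranges are placeholder assumptions:

```
# Hypothetical security group allowing HTTPS only from an internal range.
resource "aws_security_group" "internal_https" {
  name        = "internal-https"        # illustrative name
  description = "Allow HTTPS from the corporate network only"
  vpc_id      = "vpc-0123456789abcdef0" # placeholder VPC ID

  ingress {
    description = "HTTPS from corporate CIDR"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"] # assumed internal range
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```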

Posted 1 week ago

Apply

6.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Seeking a skilled Azure DevOps Engineer to join our technology team. The ideal candidate will have a strong foundation in system administration, DevOps methodologies, and IT infrastructure management, with a focus on automation, scalability, and operational excellence.

Key Responsibilities:
- Manage and maintain enterprise IT infrastructure, including servers, networks, and cloud environments.
- Design and implement DevOps pipelines for continuous integration and deployment (CI/CD).
- Automate system tasks and workflows using scripting and configuration management tools.
- Monitor system performance, troubleshoot issues, and ensure high availability and reliability.
- Collaborate with development, QA, and operations teams to streamline deployment and release processes.
- Maintain system security, compliance, and backup strategies.
- Document system configurations, operational procedures, and incident resolutions.

Required Skills & Qualifications:
- Bachelor's degree in Information Technology, Computer Science, or a related field.
- 3-7 years of experience in DevOps, IT operations, or system administration.
- Proficiency in: Linux/Windows server administration; CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps); Infrastructure as Code (e.g., Terraform, Ansible); cloud platforms (e.g., AWS, Azure, GCP); monitoring tools (e.g., Prometheus, Grafana, Nagios).
- Strong understanding of networking, security, and virtualization technologies.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Certifications in AWS, Azure, or DevOps tools.
- Experience with containerization (Docker, Kubernetes).
- Familiarity with ITIL practices and incident management systems.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance.
3. Exercises original thought and judgement, with the ability to supervise the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach the standard skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Puducherry, India

On-site

Job Description for Associate DevOps Engineer

Job Title: Associate DevOps Engineer
Company: Mydbops
Location: Pondicherry

About Us
Mydbops is a leading provider of open-source database services, with over 8 years of excellence in managing MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. We are PCI DSS and ISO certified, reflecting our commitment to security and operational excellence. At Mydbops, we believe in building strong customer relationships and delivering top-notch solutions through a passionate and skilled team.

Role Overview
We are looking for a highly motivated fresher to join our team as an Associate DevOps Engineer. This is an exciting opportunity to begin your career in DevOps while working alongside experienced professionals in cloud infrastructure, automation, and modern deployment pipelines.

Key Responsibilities
Assist in managing and monitoring cloud infrastructure (especially AWS). Learn and contribute to automation tools like Terraform and Ansible (see the starter sketch below). Support the team in deploying and maintaining CI/CD pipelines. Help automate repetitive operational tasks to enhance reliability and efficiency. Participate in writing scripts (using Shell, Python, or Go) to support automation efforts. Support containerization and orchestration using Docker and Kubernetes. Collaborate with developers and database teams to troubleshoot infrastructure issues. Stay curious and eager to learn new tools, technologies, and best practices in DevOps.

Required Skills & Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field. Good understanding of Linux operating systems (Ubuntu, CentOS, or Amazon Linux). Basic knowledge of cloud platforms (AWS preferred). Exposure to CI/CD tools like GitLab or GitHub Actions (academic/project experience is acceptable). Understanding of networking basics: TCP/IP, DNS, etc. Familiarity with at least one scripting language (Shell, Python, or Go). Eagerness to learn DevOps tools such as Terraform, Ansible, Docker, and Kubernetes. Strong problem-solving skills and attention to detail.

Preferred (Good to Have)
Internship or academic project experience in DevOps or Cloud. Basic understanding of monitoring tools (e.g., Prometheus, Grafana, ELK Stack). Certification in AWS/DevOps/Linux (not mandatory but a plus).

What We Offer
A structured training and mentorship program for career growth. A chance to work with a dynamic and innovative team on real-world projects. A collaborative and inclusive work culture. Opportunity to gain hands-on experience with the latest DevOps tools and practices.

Job Details
Job Type: Full-time opportunity
Work Time: General shift
Mode of Employment: Work from Office
Experience: Freshers
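For a taste of the Terraform basics this role would practise, a starter sketch follows, assuming AWS; the region and bucket name are made-up examples.

provider "aws" {
  region = "ap-south-1" # assumed region
}

# A versioned S3 bucket: a common first exercise in IaC.
resource "aws_s3_bucket" "backups" {
  bucket = "mydbops-demo-backups-0001" # hypothetical; S3 names are globally unique
}

resource "aws_s3_bucket_versioning" "backups" {
  bucket = aws_s3_bucket.backups.id
  versioning_configuration {
    status = "Enabled"
  }
}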

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

We are seeking a hands-on and experienced Head of Software Development, based in India, to lead our engineering team in a remote (work-from-home) setup. The ideal candidate will have strong technical expertise in full-stack development, infrastructure, and mentoring, with a solid background in managing teams and delivering high-quality software on schedule.

Responsibilities
Lead and mentor a team of developers across frontend, backend, and DevOps. Drive architecture and development decisions for the React/Next.js frontend and Node.js/Express backend. Oversee database design and performance using PostgreSQL. Manage and optimize AWS infrastructure, including ECS, EC2, RDS, S3, and other relevant services. Own the software development lifecycle from planning to release. Set and manage realistic development timelines and deliverables. Implement best practices in coding, testing, CI/CD, and security. Collaborate closely with product and design teams to translate business goals into scalable technical solutions. Conduct regular code reviews and provide guidance for improvement and growth.

Requirements
Proven experience in a technical leadership role, ideally as a lead or head of development. Strong hands-on skills in: Frontend: React, Next.js; Backend: Node.js, Express; Database: PostgreSQL; Cloud: AWS (ECS, EC2, RDS, S3, IAM, etc.). Experience with containerized applications (Docker, ECS, or EKS). Excellent understanding of system design, architecture, and DevOps practices. Strong organizational skills and the ability to create and manage development timelines. Passion for mentoring and upskilling developers. Experience with infrastructure as code (e.g., Terraform, CloudFormation; see the sketch below). Exposure to serverless architecture or microservices. Knowledge of testing frameworks and CI/CD tools (e.g., GitHub Actions, Jenkins).

This job was posted by Pa Renie from Renie.
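As a rough illustration of the AWS footprint described above, here is a hedged Terraform sketch of a small ECS/Fargate service definition; the cluster, task, and image names are hypothetical.

resource "aws_ecs_cluster" "main" {
  name = "product-api" # hypothetical cluster name
}

resource "aws_ecs_task_definition" "api" {
  family                   = "product-api"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  # In practice an execution role would be attached for ECR pulls and logging.
  container_definitions = jsonencode([{
    name         = "api"
    image        = "node:20-alpine" # placeholder image
    essential    = true
    portMappings = [{ containerPort = 3000 }]
  }])
}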

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Mumbai

Work from Office

Design, implement, and manage CI/CD pipelines using Azure DevOps Services. Automate infrastructure deployment using Infrastructure as Code (IaC) tools like ARM templates, Bicep, Terraform, or Ansible. Manage source control using Git (Azure Repos, GitHub) and establish branching and merging strategies. Administer and maintain Azure services: Virtual Machines, App Services, AKS, Azure Monitor, Azure Key Vault, Azure Storage, etc. Develop and maintain scripts for automation (PowerShell, Bash, Python). Ensure system stability and performance across both Windows Server and Linux (Ubuntu, CentOS/RHEL) environments. Work closely with development, security, and operations teams to support agile delivery. Monitor system performance, security compliance, and logs using tools like Azure Monitor, Log Analytics, and third-party tools (e.g., Splunk, Datadog). Implement security best practices, RBAC, and policy enforcement in Azure (see the sketch below). Support containerization and orchestration using Docker and Kubernetes. Perform troubleshooting of DevOps toolchains and infrastructure issues, ensuring minimal downtime.

Location: Mumbai, Lower Parel/Currey Road

Minimum 5 years of experience in DevOps engineering roles. Strong experience with Azure DevOps (Pipelines, Boards, Repos, Artifacts). Proven expertise with CI/CD automation and DevOps lifecycle management. Solid understanding of Azure IaaS and PaaS components. Experience with Infrastructure as Code: ARM templates, Bicep, Terraform, Ansible. Proficient in PowerShell and Bash scripting. Experience with Docker, Kubernetes, and container registries (e.g., ACR). Knowledge of version control systems: Git, branching models, and release management. Familiarity with monitoring, logging, and alerting tools.
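To illustrate the Key Vault and RBAC practices mentioned above, a hedged Terraform sketch follows; the vault name, location, resource group, and role assignment target are placeholders, not a definitive design.

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault" "secrets" {
  name                      = "kv-devops-demo" # hypothetical
  location                  = "Central India"
  resource_group_name       = "rg-devops-demo" # assumed to exist
  tenant_id                 = data.azurerm_client_config.current.tenant_id
  sku_name                  = "standard"
  enable_rbac_authorization = true # Azure RBAC instead of access policies
}

# Grant the deploying principal read access to secrets via RBAC.
resource "azurerm_role_assignment" "pipeline_secrets_user" {
  scope                = azurerm_key_vault.secrets.id
  role_definition_name = "Key Vault Secrets User"
  principal_id         = data.azurerm_client_config.current.object_id
}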

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities
Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation (see the sketch below). Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms. Create service catalog components compatible with automation platforms like Backstage. Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation. Architect CI/CD pipelines for automated build, test, and deployment processes. Maintain deployment automation scripts utilizing technologies such as Python or Bash. Implement generative AI techniques (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis. Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions. Develop vector stores and document sources using services like Amazon Kendra, OpenSearch, or custom solutions. Engineer data pipelines to stream real-time operational insights that support AI-driven automation. Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay. Select appropriate LLMs for specific AIOps use cases and integrate them effectively into workflows. Collaborate with cross-functional teams to design and refine automation and AI-driven processes. Research emerging tools and technologies to enhance operational efficiency and scalability.

Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting. Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation. Expertise in Python and generative AI patterns like RAG and agent-based workflows. Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI. Familiarity with search and vector services like Amazon Kendra, OpenSearch, or custom database solutions. Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming. Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments.

Nice to have
Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools. Understanding of comprehensive AIOps processes to refine cloud-based automation solutions.
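As one concrete example of the IaC building blocks named above, here is a hedged Terraform sketch of an OpenSearch domain that could back a vector store for RAG; the domain name, version, and sizing are assumptions.

resource "aws_opensearch_domain" "vectors" {
  domain_name    = "rag-vectors" # hypothetical
  engine_version = "OpenSearch_2.11"

  cluster_config {
    instance_type  = "t3.small.search" # illustrative dev-grade sizing
    instance_count = 1
  }

  ebs_options {
    ebs_enabled = true
    volume_size = 10
  }
}

A real deployment would add encryption, fine-grained access control, and VPC placement before holding any operational data.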

Posted 1 week ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You'll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud.

As a Delivery Consultant with a deep understanding of AWS products and services, you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key Job Responsibilities
As an experienced technology professional, you will be responsible for: Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs. Providing technical guidance and troubleshooting support throughout project delivery. Collaborating with stakeholders to gather requirements and propose effective migration strategies. Acting as a trusted advisor to customers on industry trends and emerging technologies. Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts.

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
8+ years' experience in Java/J2EE and 2+ years on any cloud platform; Bachelor's in IT, CS, Math, Physics, or a related field. Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss. Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React. Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

Preferred Qualifications
AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer). Strong scripting and automation skills (Terraform, Python; see the sketch below) and knowledge of security/compliance standards (HIPAA, GDPR). Strong communication skills, able to explain technical concepts to both technical and non-technical audiences. Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - AWS ProServe IN - Maharashtra
Job ID: A3009382
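To ground the Terraform and serverless skills referenced above, a minimal sketch of an AWS Lambda function provisioned with Terraform; the function name, runtime, and artifact path are hypothetical.

resource "aws_iam_role" "lambda_exec" {
  name = "demo-lambda-exec" # hypothetical role name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "api" {
  function_name = "demo-api" # hypothetical
  role          = aws_iam_role.lambda_exec.arn
  runtime       = "python3.12"
  handler       = "app.handler"   # assumes app.py exposing handler()
  filename      = "build/app.zip" # assumed pre-built artifact
}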

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies