
219 CloudFormation Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Job Title: Azure Presales Engineer

About the Role: As a Cloud Presales Engineer specializing in Azure, you will play a critical role in our sales process, working closely with sales and technical teams to provide expert guidance and solutions for our clients. Leveraging your in-depth knowledge of Azure services, you will understand customer needs, design tailored cloud solutions, and drive the adoption of our cloud offerings. This position requires strong technical acumen, excellent communication skills, and a passion for cloud technologies.

Key Responsibilities:
- Solution Design and Architecture: Understand customer requirements and design effective cloud solutions using Azure services. Create architecture diagrams and detailed proposals tailored to customer needs. Collaborate with sales teams to define the scope of technical solutions and present them to customers.
- Technical Expertise and Consultation: Act as a subject matter expert on AWS and Azure services, including EC2, S3, Lambda, RDS, VPC, IAM, CloudFormation, Azure Virtual Machines, Blob Storage, Functions, SQL Database, Virtual Network, Azure Active Directory, and ARM templates. Provide technical support during the sales process, including product demonstrations, proofs of concept (POCs), and answering customer queries. Advise customers on best practices for cloud adoption, migration, and optimization.
- Customer Engagement: Build and maintain strong relationships with customers, understanding their business challenges and technical needs. Conduct workshops, webinars, and training sessions to educate customers on Azure solutions and services. Gather customer feedback and insights to help shape product and service offerings.
- Sales Support: Partner with sales teams to develop sales strategies and drive cloud adoption. Prepare and deliver compelling presentations, demonstrations, and product pitches. Assist in the preparation of RFPs, RFQs, and other customer documentation.
- Continuous Learning and Development: Stay up to date with the latest AWS and Azure services, technologies, and industry trends. Achieve and maintain relevant AWS and Azure certifications. Share knowledge and best practices with internal teams to enhance overall capabilities.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience in a presales or technical consulting role, with a focus on cloud solutions.
- In-depth knowledge of AWS and Azure services, with hands-on experience designing and implementing cloud-based architectures.
- Azure certifications (e.g., Microsoft Certified: Azure Solutions Architect Expert) are highly preferred.
- Strong understanding of cloud computing concepts, including IaaS, PaaS, SaaS, and hybrid cloud models.
- Excellent presentation, communication, and interpersonal skills.
- Ability to work independently and collaboratively in a fast-paced, dynamic environment.

Preferred Qualifications:
- Experience with other cloud platforms (e.g., Google Cloud) is a plus.
- Familiarity with DevOps practices, CI/CD pipelines, and infrastructure as code (IaC) using Terraform, CloudFormation, and ARM templates.
- Experience with cloud security, compliance, and governance best practices.
- Background in software development, scripting, or system administration.

Join us to be part of an innovative team, shaping cloud solutions and driving digital transformation for our clients! (ref:hirist.tech)
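For context on the CloudFormation templates this listing names: a template is just a declarative document with a `Resources` section mapping logical IDs to typed resources. A minimal sketch, built and rendered in plain Python (the logical ID and bucket name are illustrative placeholders, not part of any real deployment):

```python
import json

# Minimal CloudFormation template body: a single S3 bucket.
# "AWS::S3::Bucket" is the standard resource type; the logical ID
# "ExampleBucket" and the bucket name are hypothetical examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: one S3 bucket",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-presales-demo-bucket"},
        }
    },
    "Outputs": {
        # Fn::GetAtt resolves a resource attribute (here, the bucket ARN).
        "BucketArn": {"Value": {"Fn::GetAtt": ["ExampleBucket", "Arn"]}}
    },
}

body = json.dumps(template, indent=2)
print("AWS::S3::Bucket" in body)  # → True
```

The same structure scales to any resource type; tools like `aws cloudformation deploy` consume the rendered JSON (or its YAML equivalent) directly.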

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Location: Pune (Hybrid)
Experience: 5+ years

Key Responsibilities:
- Data Pipeline Architecture: Build and optimize large-scale data ingestion pipelines from multiple sources.
- Scalability & Performance: Ensure low-latency, high-throughput data processing for real-time and batch workloads.
- Cloud Infrastructure: Design and implement cost-effective, scalable data storage solutions.
- Automation & Monitoring: Implement observability tools for pipeline health, error handling, and performance tracking.
- Security & Compliance: Ensure data encryption, access control, and regulatory compliance in the data platform.

Ideal Candidate Profile:
- Strong experience in Snowflake, dbt, and AWS for large-scale data processing.
- Expertise in Python, Airflow, and Spark for orchestrating pipelines.
- Deep understanding of data architecture principles for real-time and batch workloads.
- Hands-on experience with Kafka, Kinesis, or similar streaming technologies.
- Ability to work on cloud cost optimization and infrastructure as code (Terraform, CloudFormation).

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

We are looking for individuals who are risk-takers, collaborators, inspired, and inspirational. We seek those who are courageous enough to work on the cutting edge and develop solutions that will enhance and enrich the lives of people globally. If you aspire to make a difference that wows the world, we are eager to have a conversation with you. If you believe this role aligns with your ambitions and skill set, we invite you to begin the application process. Explore our other available positions as well, as our numerous opportunities can pave the way for endless possibilities.

With 4 to 8 years of experience, the ideal candidate should possess the following primary skills:
- Proficiency in server-side (Java) development and the AWS serverless framework; hands-on experience with the serverless framework is a must.
- Design knowledge and experience in cloud-based web applications; familiarity with software design representation tools like Astah, Visio, etc.
- Strong experience in AWS, including but not limited to EC2 volumes, EC2 security groups, EC2 AMIs, Lambda, S3, AWS Backup, CloudWatch, CloudFormation, CloudTrail, IAM, Secrets Manager, Step Functions, Cost Explorer, KMS, and VPCs/subnets.
- Ability to understand business requirements concerning UI/UX.
- Work experience on development/staging/production servers.
- Proficiency in testing and verification; knowledge of SSL certificates and encryption.
- Familiarity with Docker containerization.

In addition to technical skills, soft skills are also crucial, including:
- Excellent interpersonal, oral, and written communication skills.
- Strong analytical and problem-solving abilities.
- Capability to comprehend and analyze customer requirements and expectations.
- Experience in interacting with customers.
- Previous work with international, cross-culture teams is a plus.

Secondary skills include:
- Scripting using Python.
- Knowledge of identity management is advantageous.
- Understanding of UI/UX, ReactJS/TypeScript/Bootstrap.
- Proficiency in business use cases concerning UI/UX.
- Troubleshooting issues related to integration on the cloud (front-end/back-end/system/services APIs).
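As a flavor of the serverless development this role involves, the unit of deployment is typically a Lambda handler. A minimal sketch in the API Gateway proxy style (Python runtime shown for brevity; the greeting logic and parameter names are illustrative only):

```python
import json

def lambda_handler(event, context):
    """Minimal API-Gateway-proxy-style handler: JSON greeting response.

    `event` follows the API Gateway proxy format; only the
    "queryStringParameters" field is read here.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test: invoke the handler directly, no AWS needed.
resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"])  # → 200
```

Invoking handlers locally like this is a common first verification step before wiring them to API Gateway in a staging environment.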

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a DevOps Engineer / Site Reliability Engineer (SRE) at our client's new healthcare company, you will play a crucial role in ensuring the reliability, scalability, and performance of infrastructure and applications. Your responsibilities will include designing and implementing CI/CD pipelines, managing cloud infrastructure, building monitoring and alerting systems, collaborating with development teams, and ensuring system security and compliance.

You will have the opportunity to work with dynamic teams to pioneer game-changing innovations at the intersection of health, material, and data science. By leveraging your expertise in cloud platforms, containerization tools, Infrastructure as Code (IaC), CI/CD tools, monitoring and logging tools, scripting languages, networking, and security protocols, you will contribute to the betterment of patients' lives and the optimization of healthcare professionals' workflows.

To succeed in this role, you should possess a Bachelor's degree or higher in Computer Science, Engineering, or a related field, along with at least 3 years of experience in a DevOps or Site Reliability Engineering role in a cloud environment. Strong proficiency in cloud platforms such as AWS, Azure, and Google Cloud; containerization tools such as Docker and Kubernetes; IaC tools like Terraform, CloudFormation, or Ansible; CI/CD tools; version control systems; monitoring and logging tools; and scripting languages is essential. Additionally, familiarity with Agile methodologies, DevSecOps practices, automated testing, problem-solving skills for troubleshooting complex production issues, and cost optimization practices in cloud environments will further enhance your success in this role. By providing technical guidance to team members and stakeholders, you will contribute to the continuous improvement of system reliability, automation, and scalability.

In summary, as a DevOps Engineer / Site Reliability Engineer (SRE) at our client's new healthcare company, you will have the opportunity to make a significant impact by ensuring the seamless operation of critical healthcare infrastructure and applications through innovative solutions and best practices.

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Armada is an edge computing startup that specializes in providing computing infrastructure to remote areas with limited connectivity and cloud infrastructure, and in processing data locally for real-time analytics and AI at the edge. Armada is dedicated to bridging the digital divide by rapidly deploying advanced technology infrastructure. As they continue to grow, they are seeking talented individuals to join them in achieving their mission.

As a DevOps Lead at Armada, you will play a crucial role in integrating AI-driven operations into the company's DevOps practices. Your responsibilities will include leading a DevOps team, designing scalable systems, and implementing intelligent monitoring, alerting, and self-healing infrastructure. The role requires a strategic mindset and hands-on experience with a focus on Ops AI. This position is based at the Armada office in Trivandrum, Kerala.

As the DevOps Lead, you will lead the DevOps strategy with a strong emphasis on AI-enabled operational efficiency. You will architect and implement CI/CD pipelines integrated with machine learning models and analytics, and develop and manage infrastructure as code using tools like Terraform, Ansible, or CloudFormation. Collaboration is key in this role: you will work closely with data scientists, developers, and operations teams to deploy and manage AI-powered applications. You will also be responsible for enhancing system observability through intelligent dashboards and real-time metrics analysis, and for mentoring DevOps engineers and promoting best practices in automation, security, and performance.

To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and at least 7 years of DevOps experience with a minimum of 2 years in a leadership role. Proficiency in cloud infrastructure management and automation is essential, along with experience in AIOps platforms and tools. Strong scripting abilities, familiarity with CI/CD tools, and expertise in containerization and orchestration are also required. Preferred qualifications include knowledge of MLOps, experience with serverless architectures, and certification in cloud platforms. Demonstrable experience in building and integrating software and hardware for autonomous or robotic systems is a plus. Strong analytical skills, time-management abilities, and effective communication are highly valued for this role.

In return, Armada offers a competitive base salary along with equity options for India-based candidates. If you are a proactive individual with a growth mindset, strong problem-solving skills, and the ability to thrive in a fast-paced environment, you may be a great fit for this position at Armada. Join the team and contribute to the success and growth of the company while working collaboratively towards achieving common goals.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Chennai, Bengaluru

Hybrid

Company Overview: Fujitsu Research of India Private Limited (FRIPL) is a cutting-edge research center established by Fujitsu in India, focusing on AI, machine learning, and quantum software development. Located in India, a hub of rapid IT growth, FRIPL collaborates with top Indian research institutes such as the Indian Institute of Technology Hyderabad (IIT Hyderabad) and the Indian Institute of Science (IISc). Their joint mission is to address global challenges through innovative technologies and contribute to sustainability. The Network Group within FRIPL has established itself as a trusted partner in the telecommunications industry, driving advancements in networking technology and delivering cutting-edge solutions that address the evolving needs of customers worldwide. The Network Group is dedicated to empowering organizations to build and manage advanced networks that drive digital innovation, enhance competitiveness, and deliver exceptional user experiences. Through its commitment to excellence, innovation, and customer success, the group continues to shape the future of networking and drive positive outcomes for its customers and partners.

Job Title: DevOps Engineer

Summary/Objective: We are seeking a highly skilled DevOps Engineer to join our team and play a pivotal role in building and maintaining our robust and efficient infrastructure. You will be part of the Innovation Group, which is responsible for automating and streamlining our development and deployment processes, ensuring high-quality software delivery, and optimizing our infrastructure for performance and scalability.

Responsibilities:
- Design, implement, and maintain continuous integration and continuous delivery (CI/CD) pipelines using industry-standard tools and technologies.
- Develop and manage infrastructure as code (IaC) solutions using tools like Terraform, Ansible, or CloudFormation.
- Collaborate with development teams to integrate CI/CD pipelines into their workflows and ensure smooth integration.
- Implement and manage containerization solutions using Docker and Kubernetes for efficient application deployment and scaling.
- Monitor and analyze system performance, identify bottlenecks, and implement solutions to optimize infrastructure efficiency.
- Develop and maintain automation scripts for infrastructure management, deployment, and monitoring tasks.
- Implement and manage security best practices within the DevOps ecosystem to ensure data security and compliance.
- Stay abreast of emerging technologies and trends in the DevOps space and identify opportunities for improvement.
- Mentor and guide junior DevOps engineers, fostering knowledge sharing and team growth.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3 to 8 years of experience in a DevOps role, with a proven track record of successfully implementing CI/CD pipelines and managing complex infrastructure.
- Strong understanding of DevOps principles and methodologies, including Agile development practices.
- Expertise in Jenkins and Groovy pipelines.
- Expertise in Python, Bash, and REST API development.
- Experience with containerization technologies like Docker and Kubernetes.
- Proficiency in infrastructure as code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Experience with build automation tools like Apache Maven and Ant.
- Experience with cloud platforms like AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with a focus on automation and efficiency.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Bonus Points:
- Experience with data analytics and AI tools and technologies.
- Experience with embedded software development.
- Certifications in relevant DevOps technologies.

Posted 4 weeks ago

Apply

10.0 - 15.0 years

35 - 40 Lacs

Noida

Work from Office

Job Summary: We are seeking a highly skilled and experienced DevOps Architect / Senior DevOps Engineer with 10+ years of expertise in designing, implementing, and managing robust DevOps ecosystems across AWS, Azure, and GCP. The ideal candidate will possess a deep understanding of cloud infrastructure, automation, CI/CD pipelines, container orchestration, and infrastructure as code. This role is both strategic and hands-on, driving innovation, scalability, and operational excellence in cloud-native environments.

Key Responsibilities:
- Architect and manage DevOps solutions across multi-cloud platforms (AWS, Azure, GCP).
- Build and optimize CI/CD pipelines and release management processes.
- Define and enforce cloud-native best practices for scalability, reliability, and security.
- Design and implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, CloudFormation, or ARM templates.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Implement monitoring, logging, and alerting frameworks (e.g., ELK, Prometheus, Grafana, CloudWatch).
- Drive automation initiatives and eliminate manual processes across environments.
- Collaborate with development, QA, and operations teams to integrate DevOps culture and workflows.
- Lead cloud migration and modernization projects.
- Ensure compliance, cost optimization, and governance across environments.

Required Skills & Qualifications:
- 10+ years of experience in DevOps / Cloud / Infrastructure / SRE roles.
- Strong expertise in at least two major cloud platforms (AWS, Azure, GCP) with working knowledge of the third.
- Advanced knowledge of Docker, Kubernetes, and container orchestration.
- Deep understanding of CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps, Argo CD).
- Hands-on experience with IaC tools: Terraform, Ansible, Pulumi, etc.
- Proficiency in scripting languages like Python, Shell, or Go.
- Strong background in networking, cloud security, and cost optimization.
- Experience with DevSecOps and integrating security into DevOps practices.
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- Relevant certifications preferred (e.g., AWS DevOps Engineer, Azure DevOps Expert, Google Professional DevOps Engineer).

Preferred Skills:
- Multi-cloud or hybrid cloud experience.
- Exposure to service mesh, API gateways, and serverless architectures.
- Familiarity with GitOps, policy-as-code, and site reliability engineering (SRE) principles.
- Experience in high availability, disaster recovery, and compliance (SOC 2, ISO, etc.).
- Agile/Scrum or SAFe experience in enterprise environments.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune

Work from Office

Role Overview: Synechron is looking for a proactive and experienced SRE DevOps Engineer to join our infrastructure and operations team in Pune. This role is critical to maintaining system reliability, scalability, and deployment efficiency in a modern cloud-based environment.

Key Responsibilities:
- Design, implement, and manage robust CI/CD pipelines
- Ensure high availability and performance of infrastructure and services
- Automate infrastructure provisioning, monitoring, and recovery
- Collaborate with development and QA teams to optimize deployment workflows
- Monitor system performance and proactively identify areas for improvement
- Implement SRE best practices, including SLAs, SLOs, and error budgets
- Ensure secure and scalable infrastructure across multi-cloud environments

Preferred Qualifications:
- Experience with Docker, Kubernetes, Jenkins, GitLab CI/CD
- Hands-on experience with observability tools like Prometheus, Grafana, and the ELK/EFK stack
- Strong understanding of networking, security, and cloud-native architecture
- Certifications in AWS/GCP/DevOps (preferred but not mandatory)

Education: Bachelor's or Master's degree in Computer Science, IT, or a related field.

Mandatory Skills:
- Site Reliability Engineering (SRE) practices
- DevOps tools & methodologies
- Cloud platforms: AWS and/or GCP
- CI/CD pipeline setup and management
- Proficiency in any coding/scripting language (Python, Shell, Bash, etc.)
- Infrastructure as Code (IaC): Terraform, CloudFormation (preferred)
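For readers unfamiliar with the error budgets mentioned in listings like this one, the arithmetic is simple: an availability SLO of, say, 99.9% over a 30-day window leaves 0.1% of that window as the budget for downtime. A minimal sketch (the SLO target and window here are example values, not tied to any particular employer):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given SLO target.

    slo_target is a fraction, e.g. 0.999 for a "three nines" SLO.
    """
    window_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * window_minutes

# A 99.9% monthly SLO allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))  # → 43.2
```

SRE teams track spend against this budget: when it is exhausted, releases slow down until reliability recovers.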

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Role Overview: Synechron is seeking an experienced and motivated SRE DevOps Engineer to join our high-performing technology team in Bangalore. In this role, you'll be responsible for enhancing system reliability, managing cloud infrastructure, and streamlining deployment processes.

Responsibilities:
- Design and manage scalable and reliable cloud-based infrastructure
- Develop and maintain robust CI/CD pipelines
- Automate infrastructure provisioning and monitoring
- Implement SRE practices like SLOs, SLIs, and error budgeting
- Collaborate with development, QA, and security teams to ensure end-to-end system integrity
- Proactively monitor and respond to infrastructure incidents and issues
- Continuously optimize performance, cost, and security across cloud environments

Preferred Qualifications:
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Familiarity with observability tools: Prometheus, Grafana, ELK stack
- Strong understanding of networking, cloud security, and fault-tolerant architecture
- Relevant certifications (AWS/GCP/DevOps) are a plus

Education: Bachelor's or Master's in Computer Science, Engineering, or a related field.

Posted 1 month ago

Apply

4.0 - 6.0 years

10 - 20 Lacs

Pune

Work from Office

Role Overview: We are looking for experienced DevOps Engineers (4+ years) with a strong background in cloud infrastructure, automation, and CI/CD processes. The ideal candidate will have hands-on experience building, deploying, and maintaining cloud solutions using Infrastructure-as-Code (IaC) best practices. The role requires expertise in containerization, cloud security, networking, and monitoring tools to optimize and scale enterprise-level applications.

Key Responsibilities:
- Design, implement, and manage cloud infrastructure solutions on AWS, Azure, or GCP.
- Develop and maintain Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
- Implement and manage CI/CD pipelines using tools like GitHub Actions, Jenkins, GitLab CI/CD, Bitbucket Pipelines, or AWS CodePipeline.
- Manage and orchestrate containers using Kubernetes, OpenShift, AWS EKS, AWS ECS, and Docker.
- Work on cloud migrations, helping organizations transition from on-premises data centers to cloud-based infrastructure.
- Ensure system security and compliance with industry standards such as SOC 2, PCI, HIPAA, GDPR, and HITRUST.
- Set up and optimize monitoring, logging, and alerting using tools like Datadog, Dynatrace, AWS CloudWatch, Prometheus, ELK, or Splunk.
- Automate deployment, configuration, and management of cloud-native applications using Ansible, Chef, Puppet, or similar configuration management tools.
- Troubleshoot complex networking issues, Linux/Windows server issues, and cloud-related performance bottlenecks.
- Collaborate with development, security, and operations teams to streamline the DevSecOps process.

Must-Have Skills:
- 3+ years of experience in DevOps, cloud infrastructure, or platform engineering.
- Expertise in at least one major cloud provider: AWS, Azure, or GCP.
- Strong experience with Kubernetes, ECS, OpenShift, and container orchestration technologies.
- Hands-on experience with Infrastructure as Code (IaC) using Terraform, AWS CloudFormation, or similar tools.
- Proficiency in scripting/programming languages like Python, Bash, or PowerShell for automation.
- Strong knowledge of CI/CD tools such as Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
- Experience with Linux operating systems (RHEL, SUSE, Ubuntu, Amazon Linux) and Windows Server administration.
- Expertise in networking (VPCs, subnets, load balancing, security groups, firewalls).
- Experience with log management and monitoring tools like Datadog, CloudWatch, Prometheus, ELK, and Dynatrace.
- Strong communication skills for working with cross-functional teams and external customers.
- Knowledge of cloud security best practices, including IAM, WAF, GuardDuty, CVE scanning, and vulnerability management.

Good-to-Have Skills:
- Knowledge of cloud-native security solutions (AWS Security Hub, Azure Security Center, Google Security Command Center).
- Experience with compliance frameworks (SOC 2, PCI, HIPAA, GDPR, HITRUST).
- Exposure to Windows Server administration alongside Linux environments.
- Familiarity with centralized logging solutions (Splunk, Fluentd, AWS OpenSearch).
- GitOps experience with tools like Argo CD or Flux.
- Background in penetration testing, intrusion detection, and vulnerability scanning.
- Experience with cost optimization strategies for cloud infrastructure.
- Passion for mentoring teams and sharing DevOps best practices.

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Responsibilities:
- Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform.
- Design and implement scalable ETL pipelines with integrated validation and monitoring.
- Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs.
- Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load.
- Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools).
- Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins, GitHub Actions).
- Collaborate with developers and data engineers to enforce schema versioning, rollback strategies, and data contract enforcement.

Preferred Candidate Profile:
- 5+ years of experience in DataOps, DevOps, or data infrastructure roles.
- Proven experience with infrastructure as code (e.g., Terraform, CloudFormation).
- Proven experience with real-time data streaming platforms (e.g., Kinesis, Kafka).
- Proven experience building production-grade data pipelines and monitoring systems in AWS.
- Hands-on experience with tools like AWS Glue, S3, Lambda, Athena, and CloudWatch.
- Strong knowledge of Python and scripting for automation and orchestration.
- Familiarity with data validation frameworks such as Great Expectations, Deequ, or dbt tests.
- Experience with SQL-based data systems (e.g., PostgreSQL).
- Understanding of security, IAM, and compliance best practices in cloud data environments.
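To give a flavor of the "automated validation checks" this listing describes, a post-ingest check can be as simple as asserting required columns are present and non-null before loading downstream. A minimal dependency-free sketch (the column names are hypothetical; frameworks like Great Expectations formalize the same idea):

```python
def validate_batch(rows, required_cols=("id", "event_time")):
    """Post-ingest check: every row has the required columns, non-null.

    Returns (ok, errors). A real pipeline would route failures to an
    alerting channel (e.g. SNS) rather than just collecting them.
    """
    errors = []
    for i, row in enumerate(rows):
        for col in required_cols:
            if row.get(col) is None:
                errors.append(f"row {i}: missing or null '{col}'")
    return (not errors), errors

ok, errs = validate_batch([
    {"id": 1, "event_time": "2024-01-01T00:00:00Z"},
    {"id": None, "event_time": "2024-01-01T00:05:00Z"},
])
# ok is False; errs identifies the failing row and column.
```

The same check runs unchanged post-transform and pre-load, which is what makes "validation at key points in the data flow" cheap to enforce.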

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

As a Senior Software Engineer I, you will be a critical member of our technology team, responsible for designing, developing, and deploying scalable software solutions. You will leverage your expertise in Java, ReactJS, AWS, and emerging AI tools to deliver innovative products and services that enhance healthcare outcomes and streamline operations.

Primary Responsibilities:
- Design, develop, test, deploy, and maintain full-stack software solutions leveraging Java, ReactJS, and AWS cloud services
- Collaborate closely with cross-functional teams, including Product Managers, Designers, Data Scientists, and DevOps Engineers, to translate business requirements into technical solutions
- Implement responsive UI/UX designs using ReactJS, ensuring optimal performance and scalability
- Develop robust backend services and APIs using Java and related frameworks (e.g., Spring Boot)
- Leverage AWS cloud services (e.g., EC2, S3, Postgres/DynamoDB, ECS, EKS, CloudFormation) to build scalable, secure, and highly available solutions
- Incorporate AI/ML tools and APIs (such as OpenAI, Claude, Gemini, and Amazon AI services) into existing and new solutions to enhance product capabilities
- Conduct code reviews and adhere to software engineering best practices to ensure quality, security, maintainability, and performance
- Actively participate in agile methodologies, sprint planning, backlog grooming, retrospectives, and continuous improvement processes
- Troubleshoot, debug, and resolve complex technical issues and identify root causes to ensure system reliability and performance
- Document technical solutions, system designs, and code effectively for knowledge sharing and future reference
- Mentor junior team members, fostering technical growth and engineering excellence
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Bachelor's degree or higher in Computer Science, Software Engineering, or a related technical discipline
- 6+ years of hands-on software development experience across the full stack
- Solid experience developing front-end applications using ReactJS, TypeScript/JavaScript, HTML5, and CSS3
- Familiarity with AI/ML tools and APIs (such as OpenAI, Claude, Gemini, and AWS AI/ML services) and experience integrating them into software solutions
- Experience with relational and NoSQL databases, along with solid SQL skills
- Experience in agile development methodologies and CI/CD pipelines
- Experience with monitoring tools like Splunk, Datadog, and Dynatrace
- Solid analytical and problem-solving skills, with the ability to troubleshoot complex technical issues independently
- Solid proficiency in Java, J2EE, Spring/Spring Boot, and RESTful API design
- Demonstrable experience deploying and managing applications on AWS (e.g., EC2, S3, Postgres/DynamoDB, RDS, ECS, EKS, CloudFormation)
- Proven excellent written, verbal communication, and interpersonal skills

Preferred Qualifications:
- Experience in the healthcare domain and understanding of healthcare data and workflows
- Hands-on experience with containerization technologies (Docker, Kubernetes)
- Experience with performance optimization, monitoring, and logging tools
- Familiarity with DevOps practices, Infrastructure as Code, and tools like Jenkins, Terraform, Git, and GitHub Actions
- Exposure to modern architectural patterns such as microservices, serverless computing, and event-driven architecture

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Chennai

Remote

Location: 100% Remote
Employment Type: Full-Time (must have own laptop and internet connection)
Work Hours: 11 AM to 8 PM IST

Position Summary: We are looking for a highly skilled and self-driven Full Stack Developer with deep expertise in React.js, Node.js, and AWS cloud services. The ideal candidate will play a critical role in designing, developing, and deploying full-stack web applications in a secure and scalable cloud environment.

Key Responsibilities:
- Design and develop scalable front-end applications using React.js and modern JavaScript/TypeScript frameworks.
- Build and maintain robust backend services using Node.js, Express, and RESTful APIs.
- Architect and deploy full-stack solutions on AWS using services such as Lambda, API Gateway, ECS, RDS, S3, CloudFormation, CloudWatch, and DynamoDB.
- Ensure application performance, security, scalability, and maintainability.
- Work collaboratively in Agile/Scrum environments and participate in sprint planning, code reviews, and daily standups.
- Integrate CI/CD pipelines and automate testing and deployment workflows using AWS-native tools or services like Jenkins, CodeBuild, or GitHub Actions.
- Troubleshoot production issues, optimize system performance, and implement monitoring and alerting solutions.
- Maintain clean, well-documented, and reusable code and technical documentation.

Required Qualifications:
- 5+ years of professional experience as a full stack developer.
- Strong expertise in React.js (Hooks, Context, Redux, etc.).
- Advanced backend development experience with Node.js and related frameworks.
- Proven hands-on experience designing and deploying applications on AWS Cloud.
- Solid understanding of RESTful services, microservices architecture, and cloud-native design.
- Experience working with relational databases (PostgreSQL, MySQL) and DynamoDB.
- Proficiency in Git and modern DevOps practices (CI/CD, Infrastructure as Code, etc.).
- Strong communication skills and ability to collaborate in distributed teams.

Posted 1 month ago

Apply

5.0 - 9.0 years

12 - 17 Lacs

Pune, Chennai, Jaipur

Work from Office

We are hiring an experienced Python Developer for a contractual role with a leading global digital services client via Awign Expert. The role requires hands-on development experience with AWS, PySpark, Lambda, CloudWatch, SNS, SQS, and CloudFormation. The developer will work on real-time data integrations using various data formats, manage streaming data via Kinesis, and implement autoscaling strategies. The ideal candidate is a strong individual contributor, a collaborative team player, and possesses excellent problem-solving and communication skills. This is a high-impact opportunity for someone passionate about cloud-native Python applications and scalable architectures. Location: Bengaluru, Hyderabad, Chandigarh, Indore, Nagpur, Gurugram, Jaipur, Chennai, Pune, Mangalore
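Stream handling of the kind described (Kinesis records into a Python Lambda) can be sketched as follows. The standard Kinesis event shape delivers base64-encoded payloads; treating them as JSON is an illustrative assumption about the record format.

```python
import base64
import json

def handle_kinesis_batch(event):
    # Each record in a Kinesis-triggered Lambda event carries its
    # payload base64-encoded under record["kinesis"]["data"].
    out = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        out.append(json.loads(payload))  # assumes JSON-formatted values
    return out
```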

Posted 1 month ago

Apply

6.0 - 7.0 years

11 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Location: Remote / Pan India, Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Develop Kafka producers, consumers, and stream processing applications.
Implement Kafka Connect connectors and configure Kafka clusters.
Optimize Kafka performance and troubleshoot related issues.
Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB.
Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications:
Bachelor's degree in Computer Science or a related field.
Confluent Certified Developer for Apache Kafka certification.
Strong programming skills in Java/Python.
In-depth Kafka architecture and Confluent platform experience.
Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus.
Experience with data warehousing and data lake technologies.
Experience with CI/CD pipelines and DevOps practices.
Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
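As a sketch of the producer side of such a pipeline, here are reliability-oriented producer settings and a JSON value serializer (Python used for consistency with the other examples on this page, though the role is Java-focused). The config values are common starting points, not a prescription, and the topic name in the comment is hypothetical.

```python
import json

def build_producer_config(bootstrap_servers: str) -> dict:
    # Settings commonly chosen when delivery guarantees matter more
    # than raw latency; all keys are standard librdkafka properties.
    return {
        "bootstrap.servers": bootstrap_servers,
        "acks": "all",               # wait for all in-sync replicas
        "enable.idempotence": True,  # avoid duplicates on retry
        "linger.ms": 5,              # small batching window
    }

def serialize_event(event: dict) -> bytes:
    # Compact JSON encoding for the message value.
    return json.dumps(event, separators=(",", ":")).encode("utf-8")

# With confluent-kafka installed, usage would look like:
#   from confluent_kafka import Producer
#   p = Producer(build_producer_config("broker:9092"))
#   p.produce("orders", value=serialize_event({"id": 1}), key=b"1")
#   p.flush()
```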

Posted 1 month ago

Apply

5.0 - 7.0 years

14 - 16 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and PyTorch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats.
Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.

Posted 1 month ago

Apply

6.0 - 10.0 years

35 - 40 Lacs

Pune

Work from Office

Requirements:
Lead Scalable AI Platform Architecture for Healthcare Transformation. Join a pioneering SaaS leader using AI to revolutionize US healthcare data. As a hands-on Technical Architect, you will design, optimize, and roll out our core platforms while driving key development. The role demands deep expertise in Java, Python, Spring Boot, ETL (Airflow), and messaging (SQS/RabbitMQ). Mastery of AWS, microservices, and DevOps automation is essential. Own technical debt and cost optimization, and build reusable tools that elevate the entire engineering org. Architect the future while staying in the code.

Key Responsibilities:
1. Technical Leadership & Execution:
- Architect and build scalable, high-performance systems, leading critical design decisions and hands-on coding (Java full-stack).
- Own end-to-end delivery, from prototyping to production, ensuring robustness, security, and maintainability.
- Optimize engineering processes (CI/CD, testing, observability) to accelerate development without sacrificing quality.
2. Team Development & Mentorship:
- Coach and elevate senior engineers, fostering a culture of ownership, innovation, and operational rigor.
- Set technical standards through code reviews, design sessions, and best practices in distributed systems.
3. Strategic Impact:
- Align engineering with business goals, working closely with Product and AI teams to prioritize high-impact initiatives.
- Anticipate technical debt and scalability risks, proactively driving improvements.
- Evaluate emerging tech (tools, frameworks, AI/ML advancements) to maintain a competitive edge.

Desired Profile:
- Seeking high-caliber engineers from premier tech/product companies with strong technical foundations.
- 5-8 years architecting systems at scale in fast-paced tech environments.
- Deep expertise: Java/Python, Spring Boot, microservices, SQL/NoSQL (with performance mastery).
- Proven AWS proficiency: EC2/S3/Lambda/RDS + IaC (Terraform/CloudFormation).
- Systems thinker: high/low-level design for scalability, event-driven architecture (SQS/RabbitMQ).
- DevOps-native: CI/CD, Docker/K8s, infrastructure automation.

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad

Work from Office

Cloud Infra and DevOps Lead - J49135
Deep understanding of cloud platforms (AWS, Azure) and cloud-native services.
Expertise in CI/CD tools (Jenkins, GitLab CI, Azure DevOps, etc.).
Hands-on with Infrastructure as Code tools like Terraform; Bicep, CloudFormation, and ARM templates would be an added advantage.
Knowledge of Kubernetes, Docker, and container orchestration.
Strong understanding of networking, security, monitoring, and logging tools.
Familiarity with automation tools like Ansible, Chef, or Puppet.
Qualification - BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MCA
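For context on the IaC tooling these roles name, a minimal CloudFormation template, illustrative only: a single private, versioned S3 bucket with a parameterized name.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template, a versioned, private S3 bucket.
Parameters:
  BucketName:
    Type: String
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName
      VersioningConfiguration:
        Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
Outputs:
  BucketArn:
    Value: !GetAtt AppBucket.Arn
```

Deployed with `aws cloudformation deploy --template-file template.yaml --stack-name demo --parameter-overrides BucketName=<name>`, the same intent in Terraform would be an `aws_s3_bucket` resource plus versioning and public-access-block resources.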

Posted 1 month ago

Apply


10.0 - 20.0 years

35 - 50 Lacs

Thane, Pune, Mumbai (All Areas)

Work from Office

We’re hiring a DevOps Head (10+ yrs exp) for our client in Mumbai/Pune. Hybrid role. Must have AWS/Azure, CI/CD, Terraform, Kubernetes & leadership experience. Share CV + CTC/NP/Location details. Apply now if you’re ready to lead at scale!

Posted 1 month ago

Apply

4.0 - 6.0 years

7 - 9 Lacs

Pune

Work from Office

The role involves managing stakeholders and external interfaces and is responsible for the smooth operation of the company's IT infrastructure. The candidate must have a deep understanding of both development and operations processes, as well as a strong technical background.

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 15 Lacs

Hyderabad, Secunderabad

Work from Office

Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Knowledge of Terraform, CloudFormation, or other infrastructure automation tools. Experience with Docker, and basic knowledge of Kubernetes. Familiarity with monitoring/logging tools such as CloudWatch, Prometheus, Grafana, ELK.

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 5 Lacs

Bengaluru

Work from Office

Responsibilities:
Design and implement cloud-based infrastructure (AWS, Azure, or GCP).
Develop and maintain CI/CD pipelines to ensure smooth deployment and delivery processes.
Manage containerized environments (Docker, Kubernetes) and infrastructure-as-code (Terraform, Ansible).
Monitor system health, performance, and security; respond to incidents and implement fixes.
Collaborate with development, QA, and security teams to streamline workflows and enhance automation.
Lead DevOps best practices and mentor junior engineers.
Optimize costs, performance, and scalability of infrastructure.
Ensure compliance with security standards and best practices.

Requirements:
5+ years of experience in DevOps, SRE, or related roles.
Strong experience with cloud platforms (AWS, Azure, GCP).
Proficiency with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
Expertise in container orchestration (Kubernetes, Helm).
Solid experience with infrastructure-as-code (Terraform, CloudFormation, Ansible).
Good knowledge of monitoring/logging tools (Prometheus, Grafana, ELK, Datadog).
Strong scripting skills (Bash, Python, or Go).
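The automation and incident-response work in roles like this often comes down to small reliability utilities. As a sketch, a retry-with-exponential-backoff decorator in Python; the attempt count and delays are illustrative defaults, and the injectable sleep function makes it testable without waiting.

```python
import time

def retry(attempts=3, base_delay=0.1, backoff=2.0, sleep=time.sleep):
    # Retry a flaky call up to `attempts` times, doubling the wait
    # between tries; re-raise the last exception if all attempts fail.
    def wrap(fn):
        def inner(*args, **kwargs):
            delay = base_delay
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise
                    sleep(delay)
                    delay *= backoff
        return inner
    return wrap
```

In production the bare `except Exception` would usually be narrowed to the transient error types of the client library in use.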

Posted 1 month ago

Apply

3.0 - 5.0 years

1 - 3 Lacs

Chennai

Work from Office

AWS Infrastructure Management:
Design, implement, and maintain scalable, secure cloud infrastructure using AWS services (EC2, Lambda, S3, RDS, CloudFormation/Terraform, etc.).
Monitor and optimize cloud resource usage and costs.

CI/CD Pipeline Automation:
Set up and maintain robust CI/CD pipelines using tools such as GitHub Actions, GitLab CI, Jenkins, or AWS CodePipeline.
Ensure smooth deployment processes for staging and production environments.

Git Workflow Management:
Implement and enforce best practices for version control and branching strategies (Gitflow, trunk-based development, etc.).
Support development teams in resolving Git issues and improving workflows.

Twilio Integration & Support:
Manage and maintain Twilio-based communication systems (SMS, Voice, WhatsApp, Programmable Messaging).
Develop and deploy Twilio Functions and Studio Flows for customer engagement.
Monitor communication systems and troubleshoot delivery or quality issues.

Infrastructure as Code & Automation:
Use tools like Terraform, CloudFormation, or Pulumi for reproducible infrastructure.
Create scripts and automation tools to streamline routine DevOps tasks.

Monitoring, Logging & Security:
Implement and maintain monitoring/logging tools (CloudWatch, Datadog, ELK, etc.).
Ensure adherence to best practices around IAM, secrets management, and compliance.

Requirements:
3-5+ years of experience in DevOps or a similar role.
Expert-level experience with Amazon Web Services (AWS).
Strong command of Git and Git-based CI/CD practices.
Experience building and supporting solutions using Twilio APIs (SMS, Voice, Programmable Messaging, etc.).
Proficiency in scripting languages (Bash, Python, etc.).
Hands-on experience with containerization (Docker) and orchestration tools (ECS, EKS, Kubernetes).
Familiarity with Agile/Scrum workflows and collaborative development environments.

Preferred Qualifications:
AWS Certifications (e.g., Solutions Architect, DevOps Engineer).
Experience with serverless frameworks and event-driven architectures.
Previous work with other communication platforms (e.g., SendGrid, Nexmo) a plus.
Knowledge of RESTful API development and integration.
Experience working in high-availability, production-grade systems.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

What You'll Do:
We're hiring a Site Reliability Engineer to help build and maintain the backbone of Avalara's SaaS platforms. As part of our global Reliability Engineering team, you'll play a key role in ensuring the performance, availability, and observability of critical systems used by millions of users. This role combines hands-on infrastructure expertise with modern SRE practices and the opportunity to contribute to the evolution of AI-powered operations. You'll work closely with engineering and operations teams across regions to drive automation, improve incident response, and proactively detect issues using data and machine learning.

What Your Responsibilities Will Be:
Own the reliability and performance of production systems across multiple environments and multiple clouds (AWS, GCP, OCI).
Use AI/ML-driven tools and automation to improve observability and incident response.
Collaborate with development teams on CI/CD pipelines, infrastructure deployments, and secure practices.
Perform root cause analysis, drive postmortems, and reduce recurring incidents.
Contribute to compliance and security initiatives (SOX, SOC 2, ISO 27001, access and controls).
Participate in a global on-call rotation and knowledge-sharing culture.

What You'll Need to Be Successful:
5+ years in SRE, DevOps, or infrastructure engineering roles.
Expertise with AWS (GCP or OCI is a plus); AWS Certified Solutions Architect - Associate or equivalent.
Strong scripting/programming skills (Python, Go, Bash, or similar).
Experience with infrastructure as code (Terraform, CloudFormation, Pulumi).
Proficiency in Linux environments, containers (Docker/Kubernetes), and CI/CD workflows.
Strong written and verbal communication skills to support worldwide collaboration.
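Availability targets of the kind an SRE owns translate directly into an error budget. A small Python helper makes the arithmetic concrete; the 30-day window is a common but illustrative choice.

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    # Allowed downtime over the window for a given availability SLO;
    # e.g. a 99.9% SLO over 30 days permits about 43.2 minutes.
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)
```

Teams typically alert on budget burn rate rather than raw downtime, so a fast-burning incident pages sooner than a slow trickle of errors.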

Posted 1 month ago

Apply