
181 CloudWatch Jobs - Page 6

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

12 - 17 Lacs

Pune, Chennai, Jaipur

Work from Office

We are hiring an experienced Python Developer for a contractual role with a leading global digital services client via Awign Expert. The role requires hands-on development experience with AWS, PySpark, Lambda, CloudWatch, SNS, SQS, and CloudFormation. The developer will work on real-time data integrations using various data formats, manage streaming data via Kinesis, and implement autoscaling strategies. The ideal candidate is a strong individual contributor, a collaborative team player, and an excellent problem-solver and communicator. This is a high-impact opportunity for someone passionate about cloud-native Python applications and scalable architectures. Location: Bengaluru, Hyderabad, Chandigarh, Indore, Nagpur, Gurugram, Jaipur, Chennai, Pune, Mangalore
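The Kinesis work a role like this involves usually starts with batching records under a partition key, which controls shard routing and per-device ordering. As a hedged sketch (the stream name and `device_id` field are illustrative assumptions, not from the posting), the request body for a boto3 `put_records` call can be assembled locally:

```python
import json

def build_kinesis_records(events, partition_key_field="device_id"):
    """Build the Records list for a Kinesis PutRecords request.

    Each event is serialized to JSON bytes; records sharing a partition
    key land on the same shard, preserving per-device ordering.
    """
    return [
        {
            "Data": json.dumps(event).encode("utf-8"),
            "PartitionKey": str(event[partition_key_field]),
        }
        for event in events
    ]

records = build_kinesis_records([{"device_id": 7, "temp": 21.5}])
# With boto3 this would be sent as:
#   boto3.client("kinesis").put_records(StreamName="sensor-stream", Records=records)
```

The actual network call is left out so the sketch stays self-contained; only the payload shape is shown.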

Posted 1 month ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Pune

Hybrid

GenAI DevOps Engineer (AWS Bedrock) | Pune (Hinjewadi) | Immediate joiners preferred | Minimum 4 days WFO

Job Description: We are seeking a highly experienced GenAI DevOps Engineer to join our dynamic team in Pune. The ideal candidate will have a strong background in building, deploying, and optimizing Generative AI applications on AWS Bedrock, along with expertise in DevOps practices. You will be responsible for automating infrastructure, managing CI/CD pipelines, and ensuring high performance and reliability of AI models.

Key Responsibilities: Design, develop, and deploy Generative AI applications leveraging AWS Bedrock and SageMaker. Automate infrastructure provisioning and deployment processes. Build and maintain robust CI/CD pipelines using CodePipeline and CodeBuild. Monitor application and model performance using CloudWatch, Prometheus, and Grafana. Optimize AI models for performance, scalability, and cost-efficiency. Work with Retrieval-Augmented Generation (RAG) tools such as LangChain, Haystack, and LlamaIndex for building advanced AI solutions. Collaborate with data scientists and developers to streamline model deployment and monitoring.

Required Skills: Extensive hands-on experience with AWS Bedrock and SageMaker. Strong expertise in CI/CD tools: CodePipeline, CodeBuild. Proficiency with monitoring tools: CloudWatch, Prometheus, Grafana. Experience with RAG frameworks like LangChain, Haystack, and LlamaIndex. Solid understanding of DevOps best practices and automation. Ability to troubleshoot and optimize AI deployment pipelines. Excellent problem-solving and communication skills.

Preferred Skills: Knowledge of containerization (Docker, Kubernetes). Familiarity with scripting languages (Python, Bash). Experience with cloud security best practices. Understanding of machine learning lifecycle management.

Mandatory Skills: AWS Bedrock and SageMaker expertise. CI/CD pipeline automation. Monitoring and performance optimization. RAG-based application development.
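For orientation, here is a minimal sketch of the RAG request-assembly step such a role involves: stitching retrieved chunks into a Bedrock InvokeModel body in the Anthropic messages format. The prompt wording and the chunk content are illustrative assumptions, not part of the job description:

```python
import json

def build_rag_request(question, retrieved_chunks, max_tokens=512):
    """Assemble a retrieval-augmented prompt body in the Anthropic
    messages format accepted by Bedrock's InvokeModel API."""
    context = "\n\n".join(retrieved_chunks)
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user",
             "content": f"Use only this context to answer:\n{context}\n\nQuestion: {question}"}
        ],
    })

body = build_rag_request("What is the SLA?", ["Uptime SLA is 99.9%."])
# In production this body would be passed to the bedrock-runtime
# client's invoke_model call along with a modelId.
```

Frameworks like LangChain or LlamaIndex wrap exactly this step, plus the retrieval that produces `retrieved_chunks`.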

Posted 1 month ago

Apply

3.0 - 4.0 years

20 - 25 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

3-4 years of hands-on experience with AWS services, ideally SaaS in the cloud. Experience developing solutions with a coding/scripting language; Python experience is a must (e.g., Python, Node.js). Experience creating and configuring AWS resources such as API Gateway, CloudWatch, CloudFormation, EC2, Lambda, Amazon Connect, SNS, Athena, Glue, VPC, etc. Sourcing & screening US profiles. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote
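A minimal example of the Lambda plus API Gateway combination listed above: the proxy-integration contract expects the handler to return a dict with `statusCode`, `headers`, and a string `body`. The handler logic is a hypothetical sketch, not from the posting:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway passes query parameters under 'queryStringParameters'
    (None when absent) and expects a JSON-serializable response dict.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Simulate an API Gateway invocation locally:
resp = lambda_handler({"queryStringParameters": {"name": "dev"}}, None)
```

Because the event and response are plain dicts, handlers like this can be unit-tested without deploying anything.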

Posted 1 month ago

Apply

6.0 - 8.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Picture Yourself at Pega: As a Senior Cloud Security Operations Analyst, you will play a critical role in ensuring the confidentiality, integrity, and availability of Pega's commercial cloud infrastructure and assets. You will be key in the continuous monitoring and protection of all global cloud security operations at Pega, as well as an active participant in incident response efforts. As a key member of a team consisting of highly capable and talented problem-solving analysts and engineers, you'll help develop processes that drive proactive, automated detection and incident response tactics to support the quick resolution of cloud security events and incidents. You will accomplish this by collaborating with cross-functional teams including other security analysts, threat detection engineers, vulnerability analysts, security engineers, system administrators, and developers to proactively identify potential security risks and vulnerabilities within our cloud environment. You will leverage your strong analytical skills to assess and prioritize threats, applying your knowledge of industry best practices and cloud security frameworks. As a Senior Cloud Security Operations Analyst at Pega, you'll contribute to the success of our globally recognized brand. Your efforts will directly impact the security and trust our clients place in us, as we help them transform their business processes and drive meaningful digital experiences. So, picture yourself at Pega, where your expertise in cloud security is valued and your passion for protecting data is celebrated. Join us in shaping the future of secure cloud operations and make a lasting impact on the world of technology.
What You'll Do at Pega: Perform security monitoring of Pega Cloud commercial environments using multiple security tools/dashboards. Perform security investigations to identify indicators of compromise (IOCs) and better protect Pega Cloud and our clients from unauthorized or malicious activity. Actively contribute to incident response activities as we identify, contain, eradicate, and recover. Contribute to standard operating procedure (SOP) and policy development for CSOC detection and analysis tools and methodologies. Assist in enhancing security incident response plans, conducting thorough investigations, and recommending remediation measures to prevent future incidents. Perform threat hunts for adversarial activities within Pega Cloud to identify evidence of attacker presence that may not have been identified by existing detection mechanisms. Assist the threat detection team in developing high-confidence Splunk notables focused on use cases for known and emerging threats, based on hypotheses derived from the Pega threat landscape. Assist in the development of dashboards, reports, and other non-alert-based content to maintain and improve situational awareness of Pega Cloud's security posture. Assist in the development of playbooks for use by analysts to investigate both high-confidence and anomalous activity. Who You Are: You have an insatiable curiosity with an inborn tenacity for finding creative ways to deter, detect, deny, delay, and defend against bad actors of all shapes and sizes. You have been in the security trenches and you know what an efficient security operations center looks like. You have conducted in-depth analyses of various security events/alerts, contributed to incident response efforts, and developed new methods for detecting and mitigating badness wherever you see it.
You bring a wealth of cloud security experience to the table and are ready to harness that expertise to dive into cloud-centric, technical analysis and incident response to make Pega Cloud the most secure it can be. You have a history of success in the information security industry. Your list of accolades includes: SANS, Offensive Security, or other top-tier, industry-recognized technical security certifications focused on analysis, detection, and/or incident response. Industry recognition for identifying security gaps to secure applications or products. What You've Accomplished: 6+ years of industry-relevant experience, with a demonstrated working knowledge of cloud architecture, infrastructure, and resources, along with the associated services, threats, and mitigations. 4+ years in operational SIEM (Security Information and Event Management) roles, focusing on analysis, investigations, and incident response, with experience in Google Chronicle SIEM being an added advantage. 3+ years of operational cloud security experience, preferably AWS and/or GCP, including knowledge and analysis of various cloud logs such as CloudTrail, Cloud Audit, GuardDuty, Security Command Center, CloudWatch, Cloud Ops, Trusted Advisor, Recommender, VPC Flow, and WAF logs. 4+ years of operational experience with EDR/XDR platforms and related analysis and response techniques. Operational experience performing investigations and incident response within Linux and Windows hosts as well as AWS, GCP, and related Kubernetes environments (EKS/GKE). Solid working knowledge of the MITRE ATT&CK framework and the associated TTPs and how to map detections against it, particularly the cloud matrix portion. Familiarity with the OWASP Top 10 vulnerabilities and best practices for mitigating these security risks.
A solid foundational understanding of computer, OS (Linux/Windows), and network architecture concepts, and various related exploits/attacks. Experience developing standard operating procedures (SOPs), incident response plans, runbooks/playbooks for repeated actions, and security operations policies. Experience with Python, Linux shell/bash, and PowerShell scripting is a plus. Excellent verbal and written communication skills, including poise in high-pressure situations. A demonstrated ability to work in a team environment and foster a healthy, productive team culture. A Bachelor's degree in Cybersecurity, Computer Science, Data Science, or a related field.
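Much of the CloudTrail analysis described above amounts to scanning event records for indicators of compromise. A toy sketch follows; the action list, sample records, and field selection are simplified assumptions for illustration, not Pega's actual detection content:

```python
# A few API calls commonly associated with defense evasion or
# credential abuse (illustrative, not an exhaustive detection list).
SUSPICIOUS_ACTIONS = {"DeleteTrail", "StopLogging", "CreateAccessKey"}

def triage_cloudtrail(records):
    """Flag CloudTrail records that match basic IOC patterns:
    suspicious API names or AccessDenied errors."""
    findings = []
    for r in records:
        if r.get("eventName") in SUSPICIOUS_ACTIONS or r.get("errorCode") == "AccessDenied":
            findings.append({
                "eventName": r.get("eventName"),
                "user": r.get("userIdentity", {}).get("arn"),
                "sourceIP": r.get("sourceIPAddress"),
            })
    return findings

findings = triage_cloudtrail([
    {"eventName": "StopLogging",
     "userIdentity": {"arn": "arn:aws:iam::111111111111:user/example"},
     "sourceIPAddress": "203.0.113.9"},
    {"eventName": "DescribeInstances"},
])
```

Real pipelines run logic like this inside a SIEM (Splunk, Chronicle) rather than ad hoc, but the record shape is the same.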

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 13 Lacs

Chennai, Bengaluru

Work from Office

Job Description. Role: Snowflake DevOps Engineer. Visit our website bmwtechworks.in to know more. Follow us on LinkedIn | Instagram | Facebook | X for exciting updates. Location: Bangalore/Chennai. Experience: 2 to 8 years. Number of openings: 4.

What awaits you / Job Profile: Supporting Snowflake's BMW-side use-case customers. Consulting for use-case onboarding. Monitoring of data synchronization jobs (reconciliation between Cloud Data Hub and Snowflake). Cost monitoring reports for use cases. Further technical implementations such as M2M authentication for applications and data traffic over VPC. Integrate Snowflake into the use-case application process within Data Portal (automated use-case setup triggered by Data Portal). Technical documentation. Executing standard service requests (service user lifecycle, etc.). Compiling user and operational manuals. Organizing and documenting knowledge regarding incidents/customer cases in a knowledge base. Enhancing and editing process documentation. Ability and willingness to coach and train fellow colleagues and users when needed. Ability to resolve 2nd-level incidents within the Data Analytics Platform (could entail basic code changes). Close collaboration with 3rd-level support/development and SaaS vendor teams. Implementation of new development changes; assist and contribute to development needs.

What should you bring along: Strong understanding of and experience with Python and AWS IAM, S3, KMS, Glue, CloudWatch. GitHub. Understanding of APIs. Understanding of software development and a background in Business Intelligence. SQL (queries, DDL, materialized views, tasks, procedures, optimization). Any Data Portal or Cloud Data Hub experience. A technical background in operating and supporting IT platforms. IT Service Management (according to ITIL). 2nd-level support. Strong understanding of problem, incident, and change processes. High customer orientation. Working in a highly complex environment (many stakeholders, multi-platform/product environment, mission-critical use cases, high business exposure, complex ticket routing). Flexible communication on multiple support channels (ITSM, Teams, email). Precise and diligent execution of ops processes. Working on-call (standby). Mindset of continuous learning (highly complex software stack with changing features). Proactive communication. Must-have technical skills: Snowflake, Python, Lambda, IAM, S3, KMS, Glue, CloudWatch, Terraform, Scrum. Good-to-have technical skills: AWS VPC, Route53, EventBridge, SNS, CloudTrail, Confluence, Jira.
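The reconciliation duty mentioned above (Cloud Data Hub vs. Snowflake) boils down to comparing per-table counts pulled from both sides. A hedged pure-Python sketch with made-up table names and counts:

```python
def reconcile_counts(cdh_counts, snowflake_counts, tolerance=0):
    """Compare per-table row counts between the Cloud Data Hub and
    Snowflake; return tables whose counts diverge beyond tolerance.

    Tables missing on one side count as zero rows there, so they
    surface as mismatches too.
    """
    mismatches = {}
    for table in set(cdh_counts) | set(snowflake_counts):
        a, b = cdh_counts.get(table, 0), snowflake_counts.get(table, 0)
        if abs(a - b) > tolerance:
            mismatches[table] = {"cdh": a, "snowflake": b}
    return mismatches

diff = reconcile_counts({"orders": 100, "parts": 50},
                        {"orders": 100, "parts": 48})
```

In practice the two count dicts would come from SQL queries against each platform; only the comparison logic is shown here.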

Posted 1 month ago

Apply

7.0 - 10.0 years

45 - 50 Lacs

Pune

Work from Office

Requirements: Our client is seeking a highly skilled Technical Project Manager (TPM) with strong hands-on experience in full-stack development and cloud infrastructure to lead the successful planning, execution, and delivery of technical projects. The ideal candidate will have a strong background in React, Java, Spring Boot, Python, and AWS, and will work closely with cross-functional teams including developers, QA, DevOps, and product stakeholders. As a TPM, you will play a critical role in bridging technical and business objectives, ensuring timelines, quality, and scalability across complex software projects. Responsibilities: - Own and drive the end-to-end lifecycle of technical projects, from initiation to deployment and post-launch support. - Collaborate with development teams and stakeholders to define project scope, goals, deliverables, and timelines. - Act as a hands-on contributor when needed, with the ability to guide and review code and architecture decisions. - Coordinate cross-functional teams across front-end (React), back-end (Java/Spring Boot, Python), and AWS cloud infrastructure. - Manage risk, change, and issue resolution in a fast-paced agile environment. - Ensure projects follow best practices around version control, CI/CD, testing, deployment, and monitoring. - Deliver detailed status updates, sprint reports, and retrospectives to leadership and stakeholders. Required Qualifications: - IIT/NIT graduate with 5+ years of experience in software engineering, with at least 2 years in a technical project management role. - Hands-on expertise in: React; Java & Spring Boot; Python; AWS (EC2, S3, Lambda, CloudWatch, etc.). - Experience leading agile/Scrum teams with a strong understanding of software development lifecycles. - Excellent communication, organizational, and interpersonal skills. Desired Profile: - Experience designing and managing microservices architectures. - Familiarity with Kafka or other messaging systems.
- Knowledge of CI/CD pipelines, deployment strategies, and application monitoring tools (e.g., Prometheus, Grafana, CloudWatch). - Experience with containerization tools like Docker and orchestration platforms like Kubernetes.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Chennai

Work from Office

Roles & Responsibilities: • We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines. • Integration of data from multiple sources or vendors to provide holistic insights from data. • You are expected to build and manage data lake and data warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms. • Perform EDA (exploratory data analysis) required to troubleshoot data-related issues and assist in the resolution of data issues. • Should have experience in client interaction, oral and written. • Experience in mentoring juniors and providing required guidance to the team. Required Technical Skills: • Extensive experience in languages such as Python, PySpark, and SQL (basics and advanced). • Strong experience in data warehousing, ETL, data modelling, building ETL pipelines, and data architecture. • Must be proficient in Redshift, Azure Data Factory, Snowflake, etc. • Hands-on experience with cloud services like AWS S3, Glue, Lambda, CloudWatch, Athena, etc. • Knowledge of Dataiku and Big Data technologies, and basic knowledge of BI tools like Power BI and Tableau, will be a plus. • Sound knowledge of data management, data operations, data quality, and data governance. • Knowledge of SFDC and Waterfall/Agile methodology. • Strong knowledge of the Pharma domain / life sciences commercial data operations. Qualifications: • Bachelor's or Master's in Engineering/MCA or an equivalent degree. • 4-6 years of relevant industry experience as a Data Engineer. • Experience working with Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc. • High motivation, good work ethic, maturity, self-organization, and personal initiative. • Ability to work collaboratively and provide support to the team. • Excellent written and verbal communication skills. • Strong analytical and problem-solving skills.
Location: • Preferably Hyderabad/Chennai, India
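One concrete form of the "data quality mechanisms" this role mentions is a null-rate gate run on records before they are loaded into the warehouse. A minimal pure-Python sketch (the field names are invented for illustration):

```python
def null_rate_report(rows, required_fields):
    """Compute the fraction of records missing each required field.

    A pipeline can compare these rates against thresholds and fail
    the load (or quarantine the batch) when a rate is too high.
    """
    total = len(rows)
    report = {}
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) in (None, ""))
        report[field] = missing / total if total else 0.0
    return report

report = null_rate_report(
    [{"id": 1, "npi": "A12"}, {"id": 2, "npi": ""}],
    ["id", "npi"],
)
```

At scale the same check is typically expressed in PySpark or SQL, but the logic per field is identical.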

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

The Core AI, BI & Data Platforms team has been established to create, operate, and run the enterprise AI, BI, and data platforms that reduce time to market for reporting, analytics, and data science teams to run experiments, train models, and generate insights, as well as to evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR. At Thomson Reuters, we are recruiting a team of motivated cloud professionals to transform how we build, manage, and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems. About the Role: In this opportunity as the Software Engineer, you will: Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer. Innovate with new approaches to meet data management requirements. Make recommendations about platform adoption, including technology integrations, application servers, libraries, and AWS frameworks, documentation, and usability by stakeholders. Contribute to improving the customer experience. Participate in code reviews to maintain a high-quality codebase. Collaborate with cross-functional teams to define, design, and ship new features. Work closely with product owners, designers, and other developers to understand requirements and deliver solutions.
Effectively communicate and liaise across the data platform and management teams. Stay updated on emerging trends and technologies in cloud computing. About You: You're a fit for the role of Software Engineer if you meet all or most of these criteria: Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of relevant experience in implementation of data lakes and data management technologies for large-scale organizations. Experience in building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability. Proficiency in the Python programming language. Experience in AWS services and management, including serverless, container, queueing, and monitoring services like Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS. Good knowledge of consuming and building APIs. Business Intelligence tools like Power BI. Fluency in querying languages such as SQL. Solid understanding of software development practices such as version control via Git, CI/CD, and release management. Agile development cadence. Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.
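Among the services the posting lists, Step Functions orchestration is defined in the Amazon States Language (a JSON document). A minimal definition follows; the Lambda ARN is a placeholder and the two-state flow is only an illustration, not this team's actual pipeline:

```python
import json

def etl_state_machine(lambda_arn):
    """Build a minimal Amazon States Language definition: one Task
    state invoking a Lambda, chained into a terminal Succeed state."""
    return json.dumps({
        "StartAt": "Extract",
        "States": {
            "Extract": {"Type": "Task", "Resource": lambda_arn, "Next": "Done"},
            "Done": {"Type": "Succeed"},
        },
    })

# Placeholder ARN; a real one names region, account, and function.
defn = etl_state_machine("arn:aws:lambda:REGION:ACCOUNT:function:extract")
```

The resulting JSON string is what gets passed as the `definition` when creating a state machine, e.g. via boto3's Step Functions client.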

Posted 1 month ago

Apply

10.0 - 15.0 years

11 - 20 Lacs

Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations. Job Title: Senior AWS Engineer. Experience: 10 to 15 years.

Key Responsibilities: Design and implement AWS-based infrastructure solutions for scalability, performance, and security. Lead cloud architecture discussions, guiding development and operations teams in best practices. Automate infrastructure provisioning using tools like Terraform, CloudFormation, or AWS CDK. Implement and manage CI/CD pipelines (e.g., Jenkins, CodePipeline, GitHub Actions). Ensure cost optimization, monitoring, and governance for AWS accounts. Collaborate with security teams to enforce compliance and governance policies across cloud environments. Handle migration of on-premise workloads to the AWS cloud (rehost, replatform, refactor). Provide mentorship to junior engineers and participate in code reviews and design sessions. Maintain high availability, disaster recovery, and backup strategies. Stay updated with the latest AWS services and architecture trends.

Technical Skills: Strong hands-on experience with core AWS services: EC2, S3, RDS, Lambda, IAM, VPC, CloudWatch, CloudTrail, ECS/EKS, etc. Expert in Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK. Strong scripting and automation skills in Python, Bash, or Shell. Experience with containerization and orchestration tools (Docker, Kubernetes/EKS). Solid understanding of networking, load balancing, and security concepts in the cloud. Experience with monitoring/logging tools like CloudWatch, Prometheus, Grafana, or the ELK stack. Knowledge of DevOps and CI/CD tools (Jenkins, GitLab CI, AWS CodePipeline, etc.). Familiarity with Agile/Scrum methodologies.

Notice period: Only immediate and 15-day joiners. Location: Bangalore. Mode of Work: WFO (Work From Office).

Thanks & Regards, SWETHA, Black and White Business Solutions Pvt. Ltd., Bangalore, Karnataka, INDIA.
Contact Number: 8067432433 | rathy@blackwhite.in | www.blackwhite.in

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Remote

Hiring for a European cyber security company. Role & responsibilities: Act as the highest point of technical escalation within the team for cloud infrastructure, platform, and application-related issues. Troubleshoot and resolve complex incidents involving AWS services, containerized environments, application availability, and customer-raised tickets. Ensure that application uptime is 99.99% and customer tickets are promptly answered. Own major incident response: coordinate efforts across teams, lead root cause analysis (RCA), and deliver clear post-mortem reports. Monitor system health and security events using advanced tools. Collaborate with Care Efficiency Leads and Engineering teams to improve observability and automation. Maintain internal runbooks, playbooks, and documentation for recurring or high-risk scenarios. Implement proactive improvements to reduce MTTR (mean time to resolution) and eliminate recurring incidents. Support change management processes and participate in planned maintenance windows. Mentor other Care engineers, encouraging a culture of knowledge-sharing and continuous improvement. Participate in 24/7 coverage, including on-call shifts or rotations as needed. Preferred candidate profile: 5+ years of experience in NOC or SRE roles, with at least 2 years in a Tier 3 or senior escalation role. AWS Architecture Expertise: In-depth knowledge of AWS architecture principles, security best practices, and advanced services relevant to cloud infrastructure. Containerization Knowledge: Experience with Docker and container orchestration using Amazon ECS/EKS to maintain a microservices architecture. Advanced Monitoring Solutions: Hands-on experience with Dynatrace, Datadog, CloudWatch, or similar tools. Expertise in implementing and optimizing monitoring solutions for complex, distributed cybersecurity infrastructure.
Linux System Administration: Strong command of Linux environments with the ability to perform system hardening aligned with security best practices. Scripting & Automation: Proficiency in Bash, Python, or PowerShell scripting for automating routine tasks, log analysis, and creating incident response procedures. Development of complex automation workflows to reduce MTTR and eliminate manual processes. CI/CD Pipeline Expertise: Experience with continuous integration/continuous deployment pipelines for security products. Multi-Cloud Strategy: Experience with hybrid or multi-cloud environments to support diverse infrastructure needs. Performance Optimization: Advanced skills in tuning cloud resources for optimal performance while maintaining uptime requirements. Operational Skills: Technical Leadership: Ability to lead technical response teams during critical incidents affecting partners. Mentorship: Skills in mentoring L1 and L2 engineers on specific technologies and processes. Root Cause Analysis: Expert-level problem investigation and root cause analysis skills to prevent recurring issues in the environment. Process Improvement: Ability to identify inefficiencies in operational processes and implement improvements to enhance partner support. Cross-team Collaboration: Strong collaboration skills to work effectively with Engineering, Product, and Partner teams during complex incidents. Bonus Points For: Business Impact Awareness: Deep understanding of how technical issues impact customer business operations and end-users. Custom Partner Solutions: Experience with customized deployments for high-tier partners and the ability to troubleshoot complex partner-specific configurations.
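Since this role is measured partly on MTTR, a small sketch of how mean time to resolution can be computed from incident open/close timestamps (the timestamps below are invented):

```python
from datetime import datetime

def mttr_hours(incidents):
    """Mean time to resolution, in hours, over a list of
    (opened, resolved) ISO-8601 timestamp string pairs."""
    durations = [
        (datetime.fromisoformat(done) - datetime.fromisoformat(opened)).total_seconds() / 3600
        for opened, done in incidents
    ]
    return sum(durations) / len(durations)

avg = mttr_hours([
    ("2024-01-01T00:00:00", "2024-01-01T02:00:00"),  # 2 h
    ("2024-01-02T00:00:00", "2024-01-02T04:00:00"),  # 4 h
])
```

In practice the pairs would be pulled from the ticketing system's API; tracking the figure per week or per severity is what makes "reduce MTTR" measurable.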

Posted 1 month ago

Apply

7.0 - 9.0 years

10 - 12 Lacs

Noida, Chennai, Bengaluru

Work from Office

Notice Period: Immediate Joiners Preferred Primary Skills: AWS Cloud Architecture & Services Microservices Development & Architecture IoT Platforms and Protocols (MQTT, CoAP, etc.) Mobile Application Support (iOS, Android) Web Application Support (ReactJS, Angular, VueJS, etc.) RESTful API Design and Development Containerization (Docker, Kubernetes) CI/CD pipelines Monitoring & Logging (CloudWatch, ELK Stack, Grafana) Troubleshooting & Performance Tuning We are seeking an experienced AWS Microservices / IoT / Mobile & Web App Support Engineer to join our growing team in Noida. The ideal candidate will have a strong background in designing and supporting scalable cloud-native solutions and IoT systems, along with hands-on experience in providing ongoing support for web and mobile applications. Key Responsibilities: Design, develop, and support microservices-based architecture on AWS Provide end-to-end support and maintenance for IoT solutions Troubleshoot and optimize performance of Mobile & Web Applications Collaborate with DevOps to implement CI/CD pipelines Ensure high availability, scalability, and security of applications Manage containerized applications (Docker, Kubernetes) Proactively monitor system performance and implement improvements Work closely with cross-functional teams to resolve incidents and deploy enhancements Maintain detailed documentation for architectures and processes Preferred Qualifications: AWS Certification (Solutions Architect, Developer, or SysOps) is a plus Experience with AWS IoT Core, Greengrass Familiarity with Serverless Architectures (Lambda, API Gateway) Strong understanding of Cloud Security and IAM Policies Excellent communication and teamwork skills Please share your profile with the following details: Current CTC: Expected CTC: Notice Period: Total Experience: Relevant AWS / Microservices / IoT / Mobile & Web App Support Experience: Preferred Location:
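On the IoT side of this role, MQTT subscription filters use `+` to match exactly one topic level and `#` to match all remaining levels. A compact matcher, sketched as a self-contained illustration of the protocol's wildcard rules:

```python
def mqtt_match(pattern, topic):
    """Match an MQTT topic against a subscription filter.

    '+' matches exactly one level; '#' (only valid as the last
    segment) matches the rest of the topic, including zero levels.
    """
    p, t = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p):
        if seg == "#":
            return True
        if i >= len(t) or (seg != "+" and seg != t[i]):
            return False
    return len(p) == len(t)

mqtt_match("devices/+/telemetry", "devices/42/telemetry")  # matches
```

Brokers evaluate exactly this kind of matching to route published messages to subscribers.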

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 15 Lacs

Hyderabad, Secunderabad

Work from Office

Hands-on experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps). Knowledge of Terraform, CloudFormation, or other infrastructure automation tools. Experience with Docker, and basic knowledge of Kubernetes. Familiarity with monitoring/logging tools such as CloudWatch, Prometheus, Grafana, ELK.

Posted 1 month ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

We are looking for a skilled and proactive AWS Operational Support Analyst to join our cloud infrastructure team. The ideal candidate will be responsible for monitoring, maintaining, and improving the performance, security, and reliability of AWS-hosted environments. This role is essential in ensuring uninterrupted cloud operations and supporting DevOps, development, and business teams with cloud-related issues. Key Responsibilities Monitor AWS cloud infrastructure for performance, availability, and operational issues. Manage incident response, root cause analysis, and resolution of infrastructure-related issues. Execute daily operational tasks including backups, system patching, and performance tuning. Collaborate with DevOps and engineering teams to ensure smooth CI/CD operations. Maintain system documentation and support knowledge base. Automate routine tasks using shell scripts or AWS tools (e.g., Lambda, Systems Manager). Manage AWS services such as EC2, RDS, S3, CloudWatch, IAM, and VPC. Implement cloud cost-optimization practices and security compliance controls. Perform health checks, generate reports, and suggest performance improvements.
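As an illustration of the CloudWatch management this role covers, here is the parameter set for a CPU alarm as it would be passed to boto3's `put_metric_alarm`. The alarm-name scheme, threshold, and periods are assumptions, not requirements from the posting:

```python
def cpu_alarm_params(instance_id, threshold=80.0):
    """Parameters for a CloudWatch alarm that fires when an EC2
    instance's average CPU stays above the threshold for three
    consecutive 5-minute periods."""
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,
        "EvaluationPeriods": 3,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0abc123")
# boto3.client("cloudwatch").put_metric_alarm(**params) would create it.
```

Keeping the parameters in a function like this makes alarms reproducible across fleets instead of hand-configured in the console.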

Posted 1 month ago

Apply

5.0 - 8.0 years

12 - 22 Lacs

Bengaluru

Work from Office

Role & responsibilities - Manage and monitor AWS cloud infrastructure, including EC2, S3, VPC, RDS, Lambda, and more. - Implement and maintain Ubuntu Linux servers and applications. - Monitor system performance, conduct backups, and address potential issues. - Set up and maintain MySQL databases, optimizing performance and ensuring data integrity. - Collaborate with development teams to design, develop, and deploy secure cloud-based applications. - Implement and maintain cloud security best practices. - Provide technical support and guidance on cloud infrastructure and related technologies. - Stay updated on industry trends and best practices. Preferred candidate profile - Bachelor's degree in Computer Science, IT, or related field. - 5-8 years of overall experience, with a minimum of 3 years in AWS cloud services. - Strong Ubuntu Linux administration skills. - Familiarity with AWS services and cloud security best practices. - Strong problem-solving skills and the ability to work independently and in a team. - Excellent communication skills. - Basic understanding of MySQL database administration is a plus. - Relevant AWS certifications are a plus.

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

We are seeking a Senior DevOps Engineer to build pipeline automation, integrating DevSecOps principles and operations of product build and releases. Mentor and guide DevOps teams, fostering a culture of technical excellence and continuous learning. What You'll Do Design & Architecture: Architect and implement scalable, resilient, and secure Kubernetes-based solutions on Amazon EKS. Deployment & Management: Deploy and manage containerized applications, ensuring high availability, performance, and security. Infrastructure as Code (IaC): Develop and maintain Terraform scripts for provisioning cloud infrastructure and Kubernetes resources. CI/CD Pipelines: Design and optimize CI/CD pipelines using tools like Jenkins, GitHub Actions, GitLab CI/CD, or ArgoCD along with automated builds, tests (unit, regression), and deployments. Monitoring & Logging: Implement monitoring, logging, and alerting solutions using Prometheus, Grafana, ELK stack, or CloudWatch. Security & Compliance: Ensure security best practices in Kubernetes, including RBAC, IAM policies, network policies, and vulnerability scanning. Automation & Scripting: Automate operational tasks using Bash, Python, or Go for improved efficiency. Performance Optimization: Tune Kubernetes workloads and optimize cost/performance of Amazon EKS clusters. Test Automation & Regression Pipelines - Integrate automated regression testing and build sanity checks into pipelines to ensure high-quality releases. Security & Resource Optimization - Manage Kubernetes security (RBAC, network policies) and optimize resource usage with Horizontal Pod Autoscalers (HPA) and Vertical Pod Autoscalers (VPA) . Collaboration: Work closely with development, security, and infrastructure teams to enhance DevOps processes. Minimum Qualifications Bachelor's degree (or above) in Engineering/Computer Science. 8+ years of experience in DevOps, Cloud, and Infrastructure Automation in a DevOps engineer role. 
Expertise with Helm charts, Kubernetes Operators, and Service Mesh (Istio, Linkerd, etc.). Strong expertise in Amazon EKS and Kubernetes (design, deployment, and management). Expertise in Terraform, Jenkins, and Ansible. Expertise with CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD, etc.). Strong experience with monitoring and logging tools (Prometheus, Grafana, ELK, CloudWatch). Proficiency in Bash and Python for automation and scripting.

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 18 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are hiring an experienced Integration Engineer with deep expertise in Dell Boomi and proven skills in Python, AWS, and automation frameworks. This role focuses on building and maintaining robust integration pipelines between enterprise systems like Salesforce, Snowflake, and EDI platforms, enabling seamless data flow and test automation. Key Responsibilities: Design, develop, and maintain integration workflows using Dell Boomi. Build and enhance backend utilities and services using Python to support Boomi integrations. Integrate test frameworks with AWS services such as Lambda, API Gateway, CloudWatch, etc. Develop utilities for EDI document automation (e.g., generating and validating EDI 850 purchase orders). Perform data syncing and transformation between systems like Salesforce, Boomi, and Snowflake. Automate post-test data cleanup and validation within Salesforce using Boomi and Python. Implement infrastructure-as-code using Terraform to manage cloud resources. Create and execute API tests using Postman, and automate test cases using Cucumber and Gherkin. Integrate test results into Jira and X-Ray for traceability and reporting. Must-Have Qualifications: 5 to 7 years of professional experience in software or integration development. Strong hands-on experience with Dell Boomi (Atoms, Integration Processes, Connectors, APIs). Solid programming experience with Python. Experience working with AWS services: Lambda, API Gateway, CloudWatch, S3, etc. Working knowledge of Terraform for cloud infrastructure automation. Familiarity with SQL and modern data platforms (e.g., Snowflake). Experience working with Salesforce and writing SOQL queries. Understanding of EDI document standards and related integration use cases. Test automation experience using Cucumber, Gherkin, Postman. Integration of QA/test reports with Jira, X-Ray, or similar platforms. Familiarity with CI/CD tools like GitHub Actions, Jenkins, or similar. 
Tools & Technologies: Integration: Dell Boomi, REST/SOAP APIs Languages: Python, SQL Cloud: AWS (Lambda, API Gateway, CloudWatch, S3) Infrastructure: Terraform Data Platforms: Snowflake, Salesforce Automation & Testing: Cucumber, Gherkin, Postman DevOps: Git, GitHub Actions Tracking/Reporting: Jira, X-Ray Location-Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 1 month ago

Apply

6.0 - 9.0 years

6 - 16 Lacs

Bengaluru

Hybrid

Role & responsibilities: Should have good experience in AWS administration and AWS services such as CloudWatch, EC2, and S3. WFO - Immediate. Experience: 6 to 9 years. Location: Bangalore

Posted 1 month ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Hyderabad

Work from Office

We are seeking a motivated and skilled Cloud Engineer to join our growing technology team in Noida. In this role, you will be responsible for building, deploying, and managing our cloud infrastructure and applications on platforms such as AWS, Azure, or GCP. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud environment. This is an excellent opportunity to contribute to the implementation and optimization of our cloud strategy, learn new technologies, and play a key role in our digital transformation journey. Responsibilities : - Build and maintain cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation, Azure Resource Manager (ARM) templates, or Google Cloud Deployment Manager. - Deploy and manage applications and services on cloud platforms, ensuring high availability and performance. - Implement and maintain monitoring, logging, and alerting systems for cloud resources and applications. - Automate routine tasks and processes related to cloud infrastructure and application deployments. - Implement and enforce security best practices in the cloud environment, working closely with the security team. - Troubleshoot and resolve issues related to cloud infrastructure and application performance. - Collaborate with development teams to design and implement cloud-native solutions. - Optimize cloud costs by identifying and implementing efficient resource utilization strategies. - Participate in the planning and execution of cloud migration projects. - Contribute to the development and maintenance of documentation for cloud infrastructure and processes. - Stay up-to-date with the latest cloud technologies and best practices. - Participate in disaster recovery and business continuity planning for cloud environments. - Assist in the implementation and adherence to cloud governance policies. 
Required Skills & Qualifications : - Cloud Platforms: Hands-on experience (3-7 years) with at least one major cloud platform (AWS, Azure, or GCP). - Infrastructure as Code (IaC): Experience with IaC tools such as Terraform, AWS CloudFormation, ARM templates, or Google Cloud Deployment Manager. - Scripting: Proficiency in at least one scripting language such as Python, Bash, or PowerShell for automation. - Operating Systems: Strong understanding of Linux and/or Windows Server operating systems. - Networking Fundamentals: Solid understanding of networking concepts, including TCP/IP, DNS, VPNs, and firewalls. - Monitoring & Logging: Experience with cloud monitoring and logging tools (e.g., CloudWatch, Azure Monitor, Google Cloud Monitoring). - Security Basics: Understanding of cloud security principles and best practices. - Version Control (Git): Familiarity with Git for version control. - Problem-Solving: Strong analytical and problem-solving skills. - Communication: Good verbal and written communication skills.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Hyderabad, Bengaluru

Hybrid

1. 5+ years of work experience in Python and the AWS stack. 2. Good hands-on experience with AWS technologies (S3, DynamoDB, Lambda, API Gateway, AppSync, Route 53, CloudFormation, EventBridge, IAM, Glue, Athena). 3. Experience with Python 3.9 or newer versions. 4. Experience with DevOps concepts, tools, and continuous delivery pipelines - Bamboo, Bitbucket, JIRA, Git, etc. 5. Experience with product health monitoring in test and production environments using Honeycomb/Splunk and CloudWatch. 6. Experience with API design standards, patterns, and frameworks (e.g., REST, GraphQL, OpenAPI, OAuth). 7. Operates in a highly collaborative team emphasizing best practices in cloud development, testing, DevSecOps, and SRE. 8. Builds cloud-first, consumer-focused applications applying lean agile methodologies; good experience in unit testing (pytest). 9. Hands-on experience with Python libraries and frameworks such as Flask, SQLAlchemy, and AWS boto3. 10. Experience in coding, testing, debugging, implementing, and documenting applications. 11. Good database skills (RDBMS, SQL queries, joins). 12. Strong understanding of front-end web technologies (JavaScript, HTML, CSS, etc.). 13. Experience with REST and UI-to-backend integration. 14. Good verbal and written communication skills. 15. Able to work within the Agile methodology. 16. Investment Management domain knowledge is an added advantage.

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

We are looking for an experienced Staff BT Site Reliability Engineer to join our Business Technology team to build, improve, and maintain our cloud platform services. The Site Reliability Engineering team builds foundational back-end infrastructure services and tooling for Okta's corporate teams. We enable teams to build infrastructure at scale and automate their software reliably and predictably. SREs are team players and innovators who build and operate technology using best practices and an agile mindset. We are looking for a smart, innovative, and passionate engineer for this role, someone who is interested in designing and implementing complex cloud-based infrastructure. This is a lean and agile team, and the ideal candidate welcomes the challenge of building in a dynamic and ever-changing environment. They enjoy seeing their designs run at scale with automation, testing, and an excellent operational mindset. If you exemplify the ethos of "If you have to do something more than once, automate it," we want to hear from you!
Responsibilities Build and run development tools, pipelines, and infrastructure with a security-first mindset Actively participate in Agile ceremonies, write stories, and support team members through demos, knowledge sharing, and architecture sessions Promote and apply best practices for building secure, scalable, and reliable cloud infrastructure Develop and maintain technical documentation, network diagrams, runbooks, and procedures Designing, building, running, and monitoring Okta's IT infrastructure and cloud services Driving initiatives to evolve our current cloud platforms to increase efficiency and keep them in line with current security standards and best practices Recommend, develop, implement, and manage appropriate policy, standards, processes, and procedural updates Working with software engineers to ensure that development follows established processes and works as intended Create and maintain centralized technical processes, including container and image management Provide excellent customer service to our internal users and be an advocate for SRE services and DevOps practices Qualifications 10+ years of experience as an SRE, DevOps, Systems Engineer, or equivalent Demonstrated ability to develop complex applications for cloud infrastructure at scale and deliver projects on schedule and within budget Proficient in managing AWS multi-account environments and AWS authentication, governance, and the org management suite, including, but not limited to, AWS Orgs, AWS IAM, AWS Identity Center, and StackSets Proficient with automating systems and infrastructure via Terraform Proficient in developing applications running on AWS or other cloud infrastructure resources, including compute, storage, networking, and virtualization Proficient with Git and building deployment pipelines using commercial tools, especially GitHub Actions Proficient with developing tooling and automation using Python Proficient with AWS container-based workloads and concepts, especially 
EKS, ECS, and ECR. Experience with monitoring tools, especially Splunk, CloudWatch, and Grafana Experience with reliability engineering concepts and security best practices on public cloud platforms Experience with image creation and management, especially for container and EC2-based workloads Experience with GitHub Actions Runner Controller self-hosted runners Knowledgeable in Linux system administration Knowledgeable of configuration management tools, such as Ansible and SSM Good communication skills, with the ability to influence others and communicate complex technical concepts to different audiences

Posted 1 month ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 11 The Role: Lead Software Engineering The Team: Our team is responsible for the architecture, design, development, and maintenance of technology solutions to support the Sustainability business unit within Market Intelligence and other divisions. Our program is built on a foundation of inclusivity, enablement, adaptability, and respect, which fosters an environment of open communication and trust. We take pride in each team member's accountability and responsibility to move us forward in our strategic initiatives. Our work is collaborative; we work transparently with others within our business unit and across the entire organization. The Impact: As a Lead, Cloud Engineering at S&P Global, you will be instrumental in streamlining the software development and deployment of our applications to meet the needs of our business. Your work ensures seamless integration and continuous delivery, enhancing the platform's operational capabilities to support our business units. You will collaborate with software engineers and data architects to automate processes, improve system reliability, and implement monitoring solutions. Your contributions will be vital in maintaining high availability, security, and performance standards, ultimately leading to the delivery of impactful, data-driven solutions. What's in it for you: Career Development: Build a meaningful career with a leading global company at the forefront of technology. Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions. Skill Enhancement: Enhance your software development skills on an enterprise-level platform. Versatile Experience: Gain full-stack experience and exposure to cloud technologies. Leadership Opportunities: Mentor peers and influence the product's future as part of a skilled team. 
Key Responsibilities: Design and develop scalable cloud applications using various cloud services. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud security best practices and ensure compliance with industry standards. Monitor and optimize application performance and reliability in the cloud environment. Troubleshoot and resolve issues related to our applications and services. Stay updated with the latest cloud technologies and trends. Manage our cloud instances and their lifecycle to guarantee a high degree of reliability, security, scalability, and confidence at any given time. Design and implement CI/CD pipelines to automate software delivery and infrastructure changes. Collaborate with development and operations teams to improve collaboration and productivity. Manage and optimize cloud infrastructure and services. Implement configuration management tools and practices. Ensure security best practices are followed in the deployment process. What We're Looking For: Bachelor's degree in Computer Science or a related field. Minimum of 10+ years of experience in a cloud engineering or related role. Proven experience in cloud development and deployment. Proven experience in agile and project management. Expertise with cloud services (AWS, Azure, Google Cloud). Experience in EMR, EKS, Glue, Terraform, and cloud security. Proficiency in programming languages such as Python, Java, Scala, and Spark. Strong implementation experience with AWS services (e.g., EC2, ECS, ELB, RDS, EFS, EBS, VPC, IAM, CloudFront, CloudWatch, Lambda, S3). Proficiency in scripting languages such as Bash, Python, or PowerShell. Experience with CI/CD tools like Azure CI/CD. Experience in SQL and MS SQL Server. Knowledge of containerization technologies like Docker and Kubernetes. Nice to have - Knowledge of GitHub Actions, Redshift, and machine learning frameworks. Excellent problem-solving and communication skills. 
Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines. Demonstrate strong communication and documentation skills for both technical and non-technical audiences.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Excellent knowledge of C# with multi-threading and async programming. Good command of OOP concepts, SOLID principles, and design patterns. Experience in REST APIs/Microservices using ASP.Net WebAPI (ASP.Net MVC experience alone is not sufficient). Securing web applications using authentication tokens, certificates, OAuth, etc. Caching and distributed caching. Pub/Sub, queues/topics, message brokers; experience with at least one queuing system. SQL Server knowledge: joins, stored procedures, functions, writing complex queries. SSIS/SSRS. Experience with Lambda, CloudWatch, API Gateway, S3, EC2, SNS, SQS, ELB, Docker/Kubernetes, Kafka MQ, IAM, authorization and access control, SaaS, etc.

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

We are seeking a Senior Python Developer with strong experience in AWS, Terraform, and automation frameworks. The ideal candidate will be responsible for building and integrating tools, utilities, and test automation processes across cloud and enterprise systems, including Salesforce, Dell Boomi, and Snowflake. Key Responsibilities: Design, develop, and maintain Python-based tools and services for automation and integration. Develop and manage infrastructure using Terraform and deploy resources on AWS. Automate internal backend processes including EDI document generation and Salesforce data cleanup. Integrate test automation frameworks with AWS services like Lambda, API Gateway, CloudWatch, and more. Implement and maintain automated test cases using Cucumber, Gherkin, and Postman. Collaborate with QA and DevOps teams to improve testing coverage and CI/CD automation. Work with tools such as Jira, X-Ray, and GitHub Actions for test tracking and version control. Develop utilities for integrations between Salesforce, Boomi, AWS, and Snowflake. Must-Have Qualifications: 5 to 7 years of hands-on experience in software development or test automation. Strong programming skills in Python. Solid experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.). Proficiency with Terraform for managing infrastructure as code. Experience with REST API development and integration. Experience with Dell Boomi, Salesforce and SOQL. Knowledge of SQL (preferably with platforms like Snowflake). Knowledge of EDI formats and automation. Nice-to-Have Skills: Experience in BDD tools like Cucumber, Gherkin. Test management/reporting with X-Ray, integration with Jira. Exposure to version control and CI/CD workflows (e.g., GitHub, GitHub Actions). Tools & Technologies: Languages: Python, SQL Cloud: AWS (Lambda, API Gateway, CloudWatch, etc.) 
IaC: Terraform Automation/Testing: Cucumber, Gherkin, Postman Data & Integration: Snowflake, Salesforce, Dell Boomi DevOps: Git, GitHub Actions Tracking: Jira, X-Ray Location-Delhi NCR,Bangalore,Chennai,Pune,Kolkata,Ahmedabad,Mumbai,Hyderabad

Posted 1 month ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are seeking a highly skilled Python Developer with strong AWS & Terraform experience. The ideal candidate must possess strong Python development capabilities, robust hands-on experience with AWS, and a working knowledge of Terraform. This role also requires foundational SQL skills and the ability to integrate and automate various backend and cloud services. Requirements and Qualifications: 5-7+ years of overall experience in software development Strong proficiency in Python development Extensive experience working with AWS services (Lambda, API Gateway, CloudWatch, etc.) Hands-on experience with Terraform Basic understanding of SQL Experience with REST APIs, Salesforce SOQL Familiarity with tools and platforms such as Git, GitHub Actions, Dell Boomi, Snowflake Knowledge of QA Automation using Python, Cucumber, Gherkin, Postman Roles and Responsibilities: Integrate automation frameworks with AWS, X-Ray, and Boomi services Develop backend automation scripts for Boomi processes Build utility tools for Salesforce data cleanup and EDI document generation Create and manage automated triggers in the test framework using AWS services (Lambda, API Gateway, etc.) Develop utilities for internal EDI processes integrating third-party applications (Salesforce, Dell Boomi, AWS, Snowflake, X-Ray) Integrate utilities into the Cucumber Automation Framework Connect the automation framework with Jira and X-Ray for test reporting Automate test cases for various EDI processes Collaborate on development and integration using Python, REST APIs, AWS, and other modern tools Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 2 months ago

Apply