0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
- Develop and maintain CI/CD pipelines using GitHub Actions to streamline the software development lifecycle.
- Design, deploy, and manage AWS infrastructure, ensuring high availability and security.
- Implement and manage Helm charts for Kubernetes to automate the deployment of applications.
- Utilize YAML configuration files for defining and managing infrastructure and application settings.
- Apply SRE principles to enhance system reliability, performance, and capacity through automation and monitoring.
- Collaborate with development teams to integrate reliability and scalability into the software development process.
- Monitor application and infrastructure performance, troubleshoot issues, and implement solutions to improve system reliability.
- Implement infrastructure as code (IaC) using tools like Terraform for efficient resource management.
Required Skills and Qualifications
- Proven experience in Site Reliability Engineering (SRE) practices.
- Strong expertise in GitHub Actions and Terraform for CI/CD pipeline development.
- Strong knowledge of YAML, its code structures, and parameterization for configuration management.
- Working experience with AWS services, including EC2, S3, Lambda, RDS, and VPC.
- Deep understanding of authentication, security, scalability, and parallelization of GitHub Actions jobs across the CI/CD process.
- Working experience with Helm charts for Kubernetes deployment and management.
- Proficiency in scripting and automation using languages such as Python or PowerShell.
- Understanding of containerization technologies like Docker and orchestration with Kubernetes.
- Excellent problem-solving skills and the ability to work collaboratively in a fast-paced environment.
- Strong communication and collaboration skills.
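As an illustration of the error-budget arithmetic behind the SRE principles this posting lists — a minimal sketch with assumed figures, not part of the role description:

```python
# Minimal sketch of SRE error-budget arithmetic: an availability SLO
# implies a fixed amount of tolerable downtime per period.
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Minutes of downtime allowed per period for an SLO like 0.999."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo)
```

For a 99.9% SLO over 30 days this yields about 43.2 minutes of allowed downtime, which is the kind of budget that the monitoring and automation work above is measured against.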
Posted 5 days ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About NCR VOYIX
NCR VOYIX Corporation (NYSE: VYX) is a leading global provider of digital commerce solutions for the retail, restaurant and banking industries. NCR VOYIX is headquartered in Atlanta, Georgia, with approximately 16,000 employees in 35 countries across the globe. For nearly 140 years, we have been the global leader in consumer transaction technologies, turning everyday consumer interactions into meaningful moments. Today, NCR VOYIX transforms the stores, restaurants and digital banking experiences with cloud-based, platform-led SaaS and services capabilities. Not only are we the leader in the market segments we serve and the technology we deliver, but we create exceptional consumer experiences in partnership with the world’s leading retailers, restaurants and financial institutions. We leverage our expertise, R&D capabilities and unique platform to help navigate, simplify and run our customers’ technology systems. Our customers are at the center of everything we do. Our mission is to enable stores, restaurants and financial institutions to exceed their goals – from customer satisfaction to revenue growth, to operational excellence, to reduced costs and profit growth. Our solutions empower our customers to succeed in today’s competitive landscape. Our unique perspective brings innovative, industry-leading tech to all the moving parts of business across industries. NCR VOYIX has earned the trust of businesses large and small — from the best-known brands around the world to your local favorite around the corner.
Key Responsibilities
- Develop and deploy automated processes for the issuance, renewal, revocation, and monitoring of digital certificates across various platforms.
- Collaborate with cross-functional teams to integrate certificate management solutions into existing infrastructure, including cloud, on-premises, and hybrid environments.
- Implement and maintain automation scripts and tools using platforms such as HashiCorp Vault, Venafi, or similar certificate management systems.
- Monitor and manage the health of digital certificates to prevent expirations and ensure compliance with security policies.
- Manage and mentor a team of engineers responsible for certificate management, providing technical guidance and professional development opportunities.
- Troubleshoot and resolve issues related to certificate management, including SSL/TLS configurations, certificate chains, and trust stores.
- Create and maintain comprehensive documentation for automated certificate management processes, configurations, and best practices.
- Stay updated with the latest trends in PKI, cryptography, and security automation to continuously improve the organization’s certificate management strategy.
- Work closely with security and compliance teams to ensure that all certificate management practices meet regulatory and internal security requirements.
- Lead incident response efforts related to certificate management issues, ensuring minimal disruption to services.
- Lead the design, implementation, and maintenance of automated certificate management solutions to support the organization’s security infrastructure.
- Oversee the lifecycle management of digital certificates, ensuring timely renewal.
- Perform complex troubleshooting, root cause analysis, performance tuning, diagnostics, and maintenance of IT security equipment.
- Ensure adherence to process, following the SLAs and procedures already defined for security device management.
- Maintain procedures and knowledge-base articles for known incident resolution and known-error handling.
- Hands-on experience and ability to perform root cause analysis, problem management, and capacity management.
- As an active member of the team, monitor and process responses for security events on a 24x7 basis, supporting 24/7 operations.
Skills And Qualifications
- Minimum of 8 years of experience in certificate management, PKI, or related fields.
- Proven experience in automating certificate management processes using tools like HashiCorp Vault, Venafi, or similar.
- Strong understanding of cryptographic protocols (SSL/TLS), certificate authorities, and digital signatures.
- Experience with scripting and automation using languages such as Python, PowerShell, or Bash.
- Familiarity with DevOps practices and automation tools such as Ansible, Terraform, and Jenkins.
- Relevant certifications (e.g., CISSP, CEH, or cloud certifications for Azure, GCP, or AWS) are a plus.
- Experience leading and mentoring technical teams, with a demonstrated ability to manage multiple projects simultaneously.
- Ability to assimilate, understand, and utilize various security technologies.
- Strong attention to detail.
- Ability to deal with ambiguity and translate high-level objectives into detailed tasks.
- Ability to prioritize work with multiple, simultaneous work assignments.
- Ability and willingness to learn new tools and processes.
- Experience documenting business processes or technical procedures preferred.
- Excellent communication and interpersonal skills, with the ability to articulate complex concepts to non-technical stakeholders.
- Experience with managing certificate authorities and HSMs (Hardware Security Modules).
Offers of employment are conditional upon passage of screening criteria applicable to the job.
EEO Statement
Integrated into our shared values is NCR Voyix’s commitment to diversity and equal employment opportunity. All qualified applicants will receive consideration for employment without regard to sex, age, race, color, creed, religion, national origin, disability, sexual orientation, gender identity, veteran status, military service, genetic information, or any other characteristic or conduct protected by law. NCR Voyix is committed to being a globally inclusive company where all people are treated fairly, recognized for their individuality, promoted based on performance and encouraged to strive to reach their full potential.
We believe in understanding and respecting differences among all people. Every individual at NCR Voyix has an ongoing responsibility to respect and support a globally diverse environment.
Statement to Third Party Agencies
To ALL recruitment agencies: NCR Voyix only accepts resumes from agencies on the preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Voyix employees, or any NCR Voyix facility. NCR Voyix is not responsible for any fees or charges associated with unsolicited resumes.
“When applying for a job, please make sure to only open emails that you will receive during your application process that come from a @ncrvoyix.com email domain.”
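The expiry monitoring described in this posting can be sketched as follows — a hedged illustration only: the notAfter format matches OpenSSL's text output, and the 30-day renewal threshold is an assumed policy, not an NCR VOYIX specific.

```python
# Hedged sketch of a certificate-expiry check of the kind the automation
# above would run; stdlib only, no specific certificate platform assumed.
from datetime import datetime, timezone

NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"  # e.g. "Jun  1 12:00:00 2026 GMT"

def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter timestamp (UTC)."""
    expires = datetime.strptime(not_after, NOT_AFTER_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def needs_renewal(not_after, threshold_days=30, now=None):
    """True when the certificate is inside the renewal window."""
    return days_until_expiry(not_after, now) <= threshold_days
```

In practice a fleet-wide job would feed every tracked certificate's notAfter value through a check like this and alert on anything inside the window.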
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a talented Backend Developer to join our dynamic team at Max Healthcare in India and help build scalable, high-performance web applications.
Requirements:
- 5-7 years of software engineering experience.
- Strong problem-solving skills and the ability to work in a dynamic, fast-paced environment.
- Full-stack development experience, with a focus on backend technologies (80/20 split). We mostly write in Node.js but are flexible in our approach.
- In-depth knowledge of AWS. Exposure to Terraform is a plus.
- Excellent communication and teamwork skills.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: DevOps Engineer
Location: Pune
Duration: Contract to Hire
Job Description:
• Design, implement, and manage cloud infrastructure using Terraform on Azure (and AWS where required).
• Automate and orchestrate Azure infrastructure components with a focus on scalability, security, and cost optimization.
• Leverage Azure Data Services such as Data Factory, Synapse, and Databricks for cloud data platform tasks.
• Optimize and manage database workloads with SQL/PLSQL and query optimization techniques.
• Implement and maintain CI/CD pipelines using tools such as Azure DevOps and GitHub Actions.
• Manage and support multi-cloud environments, ensuring seamless operations and integration.
• Troubleshoot infrastructure and application issues across cloud platforms with effective scripting and automation.
• Drive adoption of IaC practices and contribute to continuous improvement of DevOps workflows.
Posted 5 days ago
10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Manager, AI Platform We are seeking a results-driven AI/ML Platform Manager with a strong background in cloud technologies (AWS, Azure) to lead the strategic development and delivery of enterprise-grade AI/ML platforms. This role is pivotal in enabling scalable, secure, and resilient business applications, integrating cloud-based systems, and driving digital transformation initiatives. This role will lead teams to achieve performance objectives and provide deep insights into best practices for solving complex problems.
Key Responsibilities Include
- Lead a platform org of cloud administrators, support engineers, GenAI Ops engineers, and GenAI architects.
- Collaborate with business unit heads, PMOs, and product managers to translate requirements into reliable platform capabilities.
- Lead engagement delivery and manage client relationships on a daily basis.
- Standardize platform services across cloud and on-prem environments, ensuring alignment with enterprise architecture.
- Be accountable for program/project management and engagement economics.
- Implement cost optimization and performance tuning for cloud workloads.
- Lead cross-functional teams in developing APIs, integrations, and microservices that support data flow across systems.
- Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
- Implement observability tools (e.g., Fiddler, Datadog, Prometheus, Splunk) across enterprise workloads.
- Enforce zero-trust principles, encryption standards, and cloud security baselines.
- Strong knowledge of microservices deployment architecture, with Kubernetes (K8s) experience.
Required Qualifications
- Bachelor's/Master’s in Computer Science, Engineering, or a related discipline.
- 10+ years of experience with enterprise platform tools, including 4 years of strong AI/ML platform experience and 6+ years of cloud experience.
- Proven experience in managing infrastructure and workloads on AWS, Azure, or GCP.
- Strong communication and stakeholder management skills, with the ability to collaborate effectively across diverse teams and functions.
- Strong understanding of business, budgeting, vendor management, financial management, and team management.
Requisition ID: 610749
As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life.
Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Position: Sr Data Operations
Years of Experience: 6-8 years
Job Location: S.B. Road, Pune; other locations remote
The Position
We are seeking a seasoned engineer with a passion for changing the way millions of people save energy. You’ll work within the Deliver and Operate team to build and improve our platforms to deliver flexible and creative solutions to our utility partners and end users and help us achieve our ambitious goals for our business and the planet. We are seeking a highly skilled and detail-oriented Software Engineer II for the Data Operations team to maintain our data infrastructure, pipelines, and workflows. You will play a key role in ensuring the smooth ingestion, transformation, validation, and delivery of data across systems. This role is ideal for someone with a strong understanding of data engineering and operational best practices who thrives in high-availability environments.
Responsibilities & Skills
You should:
- Monitor and maintain data pipelines and ETL processes to ensure reliability and performance.
- Automate routine data operations tasks and optimize workflows for scalability and efficiency.
- Troubleshoot and resolve data-related issues, ensuring data quality and integrity.
- Collaborate with data engineering, analytics, and DevOps teams to support data infrastructure.
- Implement monitoring, alerting, and logging systems for data pipelines.
- Maintain and improve data governance, access controls, and compliance with data policies.
- Support deployment and configuration of data tools, services, and platforms.
- Participate in on-call rotation and incident response related to data system outages or failures.
Required Skills:
- 5+ years of experience in data operations, data engineering, or a related role.
- Strong SQL skills and experience with relational databases (e.g., PostgreSQL, MySQL).
- Proficiency with data pipeline tools (e.g., Apache Airflow).
- Experience with cloud platforms (AWS, GCP) and cloud-based data services (e.g., Redshift, BigQuery).
- Familiarity with scripting languages such as Python, Bash, or Shell.
- Knowledge of version control (e.g., Git) and CI/CD workflows.
Qualifications
- Bachelor's degree in Computer Science, Engineering, Data Science, or a related field.
- Experience with data observability tools (e.g., Splunk, DataDog).
- Background in DevOps or SRE with a focus on data systems.
- Exposure to infrastructure-as-code (e.g., Terraform, CloudFormation).
- Knowledge of streaming data platforms (e.g., Kafka, Spark Streaming).
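The data-quality work described above can be sketched as a simple row-level gate — an illustration only, not tied to any tool in the posting, and the field names ("id", "kwh") are hypothetical:

```python
# Illustrative row-level data-quality gate of the kind a pipeline monitor
# might run before loading; field names are hypothetical examples.
def validate_rows(rows, required=("id", "kwh"), non_negative=("kwh",)):
    """Split rows into (valid, issues) after schema and range checks."""
    valid, issues = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required if f not in row]
        if missing:
            issues.append((i, "missing fields: %s" % missing))
            continue
        negative = [f for f in non_negative if row[f] < 0]
        if negative:
            issues.append((i, "negative values: %s" % negative))
            continue
        valid.append(row)
    return valid, issues
```

A production version would emit the issues list to the alerting systems mentioned in the responsibilities rather than returning it.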
Posted 5 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: GCP Compute PaaS Engineer
Job Location: Chennai, Hyderabad, Bangalore, Pune, Gurgaon
Experience: 8+ years
Key skills: SRE, GCP DevOps, Terraform, Kubernetes, Istio, Python
Requirements:
General
- SRE / service experience covering key components of the GCP platform
- Good understanding of hub-and-spoke topology
- Knowledge of security practices for cloud-native applications, including identity and access management, and network security
- Proficient in using monitoring tools to ensure system reliability
- Ability to automate repetitive tasks using scripting languages like Python, Bash, or PowerShell
- Track record of implementing toil-reduction solutions
- Experience of working in an Agile framework
Kubernetes specifics
- Deep understanding of Kubernetes architecture, deployment, and management
- Istio knowledge, e.g. configuration and interaction with networking solutions
- Hands-on knowledge of other AKS add-ons, e.g. RBAC Manager, Kured, cert-manager
- Hands-on experience of using IaC to deploy/maintain AKS resources
Good to have skills:
- Any Kubernetes certificate (KCSA, CKA, CKAD, CKS)
- GCP Professional certificate
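A small example of the toil-reducing scripting this role asks for — a hedged sketch that summarises pod health from `kubectl get pods -o json` output (the `items[].status.phase` path is the standard Kubernetes API shape; the cluster data below is invented):

```python
# Summarise pod health from `kubectl get pods -o json` output.
import json
from collections import Counter

def pod_phase_summary(kubectl_json):
    """Count pods by phase (Running, Pending, Failed, ...)."""
    pods = json.loads(kubectl_json)["items"]
    return Counter(pod["status"]["phase"] for pod in pods)
```

Piping live `kubectl` output into a check like this, then alerting on non-Running counts, is a typical first step in replacing manual cluster eyeballing.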
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Specialty Development Senior 34263
Location: Chennai
Employment Type: Full-Time (Hybrid)
Job Overview
We are looking for an experienced GCP Data Engineer to join a global data engineering team responsible for building a sophisticated data warehouse and analytics platform on Google Cloud Platform (GCP). This role is ideal for professionals with a strong background in data engineering, cloud migration, and large-scale data transformation, particularly within cloud-native environments.
Key Responsibilities
- Design, build, and optimize data pipelines on GCP to support large-scale data transformations and analytics.
- Lead the migration and modernization of legacy systems to cloud-based architecture.
- Collaborate with cross-functional global teams to support data-driven applications and enterprise analytics solutions.
- Work with large datasets to enable platform capabilities and business insights using GCP tools.
- Ensure data quality, integrity, and performance across the end-to-end data lifecycle.
- Apply agile development principles to rapidly deliver and iterate on data solutions.
- Promote engineering best practices in CI/CD, DevSecOps, and cloud deployment strategies.
Must-Have Skills
- GCP services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow
- Programming & data engineering: 5+ years in data engineering and SQL development; experience in building data warehouses and ETL processes
- Cloud experience: minimum 3 years in cloud environments (preferably GCP), implementing production-scale data solutions
- Strong understanding of data processing architectures (batch/real-time) and tools such as Terraform, Cloud Build, and Airflow
- Experience with containerized microservices architecture
- Excellent problem-solving skills and ability to optimize complex data pipelines
- Strong interpersonal and communication skills with the ability to work effectively in a globally distributed team
- Proven ability to work independently in high-ambiguity scenarios and drive solutions proactively
Preferred Skills
- GCP certification (e.g., Professional Data Engineer)
- Experience in regulated or financial domains
- Migration experience from Teradata to GCP
- Programming experience with Python, Java, Apache Beam
- Familiarity with data governance, security, and compliance in cloud environments
- Experience coaching and mentoring junior data engineers
- Knowledge of software architecture, CI/CD, source control (Git), and secure coding standards
- Exposure to Java full-stack development (Spring Boot, Microservices, React)
- Agile development experience including pair programming, TDD, and DevSecOps
- Proficiency in test automation tools like Selenium, Cucumber, REST Assured
- Familiarity with other cloud platforms like AWS or Azure is a plus
Education
Bachelor’s Degree in Computer Science, Information Technology, or a related field (mandatory)
Skills: Python, GCP certification, microservices architecture, Terraform, Airflow, data processing architectures, test automation tools, SQL development, cloud environments, agile development, CI/CD, GCP services (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow), Apache Beam, Git, communication, problem-solving, data engineering, analytics, data, data governance, ETL processes, GCP, Cloud Build, Java
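The batch/real-time processing architectures this posting names can be illustrated with the fixed-window (tumbling) aggregation idea behind tools like Dataflow/Apache Beam — a pure-Python sketch where epoch-second timestamps and the 60-second window are assumptions for illustration:

```python
# Pure-Python sketch of fixed-window (tumbling) aggregation, the core
# idea behind windowed stream processing; not any specific GCP API.
from collections import defaultdict

def fixed_window_sums(events, window_secs=60):
    """events: iterable of (epoch_seconds, value) -> {window_start: sum}."""
    sums = defaultdict(float)
    for ts, value in events:
        sums[ts - (ts % window_secs)] += value
    return dict(sums)
```

Real engines add watermarks and late-data handling on top of this grouping, but the bucketing arithmetic is the same.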
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AWS Architect
Experience: 8+ years
Notice period: Immediate only
Location: PAN India
What we are looking for are candidates with certifications such as AWS Solutions Architect – Associate, AWS Solutions Architect – Professional, or SysOps Administrator – Associate.
- Design and manage centralized networking using a hub-and-spoke model with AWS Transit Gateway.
- Knowledge of firewall functionality and troubleshooting is preferred.
- Hands-on experience in troubleshooting cloud instance connectivity or launch issues using AWS CloudTrail is preferred.
- Strong problem-solving skills, clear articulation when communicating issues within the team, and effective collaboration with others are essential.
- Be available during critical business periods or holidays when necessary.
- Work with AWS Support to resolve complex or escalated issues.
- Review and remediate organization-level Trusted Advisor findings in coordination with application owners.
- Adopt Infrastructure as Code (IaC) using CloudFormation or Terraform with version control integration.
- Implement hybrid DNS and networking strategies using Amazon Route 53.
- Provide root cause analysis (RCA) for unavailability of cloud-managed servers or unexpected shutdowns.
- Apply security patches and OS updates on EC2 instances using AWS Systems Manager (SSM).
- Monitor and address findings from Amazon GuardDuty and AWS Security Hub.
- Test backup and recovery scenarios for EC2 and RDS by performing restore operations.
- Manage deployments across multiple AWS regions and accounts.
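One pre-flight check implied by the hub-and-spoke Transit Gateway design above: spoke VPC CIDRs must not overlap each other or the hub, or routing breaks. A stdlib-only sketch (the example ranges in the test are hypothetical, not an actual network plan):

```python
# Detect overlapping CIDR blocks among planned VPC ranges.
from ipaddress import ip_network
from itertools import combinations

def overlapping_cidrs(cidrs):
    """Return pairs of CIDR strings that overlap."""
    nets = [ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]
```

Running this against the planned address ranges before creating Transit Gateway attachments catches a class of mistake that is painful to fix after workloads land.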
Posted 5 days ago
3.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job ID: Pyt-ETP-Pun-1075
Location: Pune
Company Overview
Bridgenext is a global consulting company that provides technology-empowered business solutions for world-class organizations. Our global workforce of over 800 consultants provides best-in-class services to our clients to realize their digital transformation journey. Our clients span the emerging, mid-market and enterprise space. With multiple offices worldwide, we are uniquely positioned to deliver digital solutions to our clients leveraging Microsoft, Java and open source with a focus on Mobility, Cloud, Data Engineering and Intelligent Automation. Emtec’s singular mission is to create “Clients for Life” – long-term relationships that deliver rapid, meaningful, and lasting business value. At Bridgenext, we have a unique blend of corporate and entrepreneurial cultures. This is where you would have an opportunity to drive business value for clients while you innovate and continue to grow and have fun while doing it. You would work with team members who are vibrant, smart and passionate, and they bring their passion to all that they do – whether it’s learning, giving back to our communities or always going the extra mile for our clients.
Position Description
We are looking for members with hands-on data engineering experience who will work on internal and customer-based projects for Bridgenext. We are looking for someone who cares about the quality of code, who is passionate about providing the best solution to meet client needs, and who anticipates their future needs based on an understanding of the market — someone who has worked on Hadoop projects, including processing and data representation using various AWS services.
Must Have Skills
- 3-4 years of overall experience
- Strong programming experience with Python
- Experience with unit testing, debugging, and performance tuning
- Experience with Docker, Kubernetes, and cloud platforms (AWS preferred)
- Experience with CI/CD pipelines and DevOps best practices
- Familiarity with workflow management tools like Airflow; experience with DBT is a plus
- Good to have: experience with infrastructure-as-code technologies such as Terraform and Ansible
- Good to have: experience in Snowflake modelling – roles, schemas, databases
Professional Skills
- Solid written, verbal, and presentation communication skills
- Strong team and individual player
- Maintains composure during all types of situations and is collaborative by nature
- High standards of professionalism, consistently producing high-quality results
- Self-sufficient and independent, requiring very little supervision or intervention
- Demonstrates flexibility and openness to bring creative solutions to address issues
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Job Description
#hiring #SeniorBackendDeveloper
Min Experience: 10+ years
Location: Remote
We are seeking a highly experienced Technical Lead with over 10 years of experience, including at least 2 years in a leadership role, to guide and mentor a dynamic engineering team. This role is critical to designing, developing, and optimizing high-performance, scalable, and reliable backend systems. The ideal candidate will have deep expertise in Python (Flask), AWS (Lambda, Redshift, Glue, S3), microservices, and database optimization (SQL, RDBMS). We operate in a high-performance environment, comparable to leading product companies, where uptime, defect reduction, and data clarity are paramount. As a Technical Lead, you will ensure engineering excellence, maintain high-quality standards, and drive innovation in software architecture and development.
Key Responsibilities:
- Own backend architecture and lead the development of scalable, efficient web applications and microservices.
- Ensure production-grade AWS deployment and maintenance with high availability, cost optimization, and security best practices.
- Design and optimize databases (RDBMS, SQL) for performance, scalability, and reliability.
- Lead API and microservices development, ensuring seamless integration, scalability, and maintainability.
- Implement high-performance solutions, emphasizing low latency, uptime, and data accuracy.
- Mentor and guide developers, fostering a culture of collaboration, disciplined coding, and technical excellence.
- Conduct technical reviews, enforce best coding practices, and ensure adherence to security and compliance standards.
- Drive automation and CI/CD pipelines to enhance deployment efficiency and reduce operational overhead.
- Communicate technical concepts effectively to technical and non-technical stakeholders.
- Provide accurate work estimations and align development efforts with broader business objectives.
Key Skills:
- Programming: strong expertise in Python (Flask) and Celery.
- AWS: core experience with Lambda, Redshift, Glue, S3, and production-level deployment strategies.
- Microservices & API development: deep understanding of architecture, service discovery, API gateway design, observability, and distributed systems best practices.
- Database optimization: expertise in SQL, PostgreSQL, Amazon Aurora (RDS), and performance tuning.
- CI/CD & infrastructure: experience with GitHub Actions, GitLab CI/CD, Docker, Kubernetes, Terraform, and CloudFormation.
- Monitoring & logging: familiarity with AWS CloudWatch, the ELK Stack, and Prometheus.
- Security & compliance: knowledge of backend security best practices and performance optimization.
- Collaboration & communication: ability to articulate complex technical concepts to international stakeholders and work seamlessly in Agile/Scrum environments.
📩 Apply now or refer someone great. Please share your updated resume to hr.team@kpitechservices.com
#PythonJob #jobs #BackendDeveloper
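The low-latency emphasis in this role usually comes down to percentile bookkeeping (for example a p95 target). A minimal nearest-rank percentile sketch, not tied to any monitoring tool in the posting:

```python
# Nearest-rank percentile over a batch of latency samples (ms).
import math

def percentile(samples, pct):
    """Return the nearest-rank pct-th percentile of samples."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]
```

Tracking p95/p99 rather than the mean is what makes tail-latency regressions visible, which is the usual rationale behind "low latency" requirements like the one above.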
Posted 5 days ago
10.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What Gramener offers you
Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, a clear career path, and steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use.
Cloud Lead – Analytics & Data Products
We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.
Roles and Responsibilities
- Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
- Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
- Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
- Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
- Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
- Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
- Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.
Skills and Qualifications:
- 10-14 years of experience in cloud engineering, DevOps, or cloud architecture roles.
- Hands-on expertise with the AWS ecosystem and tools listed above.
- Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
- Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
- Familiarity with data engineering and GenAI workflows is a plus.
- AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
Posted 5 days ago
7.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions.
Software Engineer - MLOps
We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure.
Key Responsibilities Include
- Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment.
- Support the automation of model lifecycle tasks including versioning, packaging, and monitoring.
- Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage).
- Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators.
- Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation.
- Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins.
- Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations).
Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks.
Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices.

Required Qualifications
Bachelor’s/Master’s in Computer Science, Engineering, or a related discipline.
7 years in DevOps, with 2+ years in MLOps.
Good understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git.
Proficient in Python and familiar with Bash scripting.
Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI.

Requisition ID: 610751

As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen.

So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
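The model-lifecycle duties described above (versioning, packaging, promotion to production) can be sketched with a toy in-memory registry. The class, stage names, and artifact paths below are illustrative only, not the API of any specific MLOps platform:

```python
class ModelRegistry:
    """Toy in-memory model registry: auto-incrementing versions per model name."""

    def __init__(self):
        self._models = {}  # name -> list of {"version", "stage", "artifact"}

    def register(self, name: str, artifact: str) -> int:
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version, "stage": "staging", "artifact": artifact})
        return version

    def promote(self, name: str, version: int) -> None:
        # Only one version of a model may be in production at a time.
        for entry in self._models[name]:
            if entry["stage"] == "production":
                entry["stage"] = "archived"
        self._models[name][version - 1]["stage"] = "production"

    def production_version(self, name: str):
        for entry in self._models[name]:
            if entry["stage"] == "production":
                return entry["version"]
        return None

registry = ModelRegistry()
registry.register("churn-classifier", "s3://models/churn/v1.tar.gz")
registry.register("churn-classifier", "s3://models/churn/v2.tar.gz")
registry.promote("churn-classifier", 2)
```

Real registries such as MLflow or SageMaker Model Registry add artifact storage, lineage, and access control on top of this basic version/stage bookkeeping.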
Posted 5 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
Build scalable and automated MLOps pipelines for model training, validation, and deployment using AWS SageMaker and related services.
Implement and manage Infrastructure as Code (IaC) using Terraform for AWS provisioning and maintenance.
Collaborate with ML, Data Science, and DevOps teams to ensure reliable and efficient model deployment workflows.
Optimize data storage and retrieval strategies for both structured and unstructured large-scale datasets.
Integrate and transform data from multiple sources into data lakes and data warehouses.
Monitor, troubleshoot, and improve performance of cloud-native data systems in a fast-paced production setup.
Ensure compliance with data governance, privacy, and security standards across all data operations.
Document data engineering workflows and architectural decisions for transparency and maintainability.

Requirements
5+ years of experience as a Data Engineer or in a similar role.
Proven experience in building data pipelines and streaming applications using Apache Kafka and Apache Flink.
Strong ETL development skills, with a deep understanding of data modeling and data architecture in large-scale environments.
Hands-on experience with AWS services, including SageMaker, S3, Glue, Lambda, and CloudFormation or Terraform.
Proficiency in Python and SQL; knowledge of Java is a plus, especially for streaming use cases.
Strong grasp of MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines.
Deep knowledge of IaC tools, particularly Terraform, for automating cloud infrastructure.
Excellent analytical and problem-solving abilities, especially with regard to data processing and deployment issues.
Agile mindset with experience working in fast-paced, iterative development environments.
Strong communication and team collaboration skills.
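As a plain-Python illustration of the stream-processing concepts this role calls for, here is a sketch of a Flink-style tumbling (fixed, non-overlapping) window aggregation. The event tuples and window size are hypothetical:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count events per fixed (tumbling) window, keyed by window start time.

    events: iterable of (timestamp_seconds, payload) tuples.
    """
    counts = defaultdict(int)
    for timestamp, _payload in events:
        # Each event falls into exactly one window: [start, start + window_seconds).
        window_start = timestamp - (timestamp % window_seconds)
        counts[window_start] += 1
    return dict(counts)

events = [(0, "a"), (5, "b"), (12, "c"), (19, "d"), (25, "e")]
print(tumbling_window_counts(events, 10))  # windows starting at 0, 10, 20
```

A real Flink job would express the same idea with event-time windows and watermarks over an unbounded Kafka stream; this sketch only shows the bucketing logic.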
Posted 5 days ago
3.0 - 6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Sia is a next-generation, global management consulting group. Founded in 1999, we were born digital. Today our strategy and management capabilities are augmented by data science, enhanced by creativity, and driven by responsibility. We’re optimists for change and we help clients initiate, navigate, and benefit from transformation. We believe optimism is a force multiplier, helping clients to mitigate downside and maximize opportunity. With expertise across a broad range of sectors and services, our consultants serve clients worldwide. Our expertise delivers results. Our optimism transforms outcomes.

Heka.ai is the independent brand of Sia Partners dedicated to AI solutions. We host many AI-powered SaaS solutions that can be combined with consulting services or used independently, to provide our customers with solutions at scale.

Job Description
We are looking for a skilled Senior Software Engineer to play a key role in our front-end development using ReactJS. This role involves enhancing user interface components and implementing well-conceived designs into our AI-powered SaaS solutions. You will collaborate with backend teams and designers to ensure seamless application performance and a high-quality user experience.

Key Responsibilities
Front-End Development: Develop and optimize sophisticated user interfaces using ReactJS. Ensure technical feasibility of UI/UX designs.
Performance Optimization: Enhance application performance on the client side by implementing state management solutions and optimizing component rendering.
Cross-Browser Compatibility: Ensure that applications perform consistently across different browsers and platforms.
Collaboration: Work closely with backend developers and web designers to meet technical and consumer needs.
Code Integrity: Maintain and improve code quality through writing unit tests, automation, and performing code reviews.
Infrastructure as Code (IaC): Utilize Terraform and Helm to manage cloud infrastructure, ensuring scalable and efficient deployment environments.
Cloud Deployment & CI Management: Work with GCP / AWS / Azure for deploying and managing applications in the cloud. Oversee continuous software integration processes, including test writing and artifact building.

Qualifications
Education: Bachelor’s/Master’s degree in Computer Science, Software Engineering, or a related field.
Experience: 3-6 years of experience in frontend development, with significant expertise in ReactJS.
Skills:
Expertise in ReactJS, NextJS, and Node.js.
Experience with REST and GraphQL APIs.
Proficient in JavaScript, TypeScript, and HTML/CSS.
Familiar with Git, CI/CD, and Figma.
Strong knowledge of micro-frontends, accessibility standards, and APM tools.
Familiar with newer specifications of ECMAScript.
Knowledge of isomorphic React is a plus.
Infrastructure as Code (IaC) skills with Terraform and Helm for efficient cloud infrastructure management.
Hands-on experience in deploying and managing applications using GCP, AWS, or Azure.
Ability to understand business requirements and translate them into technical requirements.

Additional Information
What We Offer:
Opportunity to lead cutting-edge AI projects in a global consulting environment.
Leadership development programs and training sessions at our global centers.
A dynamic and collaborative team environment with diverse projects.

Sia is an equal opportunity employer. All aspects of employment, including hiring, promotion, remuneration, or discipline, are based solely on performance, competence, conduct, or business needs.
Posted 5 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Join us as a Cloud & DevOps Engineer at Dedalus, one of the global leaders in healthcare technology – working from our Noida office in India to shape the future of digital health infrastructure.

What you’ll achieve
As a Cloud & DevOps Engineer, you will play a key role in building and maintaining a scalable, secure, and resilient platform to support continuous integration, delivery, and operations of modern healthcare applications. Your work will directly contribute to enabling development teams to deliver better, faster, and safer solutions for patients and providers around the world.

You will:
Design and maintain tooling for deployment, monitoring, and operations of containerized applications across hybrid cloud and on-premises infrastructure.
Implement and manage Kubernetes-based workloads, ensuring high availability, scalability, and security.
Develop new platform features using Go or Java, and maintain existing toolchains.
Automate infrastructure provisioning using IaC tools such as Terraform, Helm, or Ansible.
Collaborate with cross-functional teams to enhance platform usability and troubleshoot issues.
Participate in incident response and on-call rotation to ensure uptime and system resilience.
Create and maintain architecture and process documentation for shared team knowledge.

Take the next step towards your dream career
At DH Healthcare, your work will empower clinicians and health professionals to deliver better care through reliable and modern technology. Join us and help shape the healthcare landscape by enabling the infrastructure that powers mission-critical healthcare systems.
Here’s what you’ll need to succeed:

Essential Requirements
5+ years of experience in DevOps, Cloud Engineering, or Platform Development roles.
Strong background in software engineering and/or system integrations.
Proficiency in Go, Java, or similar languages.
Hands-on experience with containerization and orchestration (Docker, Kubernetes).
Experience with CI/CD pipelines and DevOps methodologies.
Practical knowledge of IaC tools like Terraform, Helm, and Ansible.
Exposure to Linux, Windows, and cloud-native environments.
Strong written and verbal communication skills in English.
Bachelor’s degree in Computer Science, Information Systems, or equivalent.

Desirable Requirements
Experience supporting large-scale or enterprise healthcare applications.
Familiarity with Agile/Scrum practices and DevSecOps tools.
Exposure to hybrid infrastructure and cloud operations.
Enthusiasm for automation, security, and performance optimization.
Passion for continuous improvement and collaboration.

We are DH Healthcare – come join us
At DH Healthcare, we are committed to transforming care delivery through smart, scalable, and resilient platforms. We value innovation, collaboration, and a deep sense of purpose in everything we do. You will join a global team dedicated to improving patient outcomes and supporting health professionals with technology that truly matters. With a team of 7,600+ professionals across 40+ countries, we believe that every role – including yours – helps deliver better healthcare to millions across the globe. If you’re ready to be part of something meaningful, apply now.

Application closing date: 18th of August 2025

Read more about our Diversity & Inclusion Commitment
Posted 5 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job description:
We are seeking a highly experienced and strategic Senior Manager, Cloud Engineering to lead our Noida SRE and Cloud Engineering teams and drive the evolution of our infrastructure, CI/CD pipelines, and cloud operations. This role is ideal for a hands-on leader who thrives in a fast-paced environment, is passionate about automation, scalability, and reliability, and can collaborate and communicate effectively.

Key Responsibilities:

Leadership & Strategy
Lead and mentor DevOps teams, fostering a culture of collaboration, innovation, and continuous improvement.
Define and implement DevOps strategies aligned with business goals and engineering best practices.
Collaborate with software engineering, QA, and product teams to ensure seamless integration and deployment.

Infrastructure & Automation
Oversee the design, implementation, and maintenance of scalable cloud infrastructure (AWS).
Drive automation of infrastructure provisioning, configuration management, and deployment processes.
Ensure high availability, performance, and security of production systems.

CI/CD & Monitoring
Architect and maintain robust CI/CD pipelines to support rapid development and deployment cycles.
Implement monitoring, logging, and alerting systems to ensure system health and performance.
Manage incident response and root cause analysis for production issues.

Governance & Compliance
Ensure compliance with security policies, data protection regulations, and industry standards.
Develop and enforce operational best practices, including disaster recovery and business continuity planning.

Qualifications:
Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
8+ years of experience in DevOps, Site Reliability Engineering, or Infrastructure Engineering, with an understanding of best practices.
5+ years in a leadership or managerial role.
Expertise in AWS and infrastructure-as-code tools (Terraform, CloudFormation).
Strong experience with CI/CD tools (Jenkins, GitHub CI, Tekton) and container orchestration (Docker, Kubernetes).
Proficiency in scripting languages (Python, Bash, Go, PowerShell).
Excellent communication, problem-solving, and project management skills.
Problem-solving mindset with a focus on continuous improvement.

Preferred Qualifications:
Certifications in cloud technologies (AWS Certified DevOps Engineer, etc.).
Experience with security and compliance frameworks (SOC 2, ISO 27001).
Experience with Agile methodologies and familiarity with DevSecOps practices.
Experience with managing .NET environments and Kubernetes clusters.
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities:
Cloud Network Design: Design, implement, and manage network architectures for cloud environments, ensuring high availability, performance, and security across cloud platforms; GCP network architecture experience is mandatory.
Network Configuration & Management: Configure and manage cloud networking services such as Virtual Private Cloud (VPC), subnets, IP addressing, routing, VPNs, and DNS.
Connectivity and Integration: Develop and maintain connectivity solutions between on-premise networks and cloud environments, including hybrid cloud configurations and Direct Connect/ExpressRoute solutions.
Security & Compliance: Implement and enforce network security policies, including firewall rules, access control lists (ACLs), and VPNs, ensuring compliance with industry standards and best practices.
Network Monitoring & Troubleshooting: Continuously monitor cloud network performance, identify issues, and troubleshoot network-related problems to minimize downtime and ensure smooth operation.
Performance Optimization: Analyze network performance and recommend optimizations to reduce latency, improve bandwidth utilization, and enhance overall network efficiency in the cloud.
Collaboration & Documentation: Collaborate with cloud architects, DevOps teams, and other stakeholders to ensure network architecture aligns with business goals. Document network designs, configurations, and operational procedures.
Automation & Scripting: Leverage automation tools and scripting languages (e.g., Python, Bash, or Terraform) to automate network configuration, provisioning, and monitoring tasks.
Support & Maintenance: Provide ongoing support for cloud network infrastructure, including regular updates, patches, and configuration adjustments as needed.
Disaster Recovery & Continuity: Ensure that cloud network solutions are resilient and can recover quickly in the event of network failures or disasters, including implementing disaster recovery (DR) strategies for network infrastructure.
Must have recent hands-on experience in GCP cloud network architecture, with at least 5 years of experience.
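Subnet planning of the kind this role describes can be sketched with Python's standard ipaddress module. The CIDR ranges below are hypothetical examples, not a recommended VPC layout:

```python
import ipaddress

def carve_subnets(vpc_cidr: str, new_prefix: int, count: int):
    """Split a VPC CIDR into equal subnets, returning the first `count` of them."""
    network = ipaddress.ip_network(vpc_cidr)
    # subnets() yields all possible child networks at the requested prefix length.
    return [str(subnet) for _, subnet in zip(range(count), network.subnets(new_prefix=new_prefix))]

# Carve three /24 subnets out of a /16 VPC range, e.g. one per availability zone.
subnets = carve_subnets("10.0.0.0/16", 24, 3)
print(subnets)
```

The same arithmetic underlies subnet definitions whether they are written by hand in a cloud console or generated by Terraform.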
Posted 5 days ago
0 years
0 Lacs
India
Remote
DevOps Engineer – GCP Automation (4 yrs)
Senior DevOps Engineer – GCP Automation (3-Month Contract, Immediate Start)
Contract: Time & Material · Remote/India flexible

Mission: Build a fully automated GCP provisioning service that launches complete, secure customer environments in ≤ 10 minutes.

Core Duties
Terraform IaC: Modules for GKE, Cloud Storage, Cloud DNS/SSL, IAM; manage state & Git.
Automation: Cloud Build / Cloud Functions workflows; Bash/Python scripts; Helm-based Node.js deployments; auto-SSL.
Security: Least-privilege IAM, RBAC, audit logging, monitoring.
Integration: Webhook endpoint, status tracking, error handling; docs & runbooks.

Must-Have
Google Cloud certification (ACE minimum; DevOps/Architect preferred) – include ID.
3+ yrs GCP, expert Terraform, production GKE, Bash & Python, CI/CD (Cloud Build), strong IAM/security.
Self-starter able to deliver solo on tight deadlines.

Nice-to-Have
REST APIs, multi-tenant design, Node.js, Docker, Helm.

Deliverables
Month 1: Terraform modules, working prototype, basic security, webhook.
Month 2: Production-ready system (< 10 min provisioning), full docs, knowledge transfer.
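The status-tracking and error-handling duties above could be sketched as a small state machine over ordered provisioning steps. The step names and states here are illustrative, not from the contract spec:

```python
class ProvisioningRun:
    """Track an ordered provisioning workflow: each step is pending, done, or failed."""

    STEPS = ("terraform_apply", "dns_ssl", "helm_deploy", "smoke_test")

    def __init__(self):
        self.status = {step: "pending" for step in self.STEPS}

    def complete(self, step: str) -> None:
        # Steps must finish in order: refuse to mark a step done out of sequence.
        earlier = self.STEPS[: self.STEPS.index(step)]
        if any(self.status[s] != "done" for s in earlier):
            raise RuntimeError(f"cannot complete {step}: earlier steps unfinished")
        self.status[step] = "done"

    def fail(self, step: str, reason: str) -> None:
        self.status[step] = f"failed: {reason}"

    @property
    def overall(self) -> str:
        states = list(self.status.values())
        if any(s.startswith("failed") for s in states):
            return "failed"
        return "done" if all(s == "done" for s in states) else "in_progress"

run = ProvisioningRun()
run.complete("terraform_apply")
run.complete("dns_ssl")
print(run.overall)  # in_progress
```

A webhook handler would update such a record as each Cloud Build step reports back, giving the caller a single status to poll.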
Posted 5 days ago
3.0 years
0 Lacs
India
Remote
Frontend Engineer (Bangalore)
📍 Location: Bangalore, in-person only
💰 Salary: 20L – 40L annual + 📈 Equity (founding-engineer tracks available)
⌛ Experience: 3+ years in frontend (backend experience is a plus)
💻 Skills: TypeScript, React, TanStack, UI libraries (Tailwind, Shadcn), testing (integration and performance optimization), LLM frameworks (AI SDKs), GCP/AWS/Azure, WebSockets, RPCs

About the role
As a Frontend Engineer at Runable, you’ll play a key role in shaping the user-facing layer of our general automation platform. You’ll work closely with our backend and infra teams to build fast, intuitive, and resilient interfaces that abstract away system complexity and deliver seamless AI-powered automation to our users.

What You'll Do
LLM & Agent Services:
• Build intuitive interfaces to interact with multi-agent workflows using LangChain, LangGraph, OpenAI SDK, etc.
• Design frontend components that support real-time AI orchestration, multi-step flows, and streaming responses.
Frontend Development & UI Engineering:
• Develop rich, performant web apps using React, TypeScript, Tailwind, and component libraries like Shadcn.
• Integrate and support document viewers and editors for Excel, PDF, Markdown, and more.
• Build cross-platform experiences with React Native + Expo for mobile use cases.
Cloud & DevOps:
• Deploy and manage infrastructure on GCP, AWS, or Azure using Terraform.
• Author CI/CD pipelines for seamless delivery and rollback.
Experimental Innovation (15–20% time):
• Explore cutting-edge LLM fine-tuning, memory architectures, and new agent frameworks.

What We Are Looking For
3+ years of frontend engineering experience.
Proficiency in React and TypeScript, plus some backend and infra knowledge.
Experience integrating and building rich document views – including Excel, PDF, and Markdown, with editing support.
Exposure to mobile app development using React Native + Expo.
Hands-on experience with LLM frameworks (LangChain, OpenAI SDK, etc.) and multi-agent systems.
Familiarity with popular UI libraries like Tailwind and Shadcn, and state/data tools like TanStack Query.
Strong UI/UX sensibility with a deep understanding of user behavior, flow design, and intuitive interactions.
Expertise in networking, load balancers, and high-performance remote connections.
Familiarity with Terraform/OpenTofu, CI/CD, and cloud platforms (GCP/AWS/Azure).
Excited to work in person from our Bangalore office in a fast-paced, collaborative environment.

Job Task
A short challenge for candidates who love frontend work and enjoy trying new things:
https://runable.notion.site/frontend-engineer-task?source=copy_link
Posted 5 days ago
4.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Purpose of the Role
We’re looking for a Platform Engineer to lead the design and development of internal self-service workflows and automation for our internal developer platform.

This role will:
Build reusable workflows using Go, empowering developers to provision infrastructure, deploy applications, manage secrets, and operate at scale without needing to become Kubernetes or cloud experts.
Drive platform standardization and codification of best practices across cloud infrastructure, Kubernetes, and CI/CD.
Create developer-friendly APIs and experiences while maintaining a high bar for reliability, observability, and performance.
Design, develop, and maintain Go-based platform tooling and self-service automation that simplifies infrastructure provisioning, application deployment, and service management.
Write clean, testable code and workflows that integrate with our internal systems such as GitLab, ArgoCD, Port, AWS, and Kubernetes.
Partner with product engineering, SREs, and cloud teams to identify high-leverage platform improvements and enable adoption across brands.

Mandatory Skills
4-6 years of experience in a professional cloud computing role with Kubernetes, Docker, and infra-as-code.
A BA/BS in Computer Science or equivalent work experience.
Exposure to Cloud/DevOps/SRE/Platform Engineering roles.
Proficiency in Golang for backend automation and system tooling.
Experience operating in Kubernetes environments and building automation for multi-tenant workloads.
Deep experience with AWS (or an equivalent cloud provider), infrastructure as code (e.g., Terraform), and CI/CD systems like GitLab CI.
Strong understanding of containers, microservice architectures, and modern DevOps practices.
Familiarity with GitOps practices using tools like ArgoCD, Helm, and Kustomize.
Strong debugging and troubleshooting skills across distributed systems.
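The self-service-with-guardrails idea behind this role can be illustrated with a small request validator. The role itself calls for Go; the sketch below uses Python for brevity, and the service catalog, names, and limits are entirely hypothetical:

```python
# Hypothetical service catalog: what developers may self-provision, with limits.
CATALOG = {
    "postgres": {"max_storage_gb": 100},
    "redis": {"max_storage_gb": 10},
}

def validate_request(service: str, storage_gb: int) -> list:
    """Return a list of validation errors for a self-service provisioning request."""
    errors = []
    if service not in CATALOG:
        errors.append(f"unknown service: {service}")
        return errors
    limit = CATALOG[service]["max_storage_gb"]
    if storage_gb > limit:
        errors.append(f"{service}: requested {storage_gb} GB exceeds limit of {limit} GB")
    return errors

print(validate_request("postgres", 50))  # no errors: request is within limits
print(validate_request("redis", 64))     # rejected: exceeds the redis storage cap
```

A production platform would layer such checks behind an API or GitOps pull request, so developers get fast feedback without touching Terraform or Kubernetes directly.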
Posted 5 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Project Manager – IT Infrastructure & DevOps
Location: Chennai, India
Experience: 8+ Years
Employment Type: Full-time

Job Summary:
We are seeking an experienced and proactive Project Manager to lead IT infrastructure and DevOps projects. The ideal candidate will be responsible for planning, executing, and closing projects efficiently while managing cross-functional teams. Excellent communication and leadership skills are essential for this role.

Key Responsibilities:
Manage end-to-end IT infrastructure and DevOps projects, ensuring timely delivery within scope and budget.
Coordinate with internal teams, vendors, and stakeholders to define project goals, deliverables, and timelines.
Oversee infrastructure upgrades, cloud migrations, server provisioning, network operations, and system integrations.
Lead CI/CD pipeline implementation, automation, monitoring, and maintenance initiatives.
Identify and mitigate project risks and dependencies.
Ensure compliance with IT security policies and industry standards.
Maintain comprehensive project documentation and reporting.
Communicate project status, escalations, and updates effectively to stakeholders and senior leadership.

Required Skills & Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Minimum 8 years of experience in IT infrastructure and DevOps projects, with at least 3 years in a project management role.
Strong knowledge of cloud platforms (AWS, Azure, or GCP), networking, and systems administration.
Proven experience managing CI/CD and automation tools (e.g., Jenkins, GitLab, Terraform, Ansible).
Proficient in project management tools (e.g., JIRA, MS Project, Asana).
PMP / PRINCE2 / Agile certification is a plus.
Exceptional communication, leadership, and stakeholder management skills.
Ability to work under pressure and adapt to changing priorities.

Location Preference: Candidates based in or willing to relocate to Chennai preferred.
Posted 5 days ago
0 years
0 Lacs
India
On-site
Role Overview:
We are seeking a highly skilled Backend Developer with 5+ years of experience in backend development. The ideal candidate will have expertise in Java, Python, and Amazon Web Services (AWS), along with a strong understanding of system architecture, microservices, cloud computing, and best practices for scalable and maintainable applications. They will play a crucial role in designing and implementing complex backend systems while mentoring junior and mid-level developers.

Key Responsibilities:
Architect, develop, and maintain high-performance, scalable backend systems.
Design and implement microservices and distributed systems.
Optimize application performance, security, and scalability.
Develop and maintain robust APIs, including RESTful APIs.
Lead the integration of third-party services and cloud-based solutions.
Drive best practices for software engineering, testing, and DevOps automation.
Conduct code reviews, provide mentorship, and lead the backend development team.
Collaborate with cross-functional teams to define and implement new features.
Stay up to date with the latest industry trends, technologies, and best practices.

Requirements:
Strong expertise in Java and Python for backend development.
Proficiency in database management using MySQL and MongoDB.
Experience with Redis for caching and performance optimization.
Strong understanding of microservices architecture and system design.
Experience with containerization (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure).
Deep understanding of API development, authentication mechanisms (OAuth, JWT), and security best practices.
Expertise in CI/CD pipelines, DevOps automation, and infrastructure as code (Terraform, Ansible, etc.).
Strong problem-solving skills and the ability to optimize existing codebases.
Experience with agile methodologies, Git, and project management tools like Jira.

Desired Attributes:
Proven leadership and mentoring experience.
Ability to analyze and improve system architecture.
Strong communication and collaboration skills.
Passion for innovation and staying ahead of backend technology trends.
Ability to work in a fast-paced environment with tight deadlines.
Posted 5 days ago
55.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description
Must Have: Kubernetes, Terraform, Docker, Jenkins, pipeline automation, good experience in AWS.
Good to Have: Experience in CI/CD pipeline creation.
Location: Hyderabad
NP: Immediate joiners preferred
Education: B.Sc-IT/B.Tech/M.Tech/MCA (full time)

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Posted 5 days ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We’re looking for a Cloud Architect / Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment.

Key Responsibilities
Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs.
Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions.
Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB.
Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA).
Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config.
Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements.
Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI.

Requirements
7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles.
Strong hands-on expertise with the AWS ecosystem and tools listed above.
Proficiency in scripting (e.g., Python, Bash) and infrastructure automation.
Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate.
Familiarity with data engineering and GenAI workflows is a plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred.
Posted 5 days ago