7.0 - 12.0 years
13 - 18 Lacs
Bengaluru
Work from Office
We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs.

Key Responsibilities
- Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS, plus Kafka and Confluent, all within a larger overarching programme ecosystem
- Architect data processing applications using Python, Kafka, Confluent Cloud, and AWS (a producer sketch follows this listing)
- Ensure data security and compliance throughout the architecture
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions
- Optimize data flows for performance, cost-efficiency, and scalability
- Implement data governance and quality control measures
- Ensure delivery of CI, CD, and IaC for NTT tooling, and as templates for downstream teams
- Provide technical leadership and mentorship to development teams and lead engineers
- Stay current with emerging technologies and industry trends

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data architecture and engineering
- Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS
- Strong experience with Confluent and Kafka
- Solid understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders
- Knowledge of Apache Airflow for data orchestration

Preferred Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Deep experience with CI/CD pipelines
- Strong understanding of the JVM language family
- Understanding of GDPR and the correct handling of PII
- Expertise with technical interface design
- Use of Docker

Responsibilities
- Design and implement scalable data architectures using AWS services, Confluent, and Kafka
- Develop data ingestion, processing, and storage solutions using Python, AWS Lambda, Confluent, and Kafka
- Ensure data security and implement best practices using tools like Snyk
- Optimize data pipelines for performance and cost-efficiency
- Collaborate with data scientists and analysts to enable efficient data access and analysis
- Implement data governance policies and procedures
- Provide technical guidance and mentorship to junior team members
- Evaluate and recommend new technologies to improve data architecture
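As a point of reference for the Kafka-on-AWS pattern this role describes, here is a minimal sketch of a Confluent Cloud producer in Python using the confluent-kafka client. The bootstrap server, API credentials, topic, and payload are placeholders, not details from the listing.

```python
# Minimal sketch: publishing events to Confluent Cloud with confluent-kafka.
# Broker address, credentials, and topic are illustrative placeholders.
import json
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "API_KEY",      # placeholder
    "sasl.password": "API_SECRET",   # placeholder
}
producer = Producer(conf)

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

event = {"order_id": 42, "status": "CREATED"}
producer.produce("orders", key="42", value=json.dumps(event), callback=delivery_report)
producer.flush()  # block until all queued messages are delivered
```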
Posted 6 days ago
5.0 - 10.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Req ID: 306669

We are currently seeking a Lead Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Position Overview
We are seeking a highly skilled and experienced Lead Data/Product Engineer to join our dynamic team. The ideal candidate will have a strong background in streaming services and AWS cloud technology, leading teams and directing engineering workloads. This is an opportunity to work on the core systems supporting multiple secondary teams, so a history in software engineering and interface design would be an advantage.

Key Responsibilities
Lead and direct a small team of engineers engaged in:
- Engineering reusable assets for the later build of data products
- Building foundational integrations with Kafka, Confluent Cloud, and AWS (see the consumer sketch after this listing)
- Integrating with a large number of upstream and downstream technologies
- Providing best-in-class documentation for downstream teams to develop, test, and run data products built using our tools
- Testing our tooling, and providing a framework for downstream teams to test their utilisation of our products
- Helping to deliver CI, CD, and IaC for both our own tooling and as templates for downstream teams

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in data engineering
- 3+ years of experience with real-time (or near-real-time) streaming systems
- 2+ years of experience leading a team of data engineers
- A willingness to independently learn a high number of new technologies and to lead a team in learning new technologies
- Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway
- Strong experience with Python and Kafka
- Excellent understanding of data streaming architectures and best practices
- Strong problem-solving skills and ability to think critically
- Excellent communication skills to convey complex technical concepts both directly and through documentation
- Strong use of version control and proven ability to govern a team in the best-practice use of version control
- Strong understanding of Agile and proven ability to govern a team in the best-practice use of Agile methodologies

Preferred Skills and Qualifications
- An understanding of cloud networking patterns and practices
- Experience working on a library or other long-term product
- Knowledge of the Flink ecosystem
- Experience with Terraform
- Experience with CI pipelines
- Ability to code in a JVM language
- Understanding of GDPR and the correct handling of PII
- Knowledge of technical interface design
- Basic use of Docker
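For the consumer side of the foundational integrations this role describes, a minimal sketch, assuming a Confluent Cloud cluster with placeholder credentials; the process() hook stands in for real transformation logic.

```python
# Minimal sketch: a Confluent Cloud consumer with manual offset commits,
# the pattern a reusable ingestion asset might wrap for downstream teams.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "API_KEY",      # placeholder
    "sasl.password": "API_SECRET",   # placeholder
    "group.id": "data-products",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,     # commit only after successful processing
})
consumer.subscribe(["orders"])

def process(payload: bytes):
    # Placeholder for real transformation/loading logic.
    print(payload.decode("utf-8"))

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        process(msg.value())
        consumer.commit(asynchronous=False)  # at-least-once delivery
finally:
    consumer.close()
```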
Posted 6 days ago
10.0 - 15.0 years
40 - 60 Lacs
Bengaluru
Hybrid
How will you make a difference?
We are seeking a collaborative and highly motivated Principal AI Architect to lead our AI team, drive innovation, and enhance customer experiences through impactful artificial intelligence solutions. As a member of the Wabtec IT Data & Analytics (DnA) Team, you will be responsible for:
- Providing strategic leadership and direction in the development, articulation, and implementation of a comprehensive AI/ML/data transformation roadmap for Wabtec, aligned with the overall business objectives.
- Working with other AI champions in Wabtec to evaluate AI tools, technologies, and frameworks, champion adoption in different projects, and demonstrate value for business and customers.
- Actively collaborating with various stakeholders to align AI initiatives in cloud computing environments (e.g., AWS, Azure, OCI) with business goals.
- Providing technical oversight on AI projects to drive performance output to meet KPI metrics in productivity and quality.
- Serving as contact and interface with external partners and industry leaders for collaborations in AI/LLM/generative AI.
- Architecting and deploying scalable AI solutions that integrate seamlessly with existing business and IT infrastructure.
- Designing and architecting AI as a service, to enable collaboration between multiple teams in delivering AI solutions.
- Optimizing state-of-the-art algorithms in distributed environments.
- Creating clear and concise communications/recommendations for senior leadership review related to AI strategic business plans and initiatives.
- Staying abreast of advancements in AI, machine learning, and data science to continuously innovate, improve solutions, and bring external best practices for adoption in Wabtec.
- Implementing best practices for AI design, testing, deployment, and maintenance.
- Diving deep into complex business problems and immersing yourself in Wabtec data and outcomes.
- Mentoring a team of data scientists, fostering growth and performance.
- Developing AI governance frameworks with ethical AI practices, ensuring compliance with data protection regulations and responsible AI development.

What do we want to know about you?
The minimum qualifications for this role include:
- Ph.D., M.S., or Bachelor's degree in Statistics, Machine Learning, Operations Research, Computer Science, Economics, or a related quantitative field
- 5+ years of experience developing and supporting AI products in a production environment, with 12+ years of proven relevant experience
- 8+ years of experience managing and leading data science teams and initiatives at the enterprise level
- Profound knowledge of modern AI and generative AI technologies
- Extensive experience in designing, implementing, and maintaining AI systems
- End-to-end expertise in the AI/ML project lifecycle, from conception to large-scale production deployment
- Proven track record as an architect with cloud computing environments (e.g., AWS, Azure, OCI) and distributed computing platforms, including containerized deployments using technologies such as Amazon EKS (Elastic Kubernetes Service)
- Hands-on expertise in Python, the AWS AI tech stack (Bedrock services, foundation models, Textract, Kendra, Knowledge Bases, Guardrails, Agents, etc.), MLflow, image processing, NLP/deep learning, PyTorch/TensorFlow, and LLM integration with applications (a Bedrock invocation sketch follows this listing)
Preferred qualifications for this role include:
- Proven track record in building and leading high-performance AI teams, with expertise in hiring, coaching, and developing engineering leaders, data scientists, and ML engineers
- Demonstrated ability to align team vision with strategic business goals, driving impactful outcomes across complex product suites for diverse, global customers
- Strong stakeholder management skills, adept at influencing and unifying cross-functional teams to achieve successful project outcomes
- Extensive hands-on experience with enterprise-level Python development, the PyData stack, big data technologies, and machine learning model deployment at scale
- Proficiency in cutting-edge AI technologies, including generative AI, open-source frameworks, and third-party solutions (e.g., OpenAI)
- Mastery of data science infrastructure and tools, including code versioning (Git), containerization (Docker), and modern AI/ML tech stacks
- Preferred: AWS with AWS AI services

We would love it if you had:
- Fluency with experimental design and the ability to identify, compute, and validate the appropriate metrics to measure success
- Demonstrated success working in a highly collaborative technical environment (e.g., code sharing, using revision control, contributing to team discussions/workshops, and collaborative documentation)
- Passion and aptitude for turning complex business problems into concrete hypotheses that can be answered through rigorous data analysis and experimentation
- Deep expertise in analytical storytelling and stellar communication skills
- Demonstrated success mentoring junior teammates and helping develop peers

What will your typical day look like?
- Stakeholder Engagement: Collaborate with our internal stakeholders to understand their needs, update them on specific project progress, and align our AI initiatives with business goals.
- Hands-On AI Work: Use generative AI and machine learning techniques to build and fine-tune LLM models, work on image processing, NLP, and model integration with new and existing applications, and improve model performance and accuracy along with cost-effective solutions.
- Support AI Team: Guide and mentor the AI team, resolving technical issues and providing suggestions.
- Reporting & Strategy: Generate and present reports to senior leadership, develop strategic insights, stay updated on industry trends, and build the AI roadmap for Wabtec in discussion with senior leadership.
- Training, Development & Compliance: Organize training sessions, manage resources efficiently, and ensure data accuracy, security, and compliance with best practices.
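As a point of reference for the AWS AI tech stack named above, a minimal sketch of invoking a foundation model through Amazon Bedrock with boto3. The model ID and request body follow the Anthropic-on-Bedrock message format and are illustrative; payload shapes differ per foundation model.

```python
# Minimal sketch: invoking an Anthropic model via Amazon Bedrock with boto3.
# Model ID, region, and request payload are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our Q3 fleet telemetry trends."}],
}
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```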
Posted 1 week ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
About The Role
We are seeking a skilled Infrastructure Support Engineer to join our team. The ideal candidate will have a strong background in managing and supporting on-premises infrastructure, specifically ESXi and VxRail systems, as well as experience with AWS cloud environments. This role requires a proactive approach to system monitoring, troubleshooting, and maintenance, ensuring optimal performance and reliability of our infrastructure.

Key Responsibilities:
On-Premises Support
- Manage and support VMware ESXi environments using vSphere, including installation, configuration, and troubleshooting.
- Install VMware ESXi hypervisors on physical servers.
- Configure networking, storage, and resource pools for optimal performance.
- Set up and manage vCenter Server for centralized management of ESXi hosts.
- Diagnose and resolve issues related to ESXi host performance, connectivity, and VM operation.
- Use VMware logs and diagnostic tools to identify problems and implement corrective actions.
- Perform regular health checks and maintenance to ensure optimal performance.
- Set up monitoring tools to track performance metrics of VMs and hosts, including CPU, memory, disk I/O, and network usage (see the CloudWatch sketch after this listing for the AWS side).
- Identify bottlenecks and inefficiencies in the infrastructure and take corrective action.
- Generate reports on system performance for management review.
- Design and implement backup strategies using VMware vSphere Data Protection or third-party solutions (e.g., Veeam, Commvault).
- Schedule regular backups of VMs and critical data to ensure data integrity and recoverability.
- Test backup and restoration processes periodically to verify effectiveness.
- Participate in L1 support on rotation.

Primary Skills
AWS Support:
- Assist in the design, deployment, and management of AWS infrastructure.
- Monitor AWS resources, ensuring performance, cost-efficiency, and compliance with best practices.
- Troubleshoot issues related to AWS services (EC2, S3, RDS, etc.) and provide solutions.
- Collaborate with development teams to support application deployments in AWS environments.

Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 5+ years of experience in infrastructure support, specializing in VMware (ESXi), vSphere, and VxRail.
- Proven expertise in Linux administration.
- Proficient in memory, disk, and CPU monitoring and management.
- In-depth understanding of SAN, NFS, and NAS.
- Thorough knowledge of AWS services (Security/IAM) and architecture.
- Skilled in scripting and automation tools (PowerShell, Python, AWS CLI).
- Hands-on experience with containerization concepts; Kubernetes and AWS EKS experience required.
- Familiarity with networking concepts, security protocols, and best practices.
- Windows administration preferred.
- Red Hat VM / Nutanix virtualization preferred.
- Strong problem-solving abilities and ability to work under pressure.
- Excellent communication skills and a collaborative mindset.
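For the monitoring duties above, a minimal sketch of pulling an EC2 CPU metric with boto3; the instance ID and region are placeholders.

```python
# Minimal sketch: querying CPU utilisation for one EC2 instance from CloudWatch.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,            # 5-minute datapoints
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```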
Posted 1 week ago
5.0 - 8.0 years
7 - 11 Lacs
Pune
Work from Office
About The Role

Role Purpose
The purpose of this role is to provide significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup, etc.) as well as managing its day-to-day operations.

Do
- Provide adequate support in architecture planning, migration & installation for new projects in own tower (platform/database/middleware/backup)
- Lead the structural/architectural design of a platform/middleware/database/backup etc. according to various system requirements to ensure a highly scalable and extensible solution
- Conduct technology capacity planning by reviewing the current and future requirements
- Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable
- Strategize & implement disaster recovery plans and create and implement backup and recovery plans
- Manage the day-to-day operations of the tower by troubleshooting any issues, conducting root cause analysis (RCA) and developing fixes to avoid similar issues
- Plan for and manage upgrades, migration, maintenance, backup, installation and configuration functions for own tower
- Review the technical performance of own tower and deploy ways to improve efficiency, fine-tune performance and reduce performance challenges
- Develop a shift roster for the team to ensure no disruption in the tower
- Create and update SOPs, Data Responsibility Matrices, operations manuals, daily test plans, data architecture guidance etc.
- Provide weekly status reports to the client leadership team and internal stakeholders on database activities w.r.t. progress, updates, status, and next steps
- Leverage technology to develop a Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness

Team Management
- Resourcing: Forecast talent requirements as per the current and future business needs; hire adequate and right resources for the team; train direct reportees to make right recruitment and selection decisions
- Talent Management: Ensure 100% compliance to Wipro's standards of adequate onboarding and training for team members to enhance capability & effectiveness; build an internal talent pool of HiPos and ensure their career progression within the organization; promote diversity in leadership positions
- Performance Management: Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports; ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs to their and their levels below
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team; proactively challenge the team with larger and enriching projects/initiatives for the organization or team; exercise employee recognition and appreciation

Deliver
No. | Performance Parameter | Measure
1 | Operations of the tower | SLA adherence; knowledge management; CSAT/customer experience; identification of risk issues and mitigation plans
2 | New projects | Timely delivery; avoid unauthorised changes; no formal escalations

Mandatory Skills: AWS EKS Admin.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions.
To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 1 week ago
5.0 - 10.0 years
9 - 13 Lacs
Bharuch
Work from Office
About The Role

Role Purpose

Required Skills:
- 5+ years of experience in system administration, application development, infrastructure development or related areas
- 5+ years of experience with programming in languages like JavaScript, Python, PHP, Go, Java or Ruby
- 3+ years of experience reading, understanding and writing code in the same
- 3+ years of mastery of infrastructure automation technologies (like Terraform, CodeDeploy, Puppet, Ansible, Chef)
- 3+ years of expertise in container/container-fleet-orchestration technologies (like Kubernetes, OpenShift, AKS, EKS, Docker, Vagrant, etcd, ZooKeeper)
- 5+ years of cloud and container-native Linux administration/build/management skills

Key Responsibilities:
- Hands-on design, analysis, development and troubleshooting of highly distributed large-scale production systems and event-driven, cloud-based services
- Primarily Linux administration, managing a fleet of Linux and Windows VMs as part of the application solutions
- Involved in pull requests for site reliability goals
- Advocate IaC (Infrastructure as Code) and CaC (Configuration as Code) practices within Honeywell HCE
- Ownership of reliability, uptime, system security, cost, operations, capacity and performance analysis
- Monitor and report on service level objectives for given application services; work with the business, technology teams and product owners to establish key service level indicators (a small SLI sketch follows this listing)
- Ensure the repeatability, traceability, and transparency of our infrastructure automation
- Support on-call rotations for operational duties that have not been addressed with automation
- Support healthy software development practices, including complying with the chosen software development methodology (Agile, or alternatives), building standards for code reviews, work packaging, etc.
- Create and maintain monitoring technologies and processes that improve the visibility of our applications' performance and business metrics and keep operational workload in check
- Partner with security engineers and develop plans and automation to aggressively and safely respond to new risks and vulnerabilities
- Develop, communicate, collaborate, and monitor standard processes to promote the long-term health and sustainability of operational development tasks
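For the SLO reporting responsibility above, a minimal sketch that derives a 24-hour availability SLI for an application load balancer from CloudWatch metrics; the load balancer dimension value and the 99.9% target are assumptions, not details from the listing.

```python
# Minimal sketch: computing an availability SLI for an ALB from CloudWatch sums.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
window = {"StartTime": datetime.utcnow() - timedelta(days=1),
          "EndTime": datetime.utcnow(), "Period": 86400, "Statistics": ["Sum"]}

def metric_sum(name):
    # Sum one ALB metric over the 24h window; dimension value is a placeholder.
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApplicationELB", MetricName=name,
        Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
        **window)
    return sum(p["Sum"] for p in resp["Datapoints"])

total = metric_sum("RequestCount")
errors = metric_sum("HTTPCode_Target_5XX_Count")
sli = 1 - errors / total if total else 1.0
print(f"24h availability SLI: {sli:.4%} (target 99.9%)")
```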
Posted 1 week ago
8.0 - 12.0 years
10 - 14 Lacs
Gurugram
Work from Office
About The Role: AWS Cloud Engineer

Required Skills and Qualifications:
- 4-7 years of hands-on experience with AWS services, including EC2, S3, Lambda, ECS, EKS, RDS/DynamoDB, and API Gateway.
- Strong working knowledge of Python and JavaScript.
- Strong experience with Terraform for infrastructure as code.
- Expertise in defining and managing IAM roles, policies, and configurations.
- Experience with networking, security, and monitoring within AWS environments.
- Experience with containerization technologies such as Docker and orchestration tools like Kubernetes (EKS).
- Strong analytical, troubleshooting, and problem-solving skills.
- Experience with AI/ML technologies and services like Textract is preferred.
- AWS certifications (AWS Developer, Machine Learning - Specialty) are a plus.

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback
2 | Self-Management | Productivity, efficiency, absenteeism, training hours, no. of technical trainings completed
Posted 1 week ago
0.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Ready to build the future with AI?
At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant, Automation Test Lead!

Responsibilities
- Understand the need of the requirement beyond its face value, and design a proper machine-executable automation solution using Python scripts.
- You will receive business rules or automation test scenarios from the business or QA team to automate using Python and SQL; you will not be responsible for writing test cases.
- Implement reusable solutions following best practice, and deliver the automation results on time.
- Maintain, troubleshoot, and optimise existing solutions.
- Collaborate with various disciplinary teams to align the automation solution to the broader engineering community.
- Documentation.
- Lead, coordinate and guide the ETL manual and automation testers.
You may get a chance to learn new technologies as well, on cloud.

Tech Stack (as of now)
1. Redshift
2. Aurora (PostgreSQL)
3. S3 object storage
4. EKS / ECR
5. SQS/SNS
6. Roles/Policies
7. Argo
8. Robot Framework
9. Nested JSON

Qualifications we seek in you!
Minimum Qualifications
1. Python scripting: candidates should be strong in Python programming design, Pandas, processes, and HTTP-style request protocols (a pandas rule-check sketch follows this listing).
2. SQL technologies (ideally PostgreSQL): OLTP/OLAP, joins, grouping, aggregation, window functions, etc.
3. Windows / Linux operating systems: basic command knowledge.
4. Git usage: understand version control systems and concepts like branch, pull request, commit, rebase, and merge.
5. SQL optimization knowledge is a plus.
6. Good understanding of and experience in data-structure-related work.

Preferred Qualifications (good to have, as the Python code is deployed using these frameworks)
1. Docker is a plus: understanding of image and container concepts.
2. Kubernetes is a plus: understanding of the concepts and theory of k8s, especially pods, environments, etc.
3. Argo Workflows / Airflow is a plus.
4. Robot Framework is a plus.
5. Kafka is a plus: understand the concepts of Kafka and event-driven methods.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
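For the Python/Pandas rule automation the qualifications above describe, a minimal sketch of a business rule expressed as a pandas check; the column names and the rule itself are illustrative.

```python
# Minimal sketch: expressing a business rule as a pandas check, the pattern a
# Python automation test might apply before comparing source and target data.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [120.0, -5.0, 310.0],
    "status": ["SHIPPED", "SHIPPED", "CANCELLED"],
})

# Illustrative rule: shipped orders must have a positive amount.
violations = orders[(orders["status"] == "SHIPPED") & (orders["amount"] <= 0)]

assert violations.empty, f"{len(violations)} rows violate the amount rule:\n{violations}"
```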
Posted 1 week ago
6.0 - 10.0 years
13 - 17 Lacs
Mumbai, Pune
Work from Office
- Design containerized & cloud-native microservices architecture
- Plan & deploy modern application platforms & cloud-native platforms
- Good understanding of Agile process & methodology
- Plan & implement solutions & best practices for process automation, security, alerting & monitoring, and availability (a pod-health sketch follows this listing)
- Good understanding of infrastructure-as-code deployments
- Plan & design CI/CD pipelines across multiple environments
- Support and work alongside a cross-functional engineering team on the latest technologies
- Iterate on best practices to increase the quality & velocity of deployments
- Sustain and improve the process of knowledge sharing throughout the engineering team
- Keep updated on modern technologies & trends, and advocate the benefits
- Good team management skills
- Ability to drive goals/milestones, while valuing & maintaining a strong attention to detail
- Excellent judgement, analytical & problem-solving skills
- Excellent communication skills
- Experience maintaining and deploying highly available, fault-tolerant systems at scale
- Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS etc.)
- Version control system experience (e.g. Git, SVN)
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Experience with configuration management tools (e.g. Ansible, Chef)
- Experience with infrastructure-as-code (e.g. Terraform, CloudFormation)
- Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda)
- Container registry solutions (Harbor, JFrog, Quay etc.)
- Operational experience (e.g. HA/backups)
- NoSQL experience (e.g. Cassandra, MongoDB, Redis)
- Good understanding of Kubernetes networking & security best practices
- Monitoring tools like Datadog, or open-source tools like Prometheus, Nagios
- Load balancer knowledge (AVI Networks, NGINX)
Location: Pune / Mumbai [Work from Office]
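For the alerting and availability work above, a minimal sketch using the official kubernetes Python client to flag pods outside a healthy phase; it assumes a reachable kubeconfig context.

```python
# Minimal sketch: listing pods that are not Running/Succeeded with the official
# kubernetes Python client; a starting point for cluster health alerting.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```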
Posted 1 week ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
As a DevOps engineer, you will share your expertise in implementing and maintaining automated build and deployment pipelines, and in optimizing build times and resource usage. You will contribute to CI/CD methodologies and Git branching strategies.

You have:
- Graduate or postgraduate degree in Engineering with 4+ years of experience in DevOps and CI/CD pipelines.
- Experience in Docker, Kubernetes (EKS), OpenShift.
- Software development experience using Python / Groovy / Shell.
- Experience in designing and implementing CI/CD pipelines.
- Experience working with Git technology and understanding of Git branching strategies.

It would be nice if you also have:
- Knowledge of AI/ML algorithms.
- Knowledge of Yocto, Jenkins, Gerrit, distcc and Zuul.

You will leverage experience in Yocto, Jenkins, Gerrit, and other build tools to streamline and optimize the build process. You will proactively monitor build pipelines, investigate failures, and implement solutions to improve reliability and efficiency. You will utilize AI/ML algorithms to automate and optimize data-driven pipelines, improving data processing and analysis. You will work closely with the team to understand their needs and contribute to a collaborative and efficient work environment. You will actively participate in knowledge-sharing sessions and contribute to the team's overall understanding of best practices and innovative solutions. You will embrace a culture of continuous improvement, constantly seeking ways to optimize processes and enhance the overall effectiveness of the team.
Posted 1 week ago
7.0 - 12.0 years
20 - 35 Lacs
Pune
Remote
8+ yrs of exp in SRE or related roles. Design, implement, and maintain scalable, reliable infrastructure on AWS. Utilize Dynatrace for monitoring, performance tuning, and troubleshooting of applications and services. AWS ecosystem: EKS, EC2, DynamoDB, Lambda.
Posted 1 week ago
4.0 - 8.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Experience in modernizing applications to container-based platforms using EKS, ECS, Fargate. Proven experience using DevOps tools during modernization. Solid experience with NoSQL databases. Should have used an orchestration engine like Kubernetes or Mesos. Java 8, Spring Boot, SQL, Postgres DB and AWS. Secondary skills: React, Redux, JavaScript. Knowledge of AWS deployment services (AWS Elastic Beanstalk, AWS tools & SDK, AWS Cloud9, AWS CodeStar, AWS Command Line Interface, etc.) and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch and AWS SNS.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Creative problem-solving skills and superb communication skills.
- Should have worked on at least 3 engagements modernizing client applications to container-based solutions.
- Should be an expert in one of the programming languages like Java, .NET, Node.js, Python, Ruby, Angular.js.

Preferred technical and professional experience
- Experience in distributed/scalable systems
- Knowledge of standard tools for optimizing and testing code
- Knowledge/experience of the Development/Build/Deploy/Test life cycle
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Mumbai
Work from Office
Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS etc.). Version control system experience (e.g. Git, SVN). Experience implementing CI/CD (e.g. Jenkins). Experience with configuration management tools (e.g. Ansible, Chef). Container registry solutions (Harbor, JFrog, Quay etc.). Good understanding of Kubernetes networking & security best practices. Monitoring tools like Datadog, or open-source tools like Prometheus, Nagios, ELK. Mandatory skills: hands-on experience with Kubernetes and Kubernetes networking.
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
- 3+ years of working experience in data engineering.
- 'Hands-on keyboard' AWS implementation experience across a broad range of AWS services.
- Must have in-depth AWS development experience (containerization - Docker, Amazon EKS, Lambda, EC2, S3, Amazon DocumentDB, PostgreSQL).
- Strong knowledge of DevOps and CI/CD pipelines (GitHub, Jenkins, Artifactory).
- Scripting capability and the ability to develop AWS environments as code.
- Hands-on AWS experience with at least 1 implementation (preferably in an enterprise-scale environment).
- Experience with core AWS platform architecture, including areas such as: Organizations, account design, VPC, subnet, segmentation strategies; backup and disaster recovery approach and design; environment and application automation; CloudFormation and third-party automation approach/strategy; network connectivity, Direct Connect and VPN; AWS cost management and optimization.
- Skilled experience in Python libraries (NumPy, Pandas DataFrames).
Posted 1 week ago
8.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
As an L2 Cloud Engineer at Acqueon you will need to:
- Ensure the highest uptime for customers in our SaaS environment.
- Provision customer tenants & manage the SaaS platform, and memos to the staging and production environments.
- Infrastructure Management: Design, deploy, and maintain secure and scalable AWS cloud infrastructure using services like EC2, S3, RDS, Lambda, and CloudFormation.
- Monitoring & Incident Response: Set up monitoring solutions (e.g., CloudWatch, Grafana) to detect, respond to, and resolve issues quickly, ensuring uptime and reliability.
- Cost Optimization: Continuously monitor cloud usage and implement cost-saving strategies such as Reserved Instances, Spot Instances, and resource rightsizing.
- Backup & Recovery: Implement robust backup and disaster recovery solutions using AWS tools like AWS Backup, S3, and RDS snapshots (a snapshot sketch follows this listing).
- Security Compliance: Configure security best practices, including IAM policies, security groups, and encryption, while adhering to organizational compliance standards.
- Infrastructure as Code (IaC): Use Terraform, CloudFormation, or AWS CDK to provision, update, and manage infrastructure in a consistent and repeatable manner.
- Automation & Configuration Management: Automate manual processes and system configurations using Ansible, Python, or shell scripting.
- Containerization & Orchestration: Manage containerized applications using Docker and Kubernetes (EKS) for scaling and efficient deployment.

Skills & Qualifications:
- Experience: 5+ years of experience in CloudOps roles with a strong focus on AWS.
- Proficient in AWS services, including EC2, S3, RDS, Lambda, IAM, CloudFront, and VPC.
- Hands-on experience with Terraform, CloudFormation, or other IaC tools.
- Strong knowledge of CI/CD pipelines (e.g., AWS CodePipeline, Jenkins, GitHub Actions).
- Experience with container technologies like Docker and orchestration tools like Kubernetes (EKS).
- Scripting knowledge (e.g., Python, Bash, PowerShell) for automation and tooling.
- Monitoring & Logging: Experience with monitoring tools like AWS CloudWatch, the ELK stack, Prometheus, or Grafana.
- Security: Strong understanding of cloud security principles, including IAM, encryption, and AWS security tools (e.g., AWS WAF, GuardDuty).
- Collaboration Tools: Familiarity with tools like Git, Jira, and Confluence.
- Good knowledge of Windows Server and Linux, and able to troubleshoot critical issues.
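For the backup and recovery duties above, a minimal sketch that takes a manual RDS snapshot with boto3 and waits for it to become available; the instance identifier, region, and naming scheme are placeholders.

```python
# Minimal sketch: manual RDS snapshot plus a waiter, one building block of a
# backup-and-recovery runbook.
from datetime import datetime
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

stamp = datetime.utcnow().strftime("%Y%m%d-%H%M")
snapshot_id = f"tenant-db-{stamp}"            # placeholder naming scheme
rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier="tenant-db",          # placeholder instance
)

# Poll until the snapshot is available before relying on it for recovery.
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier=snapshot_id)
print("snapshot complete:", snapshot_id)
```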
Posted 1 week ago
5.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Design, implement, and maintain backend microservices and integrations focusing on scalability, security, and performance. Collaborate with engineers and consultants to integrate SaaS platforms (CRM, ERP) with in-house applications. Assist in architecture and code reviews, promoting best practices. Create and maintain technical documentation (e.g., architecture, ER/UML diagrams). Work with cross-functional teams.

Requirements
- 6+ years in backend or full-stack development, focusing on large-scale, enterprise-grade applications.
- Expertise in TypeScript, JavaScript (Node.js), React, and cloud platforms (AWS).
- Experience with serverless, distributed architectures, data modeling (relational and NoSQL), containerization (EKS), and CI/CD.
- Knowledge of SaaS integrations and iPaaS solutions is a plus.
Posted 1 week ago
6.0 - 12.0 years
11 - 16 Lacs
Noida, Bhubaneswar, Pune
Work from Office
6-12 years of professional experience in DevOps with a focus on AWS cloud technologies. Certification in any cloud provider is a plus. Experience with EC2, VPC, S3, RDS, EBS, IAM, Lambda, CDN, EKS, ELB, ALB, CloudFormation. Proficiency in scripting languages such as Python, shell, or Bash, with the ability to write clean, maintainable, and efficient code. Experience with CI/CD tools, preferably GitLab CI/CD, GitHub Actions, or Jenkins, and knowledge of best practices in building and deploying applications using CI/CD pipelines. Solid understanding of infrastructure automation tools, such as Ansible, Terraform, or similar, and hands-on experience in deploying and managing infrastructure as code. Familiarity with containerization technologies and orchestration frameworks, such as Docker and Kubernetes, for efficient application deployment and scaling.
Posted 1 week ago
8.0 - 13.0 years
9 - 14 Lacs
Bengaluru
Work from Office
8+ years of combined experience across backend and data platform engineering roles. Worked on large-scale distributed systems. 5+ years of experience building data platforms with (one of) Apache Spark, Flink, or similar frameworks. 7+ years of experience programming with Java. Experience building large-scale data/event pipelines. Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB. Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache and other AWS services.
Posted 1 week ago
5.0 - 8.0 years
12 - 19 Lacs
Pune
Remote
5+ yrs of exp as a Performance Tester on AWS services. Exp with AWS services: EKS, Lambda, EC2, RDS. Exp in SRE (preferred). Exp with performance testing tools: JMeter, Gatling, LoadRunner. Exp with CI/CD tools (Jenkins, GitLab CI) & Agile.
Posted 1 week ago
8.0 - 13.0 years
25 - 40 Lacs
Pune
Remote
10+ yrs of exp in S/W dev with a focus on AWS solutions architecture. Exp in architecting microservices-based applications using EKS. Design, develop, and implement microservices apps on AWS using Java. AWS Certified Solutions Architect - must.
Posted 1 week ago
8.0 - 10.0 years
12 - 14 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
At Amgen, we believe that innovation can and should be happening across the entire company. Part of the Artificial Intelligence & Data function of the Amgen Technology and Medical Organizations (ATMOS), the AI & Data Innovation Lab (the Lab) is a center for exploration and innovation, focused on integrating and accelerating new technologies and methods that deliver measurable value and competitive advantage. We've built algorithms that predict bone fractures in patients who haven't even been diagnosed with osteoporosis yet. We've built software to help us select clinical trial sites so we can get medicines to patients faster. We've built AI capabilities to standardize and accelerate the authoring of regulatory documents so we can shorten the drug approval cycle. And that's just a part of the beginning. Join us!

We are seeking a Senior DevOps Software Engineer to join the Lab's software engineering practice. This role is integral to developing top-tier talent, setting engineering best practices, and evangelizing full-stack development capabilities across the organization. The Senior DevOps Software Engineer will design and implement deployment strategies for AI systems using the AWS stack, ensuring high availability, performance, and scalability of applications.

Roles & Responsibilities:
- Design and implement deployment strategies using the AWS stack, including EKS, ECS, Lambda, SageMaker, and DynamoDB.
- Configure and manage CI/CD pipelines in GitLab to streamline the deployment process.
- Develop, deploy, and manage scalable applications on AWS, ensuring they meet high standards for availability and performance.
- Implement infrastructure-as-code (IaC) to provision and manage cloud resources consistently and reproducibly.
- Collaborate with AI product design and development teams to ensure seamless integration of AI models into the infrastructure.
- Monitor and optimize the performance of deployed AI systems, addressing any issues related to scaling, availability, and performance.
- Lead and develop standards, processes, and best practices for the team across the AI system deployment lifecycle.
- Stay updated on emerging technologies and best practices in AI infrastructure and AWS services to continuously improve deployment strategies.
- Familiarity with AI concepts such as traditional AI, generative AI, and agentic AI, with the ability to learn and adopt new skills quickly.

Functional Skills:
- Deep expertise in designing and maintaining CI/CD pipelines, enabling software engineering best practices and the overall software product development lifecycle.
- Ability to implement automated testing, build, deployment, and rollback strategies.
- Advanced proficiency managing and deploying infrastructure with the AWS cloud platform, including cost planning, tracking and optimization.
- Proficiency with backend languages and frameworks (Python; FastAPI and Flask preferred - a FastAPI sketch follows this listing).
- Experience with databases (Postgres/DynamoDB).
- Experience with microservices architecture and containerization (Docker, Kubernetes).

Good-to-Have Skills:
- Familiarity with enterprise software systems in life sciences or healthcare domains.
- Familiarity with big data platforms and experience in data pipeline development (Databricks, Spark).
- Knowledge of data security, privacy regulations, and scalable software solutions.

Soft Skills:
- Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
- Ability to foster a collaborative and innovative work environment.
- Strong problem-solving abilities and attention to detail.
- High degree of initiative and self-motivation.

Basic Qualifications:
- Bachelor's degree in Computer Science, AI, Software Engineering, or a related field.
- 8+ years of experience in full-stack software engineering.
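For the Python/FastAPI stack the functional skills above prefer, a minimal sketch of a containerizable service with a health probe; the /score route and its payload are illustrative, not Amgen's API.

```python
# Minimal sketch: a FastAPI microservice with a health probe and one endpoint,
# the shape a containerized AI service on EKS/ECS often starts from.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    document_id: str

@app.get("/healthz")
def healthz():
    # Liveness/readiness target for Kubernetes probes.
    return {"status": "ok"}

@app.post("/score")
def score(req: ScoreRequest):
    # Placeholder for a real model call (e.g., a SageMaker or Bedrock invocation).
    return {"document_id": req.document_id, "score": 0.5}

# Run locally with: uvicorn main:app --reload
```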
Posted 1 week ago
3.0 - 5.0 years
3 - 7 Lacs
Gurugram
Work from Office
About the Opportunity
Job Type: Application | 23 June 2025
Title: Expert Engineer
Department: GPS Technology
Location: Gurugram, India
Reports To: Project Manager
Level: Grade 4

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our [insert name of team/ business area] team and feel like you're part of something bigger.

About your team
The Technology function provides IT services to the Fidelity International business, globally. These include the development and support of business applications that underpin our revenue, operational, compliance, finance, legal, customer service and marketing functions. The broader technology organisation incorporates infrastructure services that the firm relies on to operate on a day-to-day basis, including data centre, networks, proximity services, security, voice, incident management and remediation.

About your role
An expert engineer is a seasoned technology expert who is highly skilled in programming, engineering and problem-solving. They can deliver value to the business faster and with superlative quality: their code and designs meet business, technical, non-functional and operational requirements most of the time without defects and incidents. So, if a relentless focus and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely a role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, come, we are looking for you. You will understand system requirements, then analyse, design, develop and test application systems following the defined standards. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.

About you
Essential Skills
- Excellent software design, programming, engineering, and problem-solving skills.
- Strong experience working on data ingestion, transformation and distribution using AWS or Snowflake (a connector sketch follows this listing).
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, Matillion / dbt.
- Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB and VPCs.
- Familiarity with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality).
- Experience with designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic.
- Designing data ingestion and orchestration pipelines using AWS and Control-M.
- Establishing strategies for data extraction, ingestion, transformation, automation, and consumption.
- Experience in data lake concepts with structured, semi-structured and unstructured data.
- Experience in creating CI/CD processes for Snowflake.
- Experience in strategies for data testing, data quality, code quality, and code coverage.
- Ability, willingness & openness to experiment with, evaluate and adopt new technologies.
- Passion for technology, problem solving and teamwork.
- Go-getter; ability to navigate across roles, functions and business units to collaborate, and to drive agreements and changes from drawing board to live systems.
- Lifelong learner who can bring contemporary practices, technologies and ways of working to the organization.
- Effective collaborator, adept at using all effective modes of communication and collaboration tools.
- Experience delivering on data-related non-functional requirements, such as:
  - Hands-on experience dealing with large volumes of historical data across markets/geographies.
  - Manipulating, processing, and extracting value from large, disconnected datasets.
  - Building watertight data quality gates on investment management data.
  - Generic handling of standard business scenarios in case of missing data, holidays, out-of-tolerance errors, etc.

Experience and Qualification
- B.E./B.Tech. or M.C.A. in Computer Science from a reputed university
- Total 7 to 10 years of relevant experience

Personal Characteristics
- Good interpersonal and communication skills; strong team player.
- Ability to work at a strategic and tactical level.
- Ability to convey strong messages in a polite but firm manner.
- Self-motivation is essential; should demonstrate commitment to high-quality design and development.
- Ability to develop and maintain working relationships with several stakeholders.
- Flexibility and an open attitude to change.
- Problem-solving skills with the ability to think laterally, and to think with a medium-term and long-term perspective.
- Ability to learn and quickly get familiar with a complex business and technology environment.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.
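For the Snowflake ingestion skills above, a minimal sketch using the snowflake-connector-python package to run a COPY INTO from a stage and count the loaded rows; the account, credentials, stage, and table names are placeholders.

```python
# Minimal sketch: loading staged data and querying it with the Snowflake
# Python connector. All identifiers below are illustrative.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="ETL_USER",                # placeholder
    password="...",                 # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO pulls files already staged (e.g., via Snowpipe or an S3 stage).
    cur.execute("COPY INTO raw_trades FROM @s3_landing/trades/ FILE_FORMAT = (TYPE = PARQUET)")
    cur.execute("SELECT COUNT(*) FROM raw_trades")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
```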
Posted 1 week ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
We are looking for an experienced Senior BT Reliability Engineer to join our Business Technology team to maintain and continually improve our cloud-based services. The Site Reliability Engineering team in Bangalore is brand new, and builds foundational back-end infrastructure services and tooling for Okta's corporate teams. We enable teams to build infrastructure at scale and automate their software reliably and predictably. SREs are team players and innovators who build and operate technology using best practices and an agile mindset. We are looking for a smart, innovative, and passionate engineer for this role, someone who has a passion for designing and implementing complex cloud-based infrastructure. This is a new team, and the ideal candidate welcomes the challenge of building something new. They enjoy seeing their designs run at scale with automation, testing, and an excellent operational mindset. If you exemplify the ethos of "If you have to do something more than once, automate it," we want to hear from you!

Responsibilities
- Build and run development tools, pipelines, and infrastructure with a security-first mindset
- Actively participate in Agile ceremonies, write stories, and support team members through demos, knowledge sharing, and architecture sessions
- Promote and apply best practices for building secure, scalable, and reliable cloud infrastructure
- Develop and maintain technical documentation, network diagrams, runbooks, and procedures
- Design, build, run, and monitor Okta's IT infrastructure and cloud services
- Drive initiatives to evolve our current cloud platforms to increase efficiency and keep them in line with current security standards and best practices
- Recommend, develop, implement, and manage appropriate policy, standards, processes, and procedural updates
- Work with software engineers to ensure that development follows established processes and works as intended
- Create and maintain centralized technical processes, including container and image management
- Provide excellent customer service to our internal users and be an advocate for SRE services and DevOps practices

Qualifications
- 5+ years of experience as an SRE, DevOps, Systems Engineer, or equivalent
- Demonstrated ability to develop complex applications for cloud infrastructure at scale and deliver projects on schedule and within budget
- Proficient in managing AWS multi-account environments and AWS authentication, governance, and the org management suite, including, but not limited to, AWS Orgs, AWS IAM, AWS Identity Center, and StackSets (a cross-account sketch follows this listing)
- Proficient with automating systems and infrastructure via Terraform
- Proficient in developing applications running on AWS or other cloud infrastructure resources, including compute, storage, networking, and virtualization
- Proficient with Git and building deployment pipelines using commercial tools, especially GitHub Actions
- Proficient with developing tooling and automation using Python
- Proficient with AWS container-based workloads and concepts, especially EKS, ECS, and ECR
- Experience with monitoring tools, especially Splunk, CloudWatch, and Grafana
- Experience with reliability engineering concepts and security best practices on public cloud platforms
- Experience with image creation and management, especially for container and EC2-based workloads
- Knowledgeable in Linux system administration
- Familiar with configuration management tools, such as Ansible and SSM
- Familiar with GitHub Actions Runner Controller self-hosted runners
- Good communication skills, with the ability to influence others and communicate complex technical concepts to different audiences
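For the multi-account tooling the qualifications above call for, a minimal sketch of assuming a role in a member account via STS before calling an API there; the role ARN and account ID are placeholders for whatever a landing-zone baseline provisions.

```python
# Minimal sketch: cross-account access for Python SRE tooling via STS.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/OrgAuditRole",  # placeholder
    RoleSessionName="sre-tooling",
)["Credentials"]

# Build a client scoped to the member account using the temporary credentials.
member = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
regions = [r["RegionName"] for r in member.describe_regions()["Regions"]]
print(f"member account reachable; {len(regions)} regions visible")
```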
Posted 1 week ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
We are looking for an experienced Staff BT Site Reliability Engineer to join our Business Technology team to build, improve, and maintain our cloud platform services. The Site Reliability Engineering team builds foundational back-end infrastructure services and tooling for Okta's corporate teams. We enable teams to build infrastructure at scale and automate their software reliably and predictably. SREs are team players and innovators who build and operate technology using best practices and an agile mindset. We are looking for a smart, innovative, and passionate engineer for this role, someone who is interested in designing and implementing complex cloud-based infrastructure. This is a lean and agile team, and the ideal candidate welcomes the challenge of building in a dynamic and ever-changing environment. They enjoy seeing their designs run at scale with automation, testing, and an excellent operational mindset. If you exemplify the ethos of "If you have to do something more than once, automate it," we want to hear from you!

Responsibilities
- Build and run development tools, pipelines, and infrastructure with a security-first mindset
- Actively participate in Agile ceremonies, write stories, and support team members through demos, knowledge sharing, and architecture sessions
- Promote and apply best practices for building secure, scalable, and reliable cloud infrastructure
- Develop and maintain technical documentation, network diagrams, runbooks, and procedures
- Design, build, run, and monitor Okta's IT infrastructure and cloud services
- Drive initiatives to evolve our current cloud platforms to increase efficiency and keep them in line with current security standards and best practices
- Recommend, develop, implement, and manage appropriate policy, standards, processes, and procedural updates
- Work with software engineers to ensure that development follows established processes and works as intended
- Create and maintain centralized technical processes, including container and image management
- Provide excellent customer service to our internal users and be an advocate for SRE services and DevOps practices

Qualifications
- 10+ years of experience as an SRE, DevOps, Systems Engineer, or equivalent
- Demonstrated ability to develop complex applications for cloud infrastructure at scale and deliver projects on schedule and within budget
- Proficient in managing AWS multi-account environments and AWS authentication, governance, and the org management suite, including, but not limited to, AWS Orgs, AWS IAM, AWS Identity Center, and StackSets
- Proficient with automating systems and infrastructure via Terraform
- Proficient in developing applications running on AWS or other cloud infrastructure resources, including compute, storage, networking, and virtualization
- Proficient with Git and building deployment pipelines using commercial tools, especially GitHub Actions
- Proficient with developing tooling and automation using Python
- Proficient with AWS container-based workloads and concepts, especially EKS, ECS, and ECR
- Experience with monitoring tools, especially Splunk, CloudWatch, and Grafana
- Experience with reliability engineering concepts and security best practices on public cloud platforms
- Experience with image creation and management, especially for container and EC2-based workloads
- Experience with GitHub Actions Runner Controller self-hosted runners
- Knowledgeable in Linux system administration
- Knowledgeable of configuration management tools, such as Ansible and SSM
- Good communication skills, with the ability to influence others and communicate complex technical concepts to different audiences
Posted 1 week ago
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle, WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris, France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore, Karnataka
Accenture in India
4290 Jobs | Dublin 2