Home
Jobs
Companies
Resume

366 EKS Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 15.0 years

12 - 22 Lacs

Pune

Hybrid

Naukri logo

So, what’s the role all about? The Senior Specialist Technical Support Engineer role is to deliver technical support to end users about how to use and administer the NICE Service and Sales Performance Management, Contact Analytics and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization. How will you make an impact? Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS. Interfacing with various R&D groups, Customer Support teams, Business Partners and Customers Globally to address CSS Recording and Compliance application related product issues and resolve high-level issues. Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar. Collaborate with development and operations teams to identify root causes and implement long-term solutions. Provide technical guidance and mentorship to junior support engineers. Act as an escalation point for critical customer issues, ensuring timely resolution and communication. Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency. Participate in on-call rotations and incident response efforts. Have you got what it takes? 10+ years of experience in technical support, site reliability engineering, or performance engineering roles. Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation. Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments. Strong knowledge of networking, load balancing, and distributed systems. Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Excellent communication and customer-facing skills. Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer). Experience with observability platforms (e.g., Prometheus, Grafana, Splunk). Familiarity with CI/CD pipelines and DevOps practices. Experience working in ITIL or similar support frameworks. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr! Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7554 Reporting into: Tech Manager Role Type: Individual Contributor
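
The troubleshooting duties above revolve around analyzing metrics, logs, and traces in AWS. As a rough, illustrative sketch of that kind of diagnostic work (not part of the posting), the following Python snippet pulls recent CPU utilization for a single EC2 instance via boto3 and CloudWatch; the region and instance ID are placeholders.

```python
# Illustrative only: pull average/max CPU utilization for one EC2 instance to start
# narrowing down a performance complaint. Region and instance ID are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

end = datetime.now(timezone.utc)
start = end - timedelta(hours=3)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=300,                      # 5-minute buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```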

Posted 15 hours ago

Apply

5.0 - 8.0 years

8 - 13 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Naukri logo

Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests and performance tests. Work in a team environment using agile practices. Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments. Work closely with cross-functional teams to ensure seamless integration and operation of services. Proficiency in JavaScript for full-stack development. Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools, including Kubernetes.
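
To make the "unit tests, integration tests and performance tests" expectation above concrete, here is a minimal pytest sketch; normalize_text is a hypothetical helper invented purely for illustration and is not referenced in the posting.

```python
# Minimal pytest sketch for the unit-testing requirement above.
# normalize_text is a hypothetical helper, not something from the posting.
import pytest


def normalize_text(text: str) -> str:
    """Lower-case and collapse whitespace before sending text to an NLP model."""
    return " ".join(text.lower().split())


@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Hello   World", "hello world"),
        ("  GenAI  models ", "genai models"),
    ],
)
def test_normalize_text(raw, expected):
    assert normalize_text(raw) == expected
```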

Posted 1 day ago

Apply

4.0 - 7.0 years

9 - 12 Lacs

Pune

Hybrid

Naukri logo

So, what’s the role all about? As a Senior Software professional at NiCE, you will specialize in designing, developing, and maintaining applications and systems using the Java programming language. You will play a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce. How will you make an impact? Working knowledge of unit testing Working knowledge of user stories or use cases Working knowledge of design patterns or equivalent experience. Working knowledge of object-oriented software design. Team Player Have you got what it takes? Bachelor’s degree in Computer Science, Business Information Systems or related field or equivalent work experience is required. 4+ years’ (SE) experience in software development Well-established technical problem-solving skills. Experience in Java, Spring Boot and microservices. Experience with Kafka, Kinesis, KDA, Apache Flink Experience in Kubernetes operators, Grafana, Prometheus Experience with AWS technology including EKS, EMR, S3, Kinesis, Lambda, Firehose, IAM, CloudWatch, etc. You will have an advantage if you also have: Experience with Snowflake or any DWH solution. Excellent communication skills, problem-solving skills, decision-making skills Experience in Databases Experience in CI/CD, Git, GitHub Actions and Jenkins-based pipeline deployments. Strong experience in SQL What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 6965 Reporting into: Tech Manager Role Type: Individual Contributor

Posted 3 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Pune

Hybrid

Naukri logo

So, what’s the role all about? We are seeking a skilled and experienced Developer with expertise in .net programming along with knowledge on LLM and AI to join our dynamic team. As a Contact Center Developer, you will be responsible for developing and maintaining contact center applications, with a specific focus on AI functionality. Your role will involve designing and implementing robust and scalable AI solutions, ensuring efficient agent experience. You will collaborate closely with cross-functional teams, including software developers, system architects, and managers, to deliver cutting-edge solutions that enhance our contact center experience. How will you make an impact? Develop, enhance, and maintain contact center applications with an emphasis on copilot functionality. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Perform system analysis, troubleshooting, and debugging to identify and resolve issues. Conduct regular performance monitoring and optimization of code to ensure optimal customer experiences. Maintain documentation, including technical specifications, system designs, and user manuals. Stay up to date with industry trends and emerging technologies in contact center, AI, LLM and .Net development and apply them to enhance our systems. Participate in code reviews and provide constructive feedback to ensure high-quality code standards. Deliver high quality, sustainable, maintainable code. Participate in reviewing design and code (pull requests) for other team members – again with a secure code focus. Work as a member of an agile team responsible for product development and delivery Adhere to agile development principles while following and improving all aspects of the scrum process. Follow established department procedures, policies, and processes. Adheres to the company Code of Ethics and CXone policies and procedures. Excellent English and experience in working in international teams are required. Have you got what it takes? BS or MS in Computer Science or related degree 5-8 years’ experience in software development. Strong knowledge of working and developing Microservices. Design, develop, and maintain scalable .NET applications specifically tailored for contact center copilot solutions using LLM technologies. Good understanding of .Net and design patterns and experience in implementing the same Experience in developing with REST API Integrate various components including LLM tools, APIs, and third-party services within the .NET framework to enhance functionality and performance. Implement efficient database structures and queries (SQL/NoSQL) to support high-volume data processing and real-time decision-making capabilities. Utilize Redis for caching frequently accessed data and optimizing query performance, ensuring scalable and responsive application behavior. Identify and resolve performance bottlenecks through code refactoring, query optimization, and system architecture improvements. Conduct thorough unit testing and debugging of applications to ensure reliability, scalability, and compliance with specified requirements. Utilize Git or similar version control systems to manage source code and coordinate with team members on collaborative projects. Experience with Docker/Kubernetes is a must. Experience with cloud service provider - Amazon Web Services (AWS) is must. 
Experience with AWS Cloud on any technology (preferred are Kafka, EKS, Kubernetes) Experience with Continuous Integration workflow and tooling. Stay updated with industry trends, emerging technologies, and best practices in .NET development and LLM applications to drive innovation and efficiency within the team. You will have an advantage if you also have: Strong communication skills Experience with cloud service providers like Amazon Web Services (AWS), Google Cloud Engine, Azure or an equivalent cloud provider. Experience with ReactJS. What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7443 Reporting into: Sandip Bhattcharjee Role Type: Individual Contributor
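
The posting above mentions using Redis to cache frequently accessed data and optimize query performance. The role itself is .NET-centric, but the cache-aside pattern is language-agnostic; a minimal sketch in Python with the redis-py client (host, key names, TTL, and the fetch_customer_from_db helper are all placeholders) looks like this:

```python
# Cache-aside sketch: check Redis first, fall back to the database on a miss.
# Host, key names, and TTL are placeholders; fetch_customer_from_db is hypothetical.
import json

import redis

cache = redis.Redis(host="redis.example.internal", port=6379, db=0)  # placeholder host


def fetch_customer_from_db(customer_id: str) -> dict:
    # Stand-in for a real SQL/NoSQL lookup.
    return {"id": customer_id, "name": "Example Customer"}


def get_customer(customer_id: str) -> dict:
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    record = fetch_customer_from_db(customer_id)
    cache.setex(key, 300, json.dumps(record))  # cache for 5 minutes
    return record
```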

Posted 3 days ago

Apply

2.0 - 4.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Naukri logo

Description Enphase Energy is a global energy technology company and leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, Enphase transformed the solar industry with our revolutionary microinverter technology, which turns sunlight into a safe, reliable, resilient, and scalable source of energy to power our lives. Today, the Enphase Energy System helps people make, use, save, and sell their own power. Enphase is also one of the fastest growing and most innovative clean energy companies in the world, with approximately 68 million products installed across more than 145 countries. We are building teams that are designing, developing, and manufacturing next-generation energy technologies and our work environment is fast-paced, fun and full of exciting new projects. If you are passionate about advancing a more sustainable future, this is the perfect time to join Enphase! About the role At Enphase, we think big. We're on a mission to bring solar energy to the next level, one where it's ready to meet the energy demands of an entire globe. As we work towards our vision for a solar-powered planet, we need visionary and talented people to join our team as Senior Back-End engineers. The Back-End engineer will develop, maintain, architect and expand cloud microservices for the EV (Electric Vehicle) Business team. The codebase uses Java, Spring Boot, Mongo, REST APIs, MySQL. Applications are dockerized and hosted in AWS using a plethora of AWS services. What you will be doing Programming in Java + Spring Boot REST API with JSON, XML etc. for data transfer Multiple database proficiency including SQL and NoSQL (Cassandra, MongoDB) Ability to develop both internal facing and external facing APIs using JWT and OAuth2.0 Familiar with HA/DR, scalability, performance, code optimizations Experience working with high-performance, high-throughput systems Ability to define, track and deliver items to one's own schedule. Good organizational skills and the ability to work on more than one project at a time. Exceptional attention to detail and good communication skills Who you are and what you bring B.E/B.Tech in Computer Science from a top-tier college and >70% marks More than 4 years of overall Back-End development experience Experience with SQL + NoSQL (preferably MongoDB) Experience with Amazon Web Services, JIRA, Confluence, GIT, Bitbucket etc. Ability to work independently and as part of a project team. Strong organizational skills, proactive, and accountable Excellent critical thinking and analytical problem-solving skills Ability to establish priorities and proceed with objectives without supervision. Ability to communicate effectively and accurately, with clear, concise written project status updates throughout the project lifecycle Highly skilled at facilitating and documenting requirements Excellent facilitation, collaboration, and presentation skills Comfort with ambiguity, frequent change, or unpredictability Good practice of writing clean and scalable code Exposure or knowledge in Renewable Tech companies Good understanding of cloud technologies, such as Docker, Kubernetes, EKS, Kafka, AWS Kinesis etc. Knowledge of NoSQL Database systems like MongoDB or CouchDB, including Graph Databases Ability to work in a fast-paced environment.

Posted 4 days ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Thane

Work from Office

Naukri logo

Role & responsibilities : Deploy, configure, and manage infrastructure across cloud platforms like AWS, Azure, and GCP. Automate provisioning and configuration using tools such as Terraform. Design and maintain CI/CD pipelines using Jenkins, GitLab CI, or CircleCI to streamline deployments. Build, manage, and deploy containerized applications using Docker and Kubernetes. Set up and manage monitoring systems like Prometheus and Grafana to ensure performance and reliability. Write scripts in Bash or Python to automate routine tasks and improve system efficiency. Collaborate with development and operations teams to support deployments and troubleshoot issues. Investigate and resolve technical incidents, performing root cause analysis and implementing fixes. Apply security best practices across infrastructure and deployment workflows. Maintain documentation for systems, configurations, and processes to support team collaboration. Continuously explore and adopt new tools and practices to improve DevOps workflows.
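
The responsibilities above include Python scripting for routine automation and Prometheus-based monitoring. As an illustrative sketch only (the Prometheus URL is a placeholder), a routine check for scrape targets that are down via the Prometheus HTTP API could look like this:

```python
# Illustrative automation snippet: query Prometheus for targets that are down.
# The Prometheus URL is a placeholder; adjust for your environment.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "up == 0"},   # instant vector of scrape targets reporting down
    timeout=10,
)
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    labels = result["metric"]
    print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```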

Posted 6 days ago

Apply

8.0 - 12.0 years

30 - 40 Lacs

Pune

Work from Office

Naukri logo

Assessment & Analysis Review CAST software intelligence reports to identify technical debt, architectural flaws, and cloud readiness. Conduct manual assessments of applications to validate findings and prioritize migration efforts. Identify refactoring needs (e.g., monolithic to microservices, serverless adoption). Evaluate legacy systems (e.g., .NET Framework, Java EE) for compatibility with AWS services. Solution Design Develop migration strategies (rehost, replatform, refactor, retire) for each application. Architect AWS-native solutions using services like EC2, Lambda, RDS, S3, and EKS. Design modernization plans for legacy systems (e.g., .NET Framework to .NET Core, Java EE to Spring Boot). Ensure compliance with the AWS Well-Architected Framework (security, reliability, performance, cost optimization). Collaboration & Leadership Work with cross-functional teams (developers, DevOps, security) to validate designs. Partner with clients to align technical solutions with business objectives. Mentor junior architects and engineers on AWS best practices. Roles and Responsibilities Job Title: Senior Solution Architect - Cloud Migration & Modernization (AWS) Location: [Insert Location] Department: Digital Services Reports To: Cloud SL

Posted 6 days ago

Apply

6.0 - 8.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

We're looking for a skilled and motivated Technical Specialist with expertise in cloud technologies, security best practices, and DevOps methodologies to help shape and lead impactful initiatives across our global platforms. You will play a pivotal part in designing and implementing cutting-edge solutions, working closely with cross-functional teams worldwide. You'll be at the forefront of driving automation, enhancing system reliability, and delivering meaningful results in a fast-paced, agile environment. You have: BE / Master's Degree in Computer Science or related technical discipline, or equivalent practical experience with 6-8 years of experience in software design, development, and testing Working experience with public or private cloud environments, including any of the following platforms: Amazon AWS EKS, Red Hat OpenShift, Google GCP GKE, Microsoft Azure AKS, VMware Tanzu, or open-source Kubernetes Strong Python development skills, with experience in DevOps practices, working in a Jenkins-based environment, and familiarity with test frameworks like Radish and Cucumber Experience with container technologies (Docker or Podman) and Helm charts Expertise in container management environments (e.g., Kubernetes, service mesh, IAM, FPM) It would be nice if you also had: Experience in functional and system testing, software validation/reviews, and providing technical support during platform deployment and product integration Knowledge in configuring and managing security vulnerability scans, including container vulnerability scanning (e.g., Anchore), port scanning (e.g., Tenable), and malware scanning (e.g., Symantec Endpoint Protection) Experience in researching solutions to security vulnerabilities and applying hands-on mitigation strategies Lead & perform development activities of medium/high complexity features. Architect, design, develop, and test scalable software solutions Own and lead feature development and contribute to process improvements Collaborate with peers to resolve technical issues and review design specs Build and automate tests using frameworks like Radish, Cucumber, etc.

Posted 6 days ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

Title: AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on call) Req ID: 325686 We are currently seeking an AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on call) to join our team in Bangalore, Karnataka (IN-KA), India (IN). Minimum Experience on Key Skills - 5 to 10 years Skills: AWS, SQL, Snowflake, ControlM, ServiceNow - Operational Engineer (Weekend on call) We are looking for an operational engineer who is ready to work weekend on-call as the primary criterion. Skills we look for: AWS cloud (SQS, SNS, DynamoDB, EKS), SQL (PostgreSQL, Cassandra), Snowflake, ControlM/Autosys/Airflow, ServiceNow, Datadog, Splunk, Grafana, Python/shell scripting.
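
For the weekend on-call aspect above, a routine check typically combines the listed AWS services with simple scripting. A hedged example (queue URL, region, and alerting threshold are placeholders) that reports SQS backlog with boto3:

```python
# Hedged example of a routine on-call check: report SQS queue backlog with boto3.
# Queue URL, region, and threshold are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # assumed region

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

attrs = sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=[
        "ApproximateNumberOfMessages",
        "ApproximateNumberOfMessagesNotVisible",
    ],
)["Attributes"]

backlog = int(attrs["ApproximateNumberOfMessages"])
in_flight = int(attrs["ApproximateNumberOfMessagesNotVisible"])
print(f"backlog={backlog} in_flight={in_flight}")
if backlog > 1000:  # arbitrary threshold for illustration
    print("Backlog above threshold - investigate consumers before paging out.")
```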

Posted 6 days ago

Apply

1.0 - 6.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Naukri logo

Req ID: 328302 We are currently seeking an AWS Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Title: Digital Engineering Sr Associate NTT DATA Services strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Basic Qualifications 1 year's experience in AWS Infra Preferred Experience Excellent communication and collaboration skills. AWS certifications are preferred. Expertise in AWS cloud EC2: creating, managing, patching, troubleshooting. Good knowledge of Identity and Access Management Monitoring Tools - CloudWatch (New Relic/other monitoring), logging AWS Storage - EBS, EFS, S3, Glacier; adding the disk, extending the disk. AWS backup and restoration Strong understanding of networking concepts to create VPC, Subnets, ACL, Security Groups, and security best practices in cloud environments. Knowledge of PaaS to IaaS migration strategies Scripting experience (must be fluent in a scripting language such as Python) Detail-oriented self-starter capable of working independently. Knowledge of IaC Terraform and best practices. Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm Experience with one or more of the following Configuration Management Tools: Ansible, Chef, Salt, Puppet; infrastructure, networking, AWS databases. Familiarity with containerization and orchestration tools, such as Docker and Kubernetes. Bachelor's degree in computer science or a related field Any of the AWS Associate Certifications GCP Knowledge Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS, Cloud Billing, Cloud Console, Stackdriver, Cloud SQL, Cloud Spanner SQL, Cloud Bigtable, Cloud Run Container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP Ideal Mindset Lifelong Learner. You are always seeking to improve your technical and nontechnical skills. Team Player. You are someone who wants to see everyone on the team succeed and is willing to go the extra mile to help a teammate in need. Listener. You listen to the needs of the customer and make those the priority throughout development.

Posted 6 days ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Noida, Bengaluru

Work from Office

Naukri logo

Req ID: 304647 We are currently seeking an AWS Lead Engineer to join our team in Remote, Karnataka (IN-KA), India (IN). Basic Qualifications 3 years' experience in AWS Infra Preferred Experience Excellent communication and collaboration skills. AWS certifications are preferred. Expertise in AWS cloud EC2: creating, managing, patching, troubleshooting. Good knowledge of Identity and Access Management Monitoring Tools - CloudWatch (New Relic/other monitoring), logging AWS Storage - EBS, EFS, S3, Glacier; adding the disk, extending the disk. AWS backup and restoration Strong understanding of networking concepts to create VPC, Subnets, ACL, Security Groups, and security best practices in cloud environments. Strong knowledge of PaaS to IaaS migration strategies Scripting experience (must be fluent in a scripting language such as Python) Detail-oriented self-starter capable of working independently. Knowledge of IaC Terraform and best practices. Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm Experience with one or more of the following Configuration Management Tools: Ansible, Chef, Salt, Puppet; infrastructure, networking, AWS databases. Familiarity with containerization and orchestration tools, such as Docker and Kubernetes. Bachelor's degree in computer science or a related field Any of the AWS Associate Certifications Ideal Mindset Lifelong Learner. You are always seeking to improve your technical and nontechnical skills. Team Player. You are someone who wants to see everyone on the team succeed and is willing to go the extra mile to help a teammate in need. Listener. You listen to the needs of the customer and make those the priority throughout development.

Posted 6 days ago

Apply

1.0 - 6.0 years

1 - 5 Lacs

Noida, Chennai, Bengaluru

Work from Office

Naukri logo

Req ID: 328301 We are currently seeking an AWS Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Job Title: Digital Engineering Sr Associate NTT DATA Services strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Basic Qualifications 1 year's experience in AWS Infra Preferred Experience Excellent communication and collaboration skills. AWS certifications are preferred. Expertise in AWS cloud EC2: creating, managing, patching, troubleshooting. Good knowledge of Identity and Access Management Monitoring Tools - CloudWatch (New Relic/other monitoring), logging AWS Storage - EBS, EFS, S3, Glacier; adding the disk, extending the disk. AWS backup and restoration Strong understanding of networking concepts to create VPC, Subnets, ACL, Security Groups, and security best practices in cloud environments. Knowledge of PaaS to IaaS migration strategies Scripting experience (must be fluent in a scripting language such as Python) Detail-oriented self-starter capable of working independently. Knowledge of IaC Terraform and best practices. Experience with container orchestration utilizing ECS, EKS, Kubernetes, or Docker Swarm Experience with one or more of the following Configuration Management Tools: Ansible, Chef, Salt, Puppet; infrastructure, networking, AWS databases. Familiarity with containerization and orchestration tools, such as Docker and Kubernetes. Bachelor's degree in computer science or a related field Any of the AWS Associate Certifications GCP Knowledge Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS, Cloud Billing, Cloud Console, Stackdriver, Cloud SQL, Cloud Spanner SQL, Cloud Bigtable, Cloud Run Container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP Ideal Mindset Lifelong Learner. You are always seeking to improve your technical and nontechnical skills. Team Player. You are someone who wants to see everyone on the team succeed and is willing to go the extra mile to help a teammate in need. Listener. You listen to the needs of the customer and make those the priority throughout development.

Posted 6 days ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

Pune

Work from Office

Naukri logo

Req ID: 301930 We are currently seeking a Digital Solution Architect Lead Advisor to join our team in Pune, Maharashtra (IN-MH), India (IN). Position Overview We are seeking a highly skilled and experienced Data Solution Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs. Key Responsibilities - Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, and EKS - Design and implement data streaming pipelines using Kafka/Confluent Kafka - Develop data processing applications using Python - Ensure data security and compliance throughout the architecture - Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions - Optimize data flows for performance, cost-efficiency, and scalability - Implement data governance and quality control measures - Provide technical leadership and mentorship to development teams - Stay current with emerging technologies and industry trends Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 7+ years of experience in data architecture and engineering - Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS - Proficiency in Kafka/Confluent Kafka and Python - Experience with Snyk for security scanning and vulnerability management - Solid understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders

Posted 6 days ago

Apply

7.0 - 12.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

We are currently seeking a Lead Data Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Data Architect to join our dynamic team. The ideal candidate will have a strong background in designing and implementing data solutions using AWS infrastructure and a variety of core and supplementary technologies. This role requires a deep understanding of data architecture, cloud services, and the ability to drive innovative solutions to meet business needs. Key Responsibilities - Architect end-to-end data solutions using AWS services, including Lambda, SNS, S3, EKS, Kafka and Confluent, all within a larger and overarching programme ecosystem - Architect data processing applications using Python, Kafka, Confluent Cloud and AWS - Ensure data security and compliance throughout the architecture - Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions - Optimize data flows for performance, cost-efficiency, and scalability - Implement data governance and quality control measures - Ensure delivery of CI, CD and IaC for NTT tooling, and as templates for downstream teams - Provide technical leadership and mentorship to development teams and lead engineers - Stay current with emerging technologies and industry trends Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 7+ years of experience in data architecture and engineering - Strong expertise in AWS cloud services, particularly Lambda, SNS, S3, and EKS - Strong experience with Confluent - Strong experience in Kafka - Solid understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts to both technical and non-technical stakeholders - Knowledge of Apache Airflow for data orchestration Preferred Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Deep experience with CI/CD pipelines - Strong understanding of the JVM language family - Understanding of GDPR and the correct handling of PII - Expertise with technical interface design - Use of Docker Responsibilities - Design and implement scalable data architectures using AWS services, Confluent and Kafka - Develop data ingestion, processing, and storage solutions using Python and AWS Lambda, Confluent and Kafka - Ensure data security and implement best practices using tools like Snyk - Optimize data pipelines for performance and cost-efficiency - Collaborate with data scientists and analysts to enable efficient data access and analysis - Implement data governance policies and procedures - Provide technical guidance and mentorship to junior team members - Evaluate and recommend new technologies to improve data architecture

Posted 6 days ago

Apply

5.0 - 10.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Naukri logo

Req ID: 306669 We are currently seeking a Lead Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Lead Data/Product Engineer to join our dynamic team. The ideal candidate will have a strong background in streaming services and AWS cloud technology, leading teams and directing engineering workloads. This is an opportunity to work on the core systems supporting multiple secondary teams, so a history in software engineering and interface design would be an advantage. Key Responsibilities Lead and direct a small team of engineers engaged in - Engineering reusable assets for the later build of data products - Building foundational integrations with Kafka, Confluent Cloud and AWS - Integrating with a large number of upstream and downstream technologies - Providing best in class documentation for downstream teams to develop, test and run data products built using our tools - Testing our tooling, and providing a framework for downstream teams to test their utilisation of our products - Helping to deliver CI, CD and IaC for both our own tooling, and as templates for downstream teams Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 5+ years of experience in data engineering - 3+ years of experience with real time (or near real time) streaming systems - 2+ years of experience leading a team of data engineers - A willingness to independently learn a high number of new technologies and to lead a team in learning new technologies - Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS and API Gateway - Strong experience with Python - Strong experience in Kafka - Excellent understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts both directly and through documentation - Strong use of version control and proven ability to govern a team in the best practice use of version control - Strong understanding of Agile and proven ability to govern a team in the best practice use of Agile methodologies Preferred Skills and Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Experience with CI pipelines - Ability to code in a JVM language - Understanding of GDPR and the correct handling of PII - Knowledge of technical interface design - Basic use of Docker
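
Since the role above combines strong Python with Kafka and Confluent Cloud integration work, here is a minimal, illustrative producer sketch using the confluent-kafka client. The broker address, topic name, and payload are placeholders, and a real Confluent Cloud setup would additionally need SASL/TLS settings.

```python
# Sketch only: publish a small JSON event to a Kafka topic with confluent-kafka.
# Broker address, topic name, and payload are placeholders.
import json

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker.example.internal:9092",  # placeholder
    # For Confluent Cloud you would also set security.protocol and SASL credentials.
})


def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")


event = {"order_id": "123", "status": "created"}  # illustrative payload
producer.produce(
    "orders.events",                       # placeholder topic
    key="123",
    value=json.dumps(event).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush(10)
```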

Posted 6 days ago

Apply

10.0 - 15.0 years

40 - 60 Lacs

Bengaluru

Hybrid

Naukri logo

How will you make a difference? We are seeking a collaborative and highly motivated Principal AI Architect to lead our AI team, drive innovation, and enhance customer experiences through impactful artificial intelligence solutions. As a member of the Wabtec IT Data & Analytics (DnA) Team, you will be responsible for: Providing strategic leadership and direction in the development, articulation and implementation, of a comprehensive AI/ML/ Data Transformation Roadmap for Wabtec aligned with the overall business objectives. Working with other AI champions in Wabtec evaluating AI tools/ technologies/ frameworks, champion adoption in different projects and demonstrate value for business and customers. Actively collaborating with various stakeholders to align AI initiatives in Cloud Computing environments (e.g., AWS, Azure, OCI) with business goals. Providing technical oversight on AI projects to drive performance output to meet KPI metrics in Productivity and Quality. Serving as contact and interface with external partners and industry leaders for collaborations in AI/LLM/ Generative AI. Architecting and deploying scalable AI solutions that integrate seamlessly with existing business and IT infrastructure. Design and architect AI as a service, to enable collaboration btw multiple teams in delivering AI solutions Optimizing state-of-the-art algorithms in distributed environments Create clear and concise communications/recommendations for senior leadership review related to AI strategic business plans and initiatives. Staying abreast of advancements in AI, machine learning, and data science to continuously innovate and improve solutions and bring the external best practices for adoption in Wabtec Implementing best practices for AI designing, testing, deployment, and maintenance Diving deep into complex business problems and immerse yourself in Wabtec data & outcomes. Mentoring a team of data scientists, fostering growth and performance. Developing AI governance frameworks with ethical AI practices and ensuring compliance with data protection regulations and ensuring responsible AI development. What do we want to know about you? You must have: The minimum qualifications for this role include: Ph.D., M.S., or Bachelor's degree in Statistics, Machine Learning, Operations Research, Computer Science, Economics, or a related quantitative field 5+ years of experience developing and supporting AI products in a production environment with 12+ years of proven relevant experience 8+ years of experience managing and leading data science teams initiatives at enterprise level Profound knowledge of modern AI and Generative AI technologies Extensive experience in designing, implementing, and maintaining AI systems End-to-end expertise in AI/ML project lifecycle, from conception to large-scale production deployment Proven track record as an Architect with cloud computing environments (e.g., AWS, Azure, OCI) and distributed computing platforms, including containerized deployments using technologies such as Amazon EKS (Elastic Kubernetes Service) Expertise with Hands-On experience into Python, AWS AI tech-stack (Bedrock Services, Foundation models, Textract, Kendra, Knowledge Bases, Guard rails, Agents etc.), ML Flow, Image Processing, NLP/Deep Learning, PyTorch /TensorFlow, LLMs integration with applications. 
Preferred qualifications for this role include: Proven track record in building and leading high-performance AI teams, with expertise in hiring, coaching, and developing engineering leaders, data scientists, and ML engineers Demonstrated ability to align team vision with strategic business goals, driving impactful outcomes across complex product suites for diverse, global customers Strong stakeholder management skills, adept at influencing and unifying cross-functional teams to achieve successful project outcomes Extensive hands-on experience with enterprise-level Python development, PyData stack, Big Data technologies, and machine learning model deployment at scale Proficiency in cutting-edge AI technologies, including generative AI, open-source frameworks, and third-party solutions (e.g., OpenAI) Mastery of data science infrastructure and tools, including code versioning (Git), containerization (Docker), and modern AI/ML tech stacks Preferred: AWS with AWS AI services. We would love it if you had: Fluent with experimental design and the ability to identify, compute and validate the appropriate metrics to measure success Demonstrated success working in a highly collaborative technical environment (e.g., code sharing, using revision control, contributing to team discussions/workshops, and collaborative documentation) Passion and aptitude for turning complex business problems into concrete hypotheses that can be answered through rigorous data analysis and experimentation Deep expertise in analytical storytelling and stellar communications skills Demonstrated success mentoring junior teammates & helping develop peers What will your typical day look like? Stakeholder Engagement: Collaborate with our Internal stakeholders to understand their needs, update on a specific project progress, and align our AI initiatives with business goals. Use Generative AI and machine learning techniques and build LLM Models & fine-tuning, Image processing, NLP, model integration with new/existing applications, and improve model performance/accuracy along with cost effective solutions. Support AI Team: Guide and mentor the AI team, resolving technical issues and provide suggestions. Reporting & Strategy: Generate and present reports to senior leadership, develop strategic insights, and stay updated on industry trends. Building AI roadmap for Wabtec and discussion with senior leadership Training, Development & Compliance: Organize training sessions, manage resources efficiently, ensure data accuracy, security, and compliance with best practices.

Posted 6 days ago

Apply

6.0 - 10.0 years

6 - 10 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

About The Role We are seeking a skilled Infrastructure Support Engineer to join our team. The ideal candidate will have a strong background in managing and supporting on-premises infrastructure, specifically ESXi and VxRail systems, as well as experience with AWS cloud environments. This role requires a proactive approach to system monitoring, troubleshooting, and maintenance, ensuring optimal performance and reliability of our infrastructure. Key Responsibilities: On-Premises Support Manage and support VMware ESXi environments using vSphere, including installation, configuration, and troubleshooting. Install VMware ESXi hypervisors on physical servers. Configure networking, storage, and resource pools for optimal performance. Set up and manage vCenter Server for centralized management of ESXi hosts. Diagnose and resolve issues related to ESXi host performance, connectivity, and VM operation. Use VMware logs and diagnostic tools to identify problems and implement corrective actions. Perform regular health checks and maintenance to ensure optimal performance. Set up monitoring tools to track performance metrics of VMs and hosts, including CPU, memory, disk I/O, and network usage. Identify bottlenecks and inefficiencies in the infrastructure and take corrective action. Generate reports on system performance for management review. Design and implement backup strategies using VMware vSphere Data Protection or third-party solutions (e.g., Veeam, Commvault). Schedule regular backups of VMs and critical data to ensure data integrity and recoverability. Test backup and restoration processes periodically to verify effectiveness. Will be involved in L1 support on rotation Primary Skills AWS Support: Assist in the design, deployment, and management of AWS infrastructure. Monitor AWS resources, ensuring performance, cost-efficiency, and compliance with best practices. Troubleshoot issues related to AWS services (EC2, S3, RDS, etc.) and provide solutions. Collaborate with development teams to support application deployments in AWS environments. Qualifications: - Bachelor's degree in Computer Science or related field. - 5+ years of experience in infrastructure support, specializing in VMware (ESXi), vSphere, and VxRail. - Proven expertise in Linux administration. - Proficient in memory, disk, and CPU monitoring and management. - In-depth understanding of SAN, NFS, NAS. - Thorough knowledge of AWS services (Security/IAM) and architecture. - Skilled in scripting and automation tools (PowerShell, Python, AWS CLI). - Hands-on experience with containerization concepts. - Kubernetes, AWS EKS experience required. - Familiarity with networking concepts, security protocols, and best practices. - Windows administration preferred - Redhat VM / Nutanix Virtualization preferred - Strong problem-solving abilities and ability to work under pressure. - Excellent communication skills and collaborative mindset.

Posted 6 days ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Pune

Work from Office

Naukri logo

About The Role Role Purpose The purpose of this role is to provide significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup etc.) as well as managing its day-to-day operations. Do Provide adequate support in architecture planning, migration & installation for new projects in own tower (platform/database/middleware/backup) Lead the structural/architectural design of a platform/middleware/database/backup etc. according to various system requirements to ensure a highly scalable and extensible solution Conduct technology capacity planning by reviewing the current and future requirements Utilize and leverage the new features of all underlying technologies to ensure smooth functioning of the installed databases and applications/platforms, as applicable Strategize & implement disaster recovery plans and create and implement backup and recovery plans Manage the day-to-day operations of the tower Manage day-to-day operations by troubleshooting any issues, conducting root cause analysis (RCA) and developing fixes to avoid similar issues. Plan for and manage upgrades, migration, maintenance, backup, installation and configuration functions for own tower Review the technical performance of own tower and deploy ways to improve efficiency, fine tune performance and reduce performance challenges Develop shift roster for the team to ensure no disruption in the tower Create and update SOPs, Data Responsibility Matrices, operations manuals, daily test plans, data architecture guidance etc. Provide weekly status reports to the client leadership team and internal stakeholders on database activities w.r.t. progress, updates, status, and next steps Leverage technology to develop Service Improvement Plan (SIP) through automation and other initiatives for higher efficiency and effectiveness. Team Management Resourcing Forecast talent requirements as per the current and future business needs Hire adequate and right resources for the team Train direct reportees to make right recruitment and selection decisions Talent Management Ensure 100% compliance with Wipro’s standards of adequate onboarding and training for team members to enhance capability & effectiveness Build an internal talent pool of HiPos and ensure their career progression within the organization Promote diversity in leadership positions Performance Management Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. Ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs to their and their levels below Employee Satisfaction and Engagement Lead and drive engagement initiatives for the team Track team satisfaction scores and identify initiatives to build engagement within the team Proactively challenge the team with larger and enriching projects/initiatives for the organization or team Exercise employee recognition and appreciation. Deliver: 1. Operations of the tower - measured by SLA adherence, knowledge management, CSAT/customer experience, and identification of risk issues and mitigation plans. 2. New projects - measured by timely delivery, avoiding unauthorised changes, and no formal escalations. Mandatory Skills: AWS EKS Admin. Experience: 5-8 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions.
To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
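
The mandatory skill above is AWS EKS administration. As an illustrative day-to-day check only (not part of the posting), the following Python snippet uses the official Kubernetes client to list pods that are not healthy, assuming a kubeconfig already pointing at the EKS cluster:

```python
# Rough EKS/Kubernetes admin sketch: list pods that are not in a Running or
# Succeeded phase, cluster-wide. Assumes a working kubeconfig for the cluster.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```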

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Bharuch

Work from Office

Naukri logo

About The Role: Role Purpose Required Skills: 5+ years of experience in system administration, application development, infrastructure development or related areas 5+ years of experience with programming in languages like Javascript, Python, PHP, Go, Java or Ruby 3+ years of experience in reading, understanding and writing code in the same 3+ years mastery of infrastructure automation technologies (like Terraform, Code Deploy, Puppet, Ansible, Chef) 3+ years expertise in container/container-fleet-orchestration technologies (like Kubernetes, Openshift, AKS, EKS, Docker, Vagrant, etcd, zookeeper) 5+ years cloud and container native Linux administration/build/management skills Key Responsibilities: Hands-on design, analysis, development and troubleshooting of highly-distributed large-scale production systems and event-driven, cloud-based services Primarily Linux Administration, managing a fleet of Linux and Windows VMs as part of the application solutions Involved in Pull Requests for site reliability goals Advocate IaC (Infrastructure as Code) and CaC (Configuration as Code) practices within Honeywell HCE Ownership of reliability, uptime, system security, cost, operations, capacity and performance analysis. Monitor and report on service level objectives for a given application's services. Work with the business, technology teams and product owners to establish key service level indicators. Ensuring the repeatability, traceability, and transparency of our infrastructure automation Support on-call rotations for operational duties that have not been addressed with automation Support healthy software development practices, including complying with the chosen software development methodology (Agile, or alternatives), building standards for code reviews, work packaging, etc. Create and maintain monitoring technologies and processes that improve the visibility of our applications' performance and business metrics and keep operational workload in check. Partnering with security engineers and developing plans and automation to aggressively and safely respond to new risks and vulnerabilities. Develop, communicate, collaborate, and monitor standard processes to promote the long-term health and sustainability of operational development tasks.
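
One responsibility above is monitoring and reporting on service level objectives. A self-contained, illustrative Python calculation of error-budget consumption for an assumed 99.9% availability SLO follows; the request counts are made-up numbers, not real data.

```python
# Hedged illustration of SLO reporting: compute error-budget consumption for an
# assumed 99.9% availability objective over a rolling window.
SLO_TARGET = 0.999             # 99.9% availability objective (assumed)

total_requests = 12_500_000    # illustrative numbers, not real data
failed_requests = 9_800

availability = 1 - failed_requests / total_requests
error_budget = 1 - SLO_TARGET              # allowed failure ratio
budget_used = (1 - availability) / error_budget

print(f"availability over window: {availability:.5%}")
print(f"error budget consumed:    {budget_used:.1%}")
print(f"error budget remaining:   {max(0.0, 1 - budget_used):.1%}")
```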

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

Naukri logo

About The Role: AWS Cloud Engineer Required Skills and Qualifications: 4-7 years of hands-on experience with AWS services, including EC2, S3, Lambda, ECS, EKS, RDS/DynamoDB, and API Gateway. Strong working knowledge of Python and JavaScript. Strong experience with Terraform for infrastructure as code. Expertise in defining and managing IAM roles, policies, and configurations. Experience with networking, security, and monitoring within AWS environments. Experience with containerization technologies such as Docker and orchestration tools like Kubernetes (EKS). Strong analytical, troubleshooting, and problem-solving skills. Experience with AI/ML technologies and services like Textract is preferred. AWS Certifications (AWS Developer, Machine Learning - Specialty) are a plus. Deliver: 1. Process - measured by no. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, and customer feedback. 2. Self-Management - measured by productivity, efficiency, absenteeism, training hours, and no. of technical trainings completed.
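
The role above calls for defining and managing IAM roles and policies in code. A hedged boto3 sketch of creating a narrowly scoped S3 policy follows; the policy name and bucket ARN are placeholders, and any real policy should be reviewed against least-privilege requirements.

```python
# Illustration of defining an IAM policy in code with boto3.
# Policy name and bucket ARN are placeholders.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",  # placeholder bucket
        }
    ],
}

response = iam.create_policy(
    PolicyName="example-app-s3-access",        # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```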

Posted 1 week ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Foundit logo

Ready to build the future with AI? At Genpact, we don't just keep up with technology - we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Principal Consultant, Automation Test Lead! Responsibilities: Understand the need of the requirement beyond its face value and design a proper machine-executable automation solution using Python scripts. You will receive business rules or automation test scenarios from the business or QA team to automate using Python and SQL; you will not be responsible for writing test cases. Implement reusable solutions following best practices, and deliver the automation results on time. Maintain, troubleshoot, and optimise existing solutions. Collaborate with various disciplinary teams to align automation solutions with the broader engineering community. Documentation. Lead, coordinate and guide the ETL manual and automation testers. You may get a chance to learn new technologies on the cloud as well. Tech Stack (as of now): 1. Redshift 2. Aurora (PostgreSQL) 3. S3 object storage 4. EKS / ECR 5. SQS/SNS 6. Roles/Policies 7. Argo 8. Robot Framework 9. Nested JSON Qualifications we seek in you! Minimum Qualifications: 1. Python scripting. Candidate should be strong on Python programming design / Pandas / processes / HTTP request-like protocols 2. SQL technologies (best in PostgreSQL): OLTP/OLAP, join/group/aggregation/window functions etc. 3. Windows / Linux operating systems basic command knowledge 4. Git usage: understand version control systems and concepts like git branch / pull request / commit / rebase / merge 5. SQL optimization knowledge is a plus 6. Good understanding of and experience in data structure related work. Preferred Qualifications (good to have, as the Python code is deployed using these frameworks): 1. Docker is a plus: understanding of image/container concepts. 2. Kubernetes is a plus: understanding of the concepts and theory of k8s, especially pods / env etc. 3. Argo Workflows / Airflow is a plus. 4. Robot Framework is a plus. 5. Kafka is a plus: understand the concepts of Kafka and event-driven methods. Why join Genpact? Lead AI-first transformation - Build and scale AI solutions that redefine industries. Make an impact - Drive change for global enterprises and solve business challenges that matter. Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace. Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build. Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
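
The automation-testing responsibilities above boil down to expressing business rules as Python and SQL checks. A small, illustrative sketch using pandas and SQLAlchemy to compare a source table against its warehouse copy follows; connection strings, table names, and the tolerance are placeholders.

```python
# Sketch of a simple automated business-rule check in the spirit of the role:
# compare row counts and a sample aggregate between a source and target table.
# Connection strings and table names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@source-host:5432/db")   # placeholder
target = create_engine("postgresql://user:pass@target-host:5432/db")   # placeholder

src = pd.read_sql("SELECT count(*) AS n, sum(amount) AS total FROM orders", source)
tgt = pd.read_sql("SELECT count(*) AS n, sum(amount) AS total FROM orders_dw", target)

checks = {
    "row_count_match": int(src.loc[0, "n"]) == int(tgt.loc[0, "n"]),
    # Small tolerance to avoid false failures from floating-point rounding.
    "amount_total_match": abs(float(src.loc[0, "total"]) - float(tgt.loc[0, "total"])) < 0.01,
}

for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
```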

Posted 1 week ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Mumbai, Pune

Work from Office

Naukri logo

Design containerized & cloud-native microservices architecture Plan & deploy modern application platforms & cloud-native platforms Good understanding of AGILE process & methodology Plan & implement solutions & best practices for process automation, security, alerting & monitoring, and availability solutions Should have a good understanding of Infrastructure-as-Code deployments Plan & design CI/CD pipelines across multiple environments Support and work alongside a cross-functional engineering team on the latest technologies Iterate on best practices to increase the quality & velocity of deployments Sustain and improve the process of knowledge sharing throughout the engineering team Keep updated on modern technologies & trends, and advocate the benefits Should possess good team management skills Ability to drive goals / milestones, while valuing & maintaining a strong attention to detail Excellent judgement, analytical & problem-solving skills Excellent communication skills Experience maintaining and deploying highly-available, fault-tolerant systems at scale Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS etc) Version control system experience (e.g. Git, SVN) Experience implementing CI/CD (e.g. Jenkins, TravisCI) Experience with configuration management tools (e.g. Ansible, Chef) Experience with infrastructure-as-code (e.g. Terraform, CloudFormation) Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda) Container registry solutions (Harbor, JFrog, Quay etc) Operational (e.g. HA/Backups) NoSQL experience (e.g. Cassandra, MongoDB, Redis) Good understanding of Kubernetes networking & security best practices Monitoring tools like Datadog, or any other open source tool like Prometheus, Nagios Load balancer knowledge (AVI Networks, NGINX) Location: Pune / Mumbai [Work from Office]

Posted 1 week ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

As a DevOps engineer, you will share your expertise in implementing and maintaining automated build and deployment pipelines, and in optimizing build times and resource usage. You will contribute to CI/CD methodologies and Git branching strategies. You have: Graduate or Postgraduate in Engineering with 4+ years of experience in DevOps and CI/CD pipelines. Experience in Docker, Kubernetes (EKS), OpenShift. Software development experience using Python / Groovy / Shell. Experience in designing and implementing CI/CD pipelines. Experience working with Git technology and understanding of Git branching strategies. It would be nice if you also have: Knowledge of AI/ML algorithms. Knowledge of Yocto, Jenkins, Gerrit, distCC and Zuul. You will leverage experience in Yocto, Jenkins, Gerrit, and other build tools to streamline and optimize the build process. You will proactively monitor build pipelines, investigate failures, and implement solutions to improve reliability and efficiency. You will utilize AI/ML algorithms to automate and optimize data-driven pipelines, improving data processing and analysis. You will work closely with the team to understand their needs and contribute to a collaborative and efficient work environment. Actively participate in knowledge sharing sessions and contribute to the team's overall understanding of best practices and innovative solutions. You will foster a culture of continuous improvement, constantly seeking ways to optimize processes and enhance the overall effectiveness of the team.

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Remote

Naukri logo

8+ years of experience in SRE or related roles. Design, implement, and maintain scalable, reliable infrastructure on AWS. Utilize Dynatrace for monitoring, performance tuning, and troubleshooting of applications and services. AWS ecosystem – EKS, EC2, DynamoDB, Lambda

Posted 1 week ago

Apply

4.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Naukri logo

Experience in modernizing applications to container-based platforms using EKS, ECS, Fargate Proven experience in using DevOps tools during modernization. Solid experience around NoSQL databases. Should have used orchestration engines like Kubernetes, Mesos Java 8, Spring Boot, SQL, Postgres DB and AWS Secondary Skills: React, Redux, JavaScript Hands-on knowledge of AWS deployment services, AWS Elastic Beanstalk, AWS tools & SDK, AWS Cloud9, AWS CodeStar, AWS Command Line Interface etc. and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch and AWS SNS. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be an expert in any of the programming languages like Java, .NET, Node.js, Python, Ruby, Angular.js Preferred technical and professional experience Experience in distributed/scalable systems Knowledge of standard tools for optimizing and testing code Knowledge/Experience of Development/Build/Deploy/Test life cycle

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies