
39 AWS EKS Jobs


4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Do you have a passion for Cloud Native Platforms? That is, envisioning and building the core services that underpin all Thomson Reuters products? Then we want you on our India-based team! This role is in the Platform Engineering organization, where we build the foundational services that power Thomson Reuters products. We focus on the subset of capabilities that help Thomson Reuters deliver digital products to our customers. Our mission is to build a durable competitive advantage for TR by providing building blocks that get value to market faster.

About the Role
In this opportunity as a Senior Software Engineer, you will:
- Establish software engineering best practices; provide tooling that makes compliance frictionless
- Drive a strong emphasis on test and deployment automation
- Participate in all aspects of the development lifecycle: ideation, design, build, test, and operate. We embrace a DevOps culture (you build it, you run it); while we have dedicated 24x7 level-1 support engineers, you may be called on to assist with level-2 support
- Collaborate with all product teams; transparency is a must! Work with development managers, architects, scrum masters, software engineers, DevOps engineers, product managers, and project managers to deliver phenomenal software
- Participate in a Scrum team on an ongoing basis and embrace the agile work model
- Keep up to date with emerging cloud technology trends, especially in the CNCF landscape

About You
- 4+ years of software development experience
- 2+ years of experience building cloud-native infrastructure, applications, and services on AWS, Azure, or GCP
- Hands-on experience with Kubernetes, ideally AWS EKS and/or Azure AKS
- Experience with Istio or other service mesh technologies
- Experience with container security and supply chain security
- Experience with declarative infrastructure as code, CI/CD automation, and GitOps
- Experience with Kubernetes operators written in Golang
- A Bachelor's degree in Computer Science, Computer Engineering, or similar

Posted 5 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for a Java, J2EE, Spring, Spring Boot, REST API, Kafka, AWS EKS Developer to join their team in Bangalore, Karnataka, India. As a Full Stack Engineer, you will be responsible for developing high-performance, fault-tolerant, and mission-critical applications. The ideal candidate will have strong hands-on experience in Java, J2EE, Spring, Spring Boot, and cloud technologies.

Key Responsibilities:
- Design and develop high-performance applications using Java, J2EE, Kafka, Spring, Spring Boot, and cloud technologies
- Rapidly prototype and develop POCs and POTs, learning new skills quickly
- Build server-side components using REST APIs and backend SQL/stored-procedure components
- Work on sophisticated distributed systems, microservices, and message-based frameworks like Kafka
- Hands-on experience with AWS EKS and other AWS managed solutions
- Deploy applications in a DevOps environment using CI/CD tools
- Performance tuning and monitoring using tools like Datadog and Splunk
- Experience with Git/Bitbucket Server, Jenkins, and uDeploy
- Exposure to FinOps and containerized APIs on AWS
- Communicate effectively and lead across different teams and geographies
- Anticipate roadblocks, diagnose problems, and generate effective solutions
- Build rapport with partners and stakeholders
- Adapt to a changing environment and execute tasks with high quality

Minimum Experience Required: 6-9 years

General Expectations:
1) Good communication skills
2) Willingness to work in a 10:30 AM to 8:30 PM shift
3) Flexibility to work at client locations such as GV, Manyata, or EGL in Bangalore
4) Ability to work in a hybrid environment; full remote work is not an option
5) Full return to office expected by 2026

Pre-Requisites:
1) Genuine and digitally signed Form 16 for ALL employments
2) Employment history/details present in UAN/PPF statements
3) Video screening to confirm genuineness and a proper work setup
4) Real work experience in the mandatory skills mentioned in the JD
5) Notice period of 0 to 3 weeks

About NTT DATA:
NTT DATA is a global innovator of business and technology services, committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a strong partner ecosystem. Its services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure and part of the NTT Group, which invests in R&D to support organizations and society in the digital future. Visit us at us.nttdata.com.

Posted 1 week ago

Apply

5.0 - 7.0 years

8 - 12 Lacs

Hyderabad

Hybrid

Location: Hyderabad
Kindly send your resume to +91 93619 12009

Job Description:
Primary (Mandatory) Skills:
- AI and RAG
- AWS
- Python (advanced knowledge)
- FastAPI, Flask
- MLflow

Secondary (Good-to-Have) Skills:
- AWS EKS/ECS
- Unit and functional testing
- Terraform, Jenkins
- Soft skills (communication, collaboration, etc.)

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As an experienced DevOps Engineer joining our development team, you will play a crucial role in the evolution of our Platform Orchestration product. Your expertise will be applied to software incorporating cutting-edge technologies and integration frameworks. We prioritize staff training, investment, and career growth, ensuring you can enhance your skills through exposure to various software validation techniques and industry-standard engineering processes.

Your contributions will include:
- Building and maintaining CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices
- Managing Kubernetes infrastructure (specifically AWS EKS), Helm charts, and service mesh configurations (Istio)
- Using tools like kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting
- Evaluating the security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software
- Supporting development and QA teams with code merge, build, install, and deployment environments
- Continuously improving the software automation pipeline to enhance build and integration efficiency
- Overseeing and maintaining the health of software repositories and build tools, ensuring successful and continuous software builds
- Verifying final software release configurations against specifications, architecture, and documentation
- Performing fulfillment and release activities for timely and reliable deployments

To thrive in this role, you should have:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms
- Deep knowledge of AWS Cloud Services, including EKS, IAM, CloudWatch, S3, and Secrets Manager
- Expertise with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize
- Experience authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools
- Proficiency in scripting/programming languages such as Ruby, Groovy, and Java
- Experience with infrastructure provisioning tools like Docker and CloudFormation

In return, we offer an inclusive culture that reflects our core values, giving you the opportunity to make an impact, develop professionally, and take part in valuable learning experiences. You will benefit from highly competitive compensation, benefits, and rewards programs that recognize and encourage your best work every day. Our engaging work environment promotes work/life balance and offers employee resource groups and social events to foster interaction and camaraderie. Join us in shaping the future of our Platform Orchestration product and growing your skills in a supportive and dynamic team environment.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

As an Application Support Engineer at JIFFY.ai, you will be responsible for ensuring operational excellence across customer-facing applications and backend services built on a modern microservices architecture. Your role will involve identifying, diagnosing, and resolving production issues, maintaining platform availability, supporting customers in multi-tenant environments, and contributing to Business Continuity Planning (BCP) and Disaster Recovery (DR) readiness.

Responsibilities:
- Act as the first responder for production issues raised by customers or monitoring systems
- Troubleshoot full-stack issues involving UI, APIs, backend microservices, and AWS cloud services
- Participate in incident response, manage escalations, and contribute to root cause analysis and documentation
- Collaborate with engineering and DevOps teams to deploy fixes and enhancements
- Maintain and improve runbooks, knowledge base articles, and system health dashboards
- Support and monitor application performance, uptime, and health using tools like Grafana, Prometheus, and OpenSearch
- Proactively detect patterns of recurring issues, recommend long-term solutions or automation, and assist in post-deployment validations
- Enforce budget controls per tenant/app based on customer SLAs and usage
- Ensure compliance with change control processes during critical deployments and rollback scenarios

Qualifications:
- At least 2-3 years of experience in application support, cloud operations, or site reliability roles
- Experience debugging issues in microservices written in Go, Java, Node.js, or Python
- A strong understanding of web protocols, familiarity with AWS services, competence in Linux system administration, and hands-on experience with monitoring/logging tools
- Excellent problem-solving and incident resolution skills, with strong written and verbal communication

Nice-to-have skills include exposure to Kubernetes-based deployments, experience with CI/CD pipelines, scripting skills for automation tasks, and familiarity with incident management frameworks and support SLAs.

Joining the team at JIFFY.ai offers you the opportunity to work with modern technologies, solve real-world problems in enterprise automation, and benefit from a learning-focused culture with technical mentorship and career development opportunities. You will collaborate on-site at the Technopark, Trivandrum office in a fast-paced work environment.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Technology Specialist at Standard Chartered in Bangalore, India, you will play a crucial role in analysing business problems and providing technically advanced solutions. Your ability to think out of the box and foster innovation and automation will be key to establishing a strong team-player approach to problem-solving. With a solid foundation in algorithms, data structures, OOP concepts, and frameworks, you will continuously strive to learn and adapt to new technologies and frameworks. An empowered mindset will drive you to ask questions and seek clarifications, while excellent communication skills will enable seamless interactions with colleagues globally.

You will demonstrate strong technical skills, including exposure to coding in next-gen technologies and an awareness of Agile methodologies. Your technical expertise should encompass object-oriented programming (preferably Java), modern technologies like microservices and UI frameworks such as Angular and React, as well as applied maths and algorithms, including AI/NLP/machine learning algorithms.

In this role, you will lead a team of Developers/Senior Developers, guiding them through development, testing, testing support, implementation, and post-implementation support. Proactively managing risk and keeping stakeholders informed will be essential, along with displaying exemplary conduct in line with the Group's Values and Code of Conduct. Your responsibilities will also include adherence to Risk & Data Quality Management requirements, continuous risk management of the Trade Application System, and awareness of regulatory requirements in platform design. As part of the Build & Maintenance Model, you will support production as and when required, embedding the Group's brand and values within the team.

Your skills and experience in microservices (OCP, Kubernetes), Hadoop, Spark, Scala, Elastic, Trade Risk & AML, Azure DevOps, traditional ETL pipelines, and analytics pipelines will be crucial. Optional experience in machine learning/AI and QUANTEXA certifications will be advantageous. Proficiency in technologies such as AWS EKS, Azure AKS, Angular, microservices (OCP, Kubernetes), Hadoop, Spark, Scala, and Elastic will further enhance your qualifications.

Standard Chartered is an international bank committed to making a positive difference for clients, communities, and employees. If you are seeking a purposeful career with a bank that values diversity and inclusion, we invite you to join us and contribute to driving commerce and prosperity through our unique diversity. Together, we uphold our valued behaviours, challenge each other, strive for continuous improvement, and work collaboratively to build for the long term. In return, we offer a range of benefits, including retirement savings, medical and life insurance, flexible working options, proactive wellbeing support, continuous learning opportunities, and an inclusive work environment where everyone feels respected and can realize their full potential. Visit www.sc.com/careers to explore career opportunities with us.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

10 - 15 Lacs

Bengaluru

Hybrid

Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 7+ years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

25 - 32 Lacs

Chennai, Bengaluru

Work from Office

Hiring Cloud Engineers for an 8-month contract role based in Chennai or Bangalore with hybrid/remote flexibility. The ideal candidate will have 8+ years of IT experience, including 4+ years in AWS cloud migrations, with strong hands-on expertise in AWS MGN, EC2, EKS, Terraform, and scripting using Python or Shell. Responsibilities include leading lift-and-shift migrations, automating infrastructure, migrating storage to EBS, S3, and EFS, and modernizing legacy applications. AWS/Terraform certifications and experience in monolithic and microservices architectures are preferred.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

25 - 35 Lacs

Mangaluru

Hybrid

Work Mode: Hybrid (3 days in the Mangalore office, of which Tuesday/Wednesday are compulsory)
Experience: 8-10 years of full-time experience in software engineering or related domains. Proven ability to design and deliver features autonomously, handling all aspects from concept to maintenance.
Education: Bachelor's or Master's degree in Computer Science or a related field.

The Role
This role demands a versatile software professional with strong leadership qualities, a deep sense of ownership, and the ability to tackle challenges across the full stack.

Required Skills:
Technical expertise in one or more of the following technologies:
- Frontend: ReactJS
- Backend & Serverless: AWS Cognito, GraphQL API (AWS AppSync), AWS Lambda
- Microservices: Event-driven architecture, AWS EKS, TypeScript/Node.js
- Database: Aurora Postgres, AWS QLDB
- Infrastructure: Infrastructure as Code (Terraform, Terragrunt)
Familiarity with setting up foundational frameworks and tech stacks from scratch.

Key Responsibilities:
- Build scalable and customer-centric software solutions aligned with business goals.
- Design, develop, test, deploy, and maintain features across the product lifecycle.
- Take full ownership of your work, from design and implementation to release and maintenance.
- Work in a hybrid mode, with in-office collaboration required on Tuesdays and Wednesdays.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

15 - 16 Lacs

Mohali

Work from Office

About the Role
We are seeking a highly skilled Sr. Site Reliability Engineer (SRE) to lead the implementation, optimization, and management of our observability stack across cloud infrastructure. You will play a key role in ensuring the reliability, scalability, and performance of our platform, spanning microservices on Kubernetes/EC2 and mission-critical systems. This role requires strong problem-solving skills, an automation mindset, and a proactive approach to incident management.

Key Responsibilities
- Design, implement, and manage monitoring, logging, and alerting systems across production and non-production environments.
- Lead incident response, root cause analysis, and post-mortem practices for continuous improvement.
- Define and implement disaster recovery strategies with regular testing.
- Collaborate with development teams to define and track SLAs/SLOs for critical services.
- Optimize AWS cloud infrastructure for cost efficiency, reliability, and scalability.
- Build and maintain automation frameworks for deployment, scaling, and recovery using Terraform, GitLab CI/CD, and Kubernetes.
- Administer Kubernetes clusters, troubleshoot performance bottlenecks, and ensure high availability.
- Manage databases (PostgreSQL or similar), including replication and disaster recovery strategies.
- Contribute to infrastructure security, compliance, and best practices.
- Participate in the on-call rotation and handle high-priority incidents under pressure.

Required Skills & Experience
- 4+ years of experience in an SRE, DevOps, or similar role.
- Strong hands-on experience with AWS services: EC2, EKS, RDS, Cognito, CloudWatch, etc.
- Proven expertise in Kubernetes administration in production environments.
- Proficiency in scripting/programming: Python, Bash, Chef (recipes, cookbooks), Ansible.
- Strong knowledge of Infrastructure as Code (Terraform/CloudFormation).
- Deep experience with observability tools: Prometheus, Grafana, the ELK stack, and distributed tracing.
- Database administration experience with PostgreSQL or similar systems.
- Understanding of network protocols, load balancing, and security best practices.
- Experience with CI/CD pipelines and GitOps workflows.
- Ability to handle multiple incidents and prioritize effectively under pressure.
- Exposure to monitoring solutions like Splunk, Datadog, and Dynatrace.

Preferred Qualifications
- AWS Certified Solutions Architect or AWS DevOps Engineer certification.
- Certified Kubernetes Administrator (CKA).

Why Join Us
- Be part of a fast-growing HealthTech startup transforming healthcare technology.
- Work with modern tools, cutting-edge infrastructure, and a collaborative team.
- Opportunity to own end-to-end infrastructure reliability and automation.
- Competitive salary and growth opportunities.
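The SLA/SLO tracking responsibility above comes down to simple error-budget arithmetic, which the sketch below illustrates; the 99.9% target, the function name, and the request counts are illustrative assumptions, not figures from this posting:

```python
# Error-budget arithmetic for a single service over one rolling window.
# All numbers are illustrative; real targets come from the service's SLA.

SLO_TARGET = 0.999  # 99.9% of requests must succeed


def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent (1.0 = untouched, <0 = blown)."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    if allowed_failures == 0:
        return 1.0 if failed_requests == 0 else float("-inf")
    return 1.0 - failed_requests / allowed_failures


# 1,000,000 requests with 300 failures: the budget allows ~1,000 failures,
# so roughly 70% of the budget remains.
remaining = error_budget_remaining(1_000_000, 300)
```

In practice an SRE team alerts when the remaining budget (or its burn rate) crosses a threshold, rather than on individual failures.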

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a DevOps Engineer, you will play a crucial role in building and maintaining CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices. You will be responsible for managing Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (Istio), and will use tools like kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting.

Your responsibilities will include:
- Evaluating the security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software
- Supporting development and QA teams with code merge, build, install, and deployment environments
- Continuously improving the software automation pipeline to enhance build and integration efficiency
- Monitoring and maintaining the health of software repositories and build tools
- Verifying final software release configurations, ensuring integrity against specifications, architecture, and documentation
- Performing fulfillment and release activities to ensure timely and reliable deployments

To be successful in this role, you should have:
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms
- Deep knowledge of AWS Cloud Services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components
- Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize
- Expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools
- Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation (preferred)
- Familiarity with CI/CD pipeline tools and build systems, including Jenkins and Maven (a plus)
- Experience administering software repositories such as Git or Bitbucket (beneficial)
- Proficiency in scripting/programming languages such as Ruby, Groovy, and Java
- A proven ability to analyze and resolve issues related to performance, scalability, and reliability
- A solid understanding of DNS, load balancing, SSL, TCP/IP, and general networking and security best practices

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Telangana

On-site

You will be responsible for designing and building backend components of our MLOps platform in Python on AWS. This includes collaborating with geographically distributed cross-functional teams and participating in an on-call rotation with the rest of the team to handle production incidents.

To be successful in this role, you should have:
- 3+ years of professional backend development experience with Python
- Experience with web development frameworks such as Flask or FastAPI
- Experience working with WSGI and ASGI web servers such as Gunicorn and Uvicorn
- Experience with concurrent programming designs such as AsyncIO
- Experience with containers (Docker) and AWS ECS or AWS EKS
- Experience with unit and functional testing frameworks
- Experience with public cloud platforms like AWS

Nice-to-have skills:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Experience with big data processing frameworks such as Apache Spark
- Experience with DevOps and IaC tools such as Terraform and Jenkins
- Experience with various Python packaging options such as Wheel, PEX, or Conda
- Experience with metaprogramming techniques in Python

You should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field. In addition to competitive salaries and benefits packages, Nisum India offers its employees continuous learning opportunities, parental medical insurance, various team-building activities, and free meals, including snacks, dinner, and subsidized lunch.
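The AsyncIO-style concurrency this listing asks about can be sketched with stdlib-only code; the metadata-lookup scenario and function names below are hypothetical, not part of the posting:

```python
import asyncio


async def fetch_model_metadata(model_id: str) -> dict:
    # Stand-in for an I/O-bound call (database, S3, or model registry);
    # asyncio.sleep simulates network latency without blocking the loop.
    await asyncio.sleep(0.01)
    return {"model_id": model_id, "status": "ready"}


async def fetch_all(model_ids: list[str]) -> list[dict]:
    # gather() runs all lookups concurrently, so total wall time is
    # roughly one round trip instead of one per model.
    return await asyncio.gather(*(fetch_model_metadata(m) for m in model_ids))


results = asyncio.run(fetch_all(["m1", "m2", "m3"]))
```

ASGI frameworks like FastAPI build on the same event loop: an `async def` route handler can await calls like these directly, which is why the listing pairs AsyncIO with Uvicorn.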

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Engineer - Backend (Python) with over 7 years of experience, you will be based in Hyderabad and play a crucial role in developing the backend components of the GenAI Platform. Your responsibilities will include designing and constructing backend features for the platform on AWS, collaborating with cross-functional teams spread across different locations, and participating in an on-call rotation for managing production incidents. To excel in this role, you must possess the following skills: - A minimum of 7 years of professional experience in backend web development using Python. - Proficiency in AI, RAG, DevOps, and Infrastructure as Code (IaC) tools like Terraform and Jenkins. - Familiarity with MLOps platforms such as AWS Sagemaker, Kubeflow, or MLflow. - Expertise in web development frameworks like Flask, Django, or FastAPI. - Knowledge of concurrent programming concepts like AsyncIO. - Experience with public cloud platforms such as AWS, Azure, or GCP, preferably AWS. - Understanding of CI/CD practices, tools, and frameworks. Additionally, the following skills would be advantageous: - Experience with Apache Kafka and developing Kafka client applications using Python. - Familiarity with big data processing frameworks, particularly Apache Spark. - Proficiency in containers (Docker) and container platforms like AWS ECS or AWS EKS. - Expertise in unit and functional testing frameworks. - Knowledge of various Python packaging options such as Wheel, PEX, or Conda. - Understanding of metaprogramming techniques in Python. Join our team and contribute to creating a safe, compliant, and efficient access platform for LLMs, leveraging both Opensource and Commercial resources while adhering to Experian standards and policies. Be a part of a dynamic environment where you can utilize your expertise to build innovative solutions and drive the growth of the GenAI Platform.,

Posted 1 month ago

Apply

10.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required. Familiarity with various architecture patterns like data lake, data lake house, and data mesh is also important. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, and Data Zone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge in designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and Sagemaker is advantageous. You should also have experience with modern development workflows like git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. Possessing an AWS Professional/Specialty certification or relevant cloud expertise is a plus. 
In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and demonstrate good presentation skills when interacting with executives, IT management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate should have 10 to 18 years of experience; reference job number 12895 when applying.
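The data-lake storage and cataloging concepts in this listing often come down to a consistent partition layout on object storage. A small illustrative sketch of the Hive-style convention used by engines such as Hive, Spark, and Glue to prune scans (the bucket and table names are hypothetical):

```python
from datetime import date

# Illustrative sketch of Hive-style partitioning: key=value path
# segments let query engines skip partitions that can't match a
# date filter. Bucket and table names are invented.
def partition_path(bucket: str, table: str, d: date) -> str:
    return (f"s3://{bucket}/{table}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/")

path = partition_path("analytics-lake", "orders", date(2024, 3, 7))
print(path)  # s3://analytics-lake/orders/year=2024/month=03/day=07/
```

A query filtered to `year=2024 AND month=03` would then read only the matching prefixes rather than the whole table.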

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

karnataka

On-site

Join us as a Performance Testing Specialist in this key role where you will be responsible for undertaking and enabling automated testing activities in all delivery models. You will have the opportunity to support teams in developing quality solutions and ensuring continuous integration for defect-free deployment of customer value. Working in a fast-paced environment, you will gain exposure by closely collaborating with various teams across the bank. This position is offered at the vice president level.

As a Quality Automation Specialist, you will play a crucial role in transforming testing processes by utilizing quality processes, tools, and methodologies to enhance control, accuracy, and integrity. Your responsibilities will include ensuring that new sprint deliveries within a release cycle continue to meet Non-Functional Requirements (NFRs) such as response time, throughput rate, and resource consumption. In this collaborative role, you will lead debugging sessions with software providers, hardware providers, and internal teams to investigate findings and develop solutions. Additionally, you will evolve predictive and intelligent testing approaches based on automation and innovative testing products and solutions.

You will work closely with your team to define and refine the scope of manual and automated testing, and create automated test scripts, user documentation, and artifacts. Your decision-making process will be data-driven, focusing on return on investment and value measures that reflect thoughtful cost management. You will also play a key role in enabling the cross-skilling of colleagues in end-to-end automation testing. To excel in this role, you should have a minimum of twelve years of experience in automated testing, particularly in an Agile development or Continuous Integration/Continuous Delivery (CI/CD) environment.
Proficiency in performance testing tools such as LoadRunner, Apache JMeter, or NeoLoad is essential, as is experience with AWS EKS containers and microservices architecture. Familiarity with monitoring and analyzing performance tests using tools like Grafana, Jaeger, and Graylog is also required. Moreover, we are seeking candidates with expertise in end-to-end and automation testing using the latest tools recommended by the enterprise tooling framework. A background in designing, developing, and implementing automation frameworks in new environments is highly desirable. Effective communication skills to convey complex technical concepts to management-level colleagues, and strong collaboration and stakeholder management skills, are essential for success in this role.

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

As an OpenShift Admin with 6 to 8 years of relevant experience, you will be responsible for building automation to support product development and data analytics initiatives. In this role, you will develop and maintain strong customer relationships to ensure effective service delivery and customer satisfaction. Regular interaction with customers will be essential to refine requirements, gain agreement on solutions and deliverables, provide progress reports, monitor satisfaction levels, identify and resolve concerns, and seek cooperation to achieve mutual objectives.

To be successful in this role, you must have a minimum of 6 years of experience as an OpenShift Admin, with expertise in Kubernetes administration, automation tools such as Ansible, AWS EKS, Argo CD, and Linux administration. Extensive knowledge and experience with OpenShift and Kubernetes are crucial for this infrastructure-focused position. You should be experienced in deploying new app containers from scratch in OpenShift or Kubernetes, as well as upgrading OpenShift and working with observability in these environments. Additional skills that would be beneficial for this role include experience with Anthos/GKE for hybrid cloud, HashiCorp Terraform, and HashiCorp Vault.

As an OpenShift Admin, you will be expected to create, maintain, and track designs at both high and detailed levels, identify new technologies for adoption, conduct consistent code reviews, and propose changes where necessary. You will also be responsible for provisioning infrastructure, developing automation scripts, monitoring system performance, integrating security and compliance measures, documenting configurations and processes, and deploying infrastructure as code and applications using automation and orchestration tools. The hiring process for this position will consist of screening rounds conducted by HR, followed by two technical rounds, and a final HR round.
If you have a strong background in OpenShift administration and related technologies, and you are passionate about driving innovation and excellence in infrastructure management, we encourage you to apply for this role in our Pune office.
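Deploying a new app container from scratch, as this listing describes, starts from a Deployment manifest. A minimal illustrative sketch that builds one programmatically (the app name and image are hypothetical; the field layout follows the standard `apps/v1` schema):

```python
import json

# Illustrative sketch: build a minimal Kubernetes Deployment manifest,
# the starting point for deploying a new app container from scratch.
# The app name and image below are hypothetical.
def deployment(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment("billing-api", "registry.example.com/billing-api:1.0")
print(json.dumps(manifest, indent=2)[:60])
```

In OpenShift the same object can be created with `oc apply -f` once serialized to YAML or JSON; the selector/template label match shown in the comment is the detail most often gotten wrong by hand.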

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

Veeam, the global market leader in data resilience, believes businesses should have full control over their data whenever and wherever they need it. Veeam specializes in providing data resilience solutions encompassing data backup, recovery, portability, security, and intelligence. Headquartered in Seattle, Veeam serves over 550,000 customers worldwide, who rely on Veeam to ensure the continuity of their operations. As we progress together, learning, growing, and creating a significant impact for some of the world's most renowned brands, we present you with an opportunity to be part of this journey.

We are in search of a Platform Engineer to join the Veeam Data Cloud team. The primary goal of the Platform Engineering team is to provide a secure, dependable, and user-friendly platform that facilitates the development, testing, deployment, and monitoring of the VDC product. This role offers an exceptional opportunity for an individual with expertise in cloud infrastructure and software development to contribute to the development of the most successful and advanced data protection platform globally.

Your responsibilities will include:
- Developing and maintaining code to automate our public cloud infrastructure, software delivery pipeline, enablement tools, and internally consumed platform services
- Documenting system design, configurations, processes, and decisions to support our asynchronous, distributed team culture
- Collaborating with a team of remote engineers to construct the VDC platform
- Utilizing a modern technology stack comprising containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain
- Participating in an on-call rotation for product operations

Technologies you will work with include Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, and more.
Qualifications we seek from you:
- Minimum of 3 years of experience in production operations for a SaaS or cloud service provider
- Proficiency in automating infrastructure through code using tools like Pulumi or Terraform
- Familiarity with GitHub Actions and a variety of public cloud services
- Background in building and supporting enterprise SaaS products
- Understanding of operational excellence principles in a SaaS environment
- Proficiency in scripting languages such as Bash or Python
- Knowledge and experience in implementing secure design principles in the cloud
- Demonstrated ability to quickly learn new technologies and implement them effectively
- Strong inclination towards taking action and maintaining direct, frequent communication
- A technical degree from a university

Desirable qualifications:
- Experience with Azure
- Proficiency in high-level programming languages like Go, Java, C/C++, etc.

In return, we provide:
- Family Medical Insurance
- Annual flexible spending allowance for health and well-being
- Life insurance and personal accident insurance
- Employee Assistance Program
- Comprehensive leave package, including parental leave
- Meal Benefit Pass, Transportation Allowance, and Monthly Daycare Allowance
- Veeam Care Days - additional 24 hours for volunteering activities
- Professional training and education opportunities, including courses, workshops, internal meetups, and access to online learning platforms
- Mentorship through our MentorLab program

Please note: Veeam reserves the right to decline applications from candidates permanently located outside India. Veeam Software is dedicated to promoting diversity and equal opportunities and prohibits discrimination based on various factors. All personal data collected during the recruitment process will be handled in accordance with our Recruiting Privacy Notice. By applying for this position, you consent to the processing of your personal data as described in our Recruiting Privacy Notice.
Your application and supporting documents should accurately represent your qualifications and experience. Any misrepresentation may lead to disqualification from employment consideration, or to termination if discovered after employment commences.
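The infrastructure-as-code tooling this listing names (Pulumi, Terraform) ultimately reconciles a desired state against the actual state of the cloud. A toy sketch of that diffing step, with invented resource names, reduced to plain dictionaries:

```python
# Toy sketch of the desired-vs-actual reconciliation at the heart of
# IaC tools such as Terraform or Pulumi. Resource names are invented;
# real tools diff full attribute graphs, not flat dicts.
def plan(desired: dict, actual: dict) -> dict:
    return {
        "create": sorted(desired.keys() - actual.keys()),
        "delete": sorted(actual.keys() - desired.keys()),
        "update": sorted(k for k in desired.keys() & actual.keys()
                         if desired[k] != actual[k]),
    }

desired = {"vpc-main": {"cidr": "10.0.0.0/16"}, "eks-prod": {"nodes": 3}}
actual = {"vpc-main": {"cidr": "10.0.0.0/16"}, "eks-prod": {"nodes": 2},
          "ec2-legacy": {"type": "t2.micro"}}
result = plan(desired, actual)
print(result)
# {'create': [], 'delete': ['ec2-legacy'], 'update': ['eks-prod']}
```

This is the same three-way answer `terraform plan` prints before an apply: resources to add, destroy, and change.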

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

At PwC, the focus in data and analytics revolves around leveraging data to drive insights and make informed business decisions. By utilizing advanced analytics techniques, our team helps clients optimize operations and achieve strategic goals. As a professional in data analysis at PwC, you will specialize in utilizing advanced analytical techniques to extract insights from large datasets, supporting data-driven decision-making. Your role will involve leveraging skills in data manipulation, visualization, and statistical modeling to assist clients in solving complex business problems.

PwC US - Acceleration Center is currently seeking individuals with a strong analytical background to join our Analytics Consulting practice. As a Senior Associate, you will be an essential part of business analytics teams in India, collaborating with clients and consultants in the U.S. You will lead teams for high-end analytics consulting engagements and provide business recommendations to project teams.

**Years of Experience:** Candidates should possess 4+ years of hands-on experience.

**Must Have:**
- Experience in building ML models in cloud environments (at least one of Azure ML, GCP's Vertex AI platform, or AWS SageMaker)
- Knowledge of predictive/prescriptive analytics, particularly the usage of Log-Log and Log-Linear regression, Bayesian regression techniques, machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks
- Good knowledge of statistics, including statistical tests & distributions
- Experience in data analysis, such as data cleansing, standardization, and data preparation for machine learning use cases
- Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib)
- Advanced-level programming in SQL or Python/PySpark
- Expertise with visualization tools like Tableau, Power BI, AWS QuickSight, etc.
**Nice To Have:**
- Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow)
- Good communication and presentation skills

**Roles And Responsibilities:**
- Develop and execute project & analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize data sources, acquire datasets, and clarify data & use cases for a strong understanding of data and business problems
- Drive and conduct analysis using advanced analytics tools and mentor junior team members
- Implement quality control measures to ensure deliverable integrity
- Validate analysis outcomes and recommendations with stakeholders, including the client team
- Build storylines and deliver presentations to the client team and/or PwC project leadership team
- Contribute to knowledge sharing and firm-building activities

**Professional And Educational Background:** Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
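The Log-Log regression technique listed above fits a power law y = a·x^b by running ordinary least squares on log-transformed data, so the exponent b appears as the slope of a straight line. A small self-contained sketch on synthetic data:

```python
import math

# Sketch of Log-Log regression: fit y = a * x**b by ordinary least
# squares on (log x, log y). The data below is synthetic and follows
# an exact power law, so the fit recovers a=3, b=0.5 exactly.
def loglog_fit(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # slope of the OLS line in log space = power-law exponent b
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)  # intercept back-transformed to a
    return a, b

xs = [1, 2, 4, 8, 16]
ys = [3 * x ** 0.5 for x in xs]
a, b = loglog_fit(xs, ys)
print(round(a, 6), round(b, 6))  # 3.0 0.5
```

In pricing or marketing-mix work, b is read directly as an elasticity: a 1% change in x moves y by roughly b%.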

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

ahmedabad, gujarat

On-site

As the DevOps Lead, you will be responsible for leading the design, implementation, and management of enterprise container orchestration platforms using Rafey and Kubernetes. Your role will involve overseeing the onboarding and deployment of applications on Rafey platforms, utilizing AWS EKS and Azure AKS. You will play a key role in developing and maintaining CI/CD pipelines with Azure DevOps to ensure efficient and reliable application deployment. Collaboration with cross-functional teams is essential to ensure seamless integration and operation of containerized applications. Your expertise will also be required to implement and manage infrastructure as code using tools such as Terraform, ensuring the security, reliability, and scalability of containerized applications and infrastructure.

In addition to your technical responsibilities, you will be expected to mentor and guide junior DevOps engineers, fostering a culture of continuous improvement and innovation within the team. Monitoring and optimizing system performance, troubleshooting issues, and staying up-to-date with industry trends and best practices are also crucial aspects of this role.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in DevOps, with a focus on container orchestration platforms.
- Extensive hands-on experience with Kubernetes, EKS, and AKS. Knowledge of the Rafey platform is a plus.
- Proven track record of onboarding and deploying applications on Kubernetes platforms, including AWS EKS and Azure AKS.
- Strong knowledge of Kubernetes manifest files, Ingress, Ingress Controllers, and Azure DevOps CI/CD pipelines.
- Proficiency in infrastructure-as-code tools like Terraform.
- Excellent problem-solving skills, knowledge of secret management and RBAC configuration, and hands-on experience with Helm Charts.
- Strong communication and collaboration skills, experience with cloud platforms (AWS, Azure), and security best practices in a DevOps environment.

Preferred Skills:
- Strong cloud knowledge (AWS & Azure) and Kubernetes expertise.
- Experience with other enterprise container orchestration platforms and tools.
- Familiarity with monitoring and logging tools like Datadog, and an understanding of network topology and system architecture.
- Ability to work in a fast-paced, dynamic environment.

Good to Have:
- Knowledge of the Rafey platform (a Kubernetes management platform) and hands-on experience with GitOps technology.
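The RBAC configuration this listing asks for usually starts from a namespaced Role with least-privilege rules. An illustrative sketch that builds one as a plain dict (the namespace and role name are hypothetical; the field layout follows the standard `rbac.authorization.k8s.io/v1` schema):

```python
# Illustrative sketch of Kubernetes RBAC: a namespaced Role granting
# read-only access to pods and their logs. The namespace and role
# name are hypothetical.
def read_only_role(namespace: str, name: str = "pod-reader") -> dict:
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": name},
        "rules": [{
            "apiGroups": [""],                 # "" is the core API group
            "resources": ["pods", "pods/log"],
            "verbs": ["get", "list", "watch"], # read-only: no create/delete
        }],
    }

role = read_only_role("payments")
print(role["metadata"])  # {'namespace': 'payments', 'name': 'pod-reader'}
```

A RoleBinding would then attach this Role to a user or ServiceAccount; keeping the verbs to `get`/`list`/`watch` is what makes the grant read-only.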

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

As a part of our team at our technology company in Noida, Uttar Pradesh, India, you will play a crucial role in designing secure and scalable systems and applications for various industries using AWS/GCP/Azure or similar services. Your responsibilities will include integrations between different systems, applications, browsers, and networks, as well as the analysis of business requirements for selecting appropriate solutions on the AWS platform. Additionally, you will deliver high-speed, pixel-perfect web applications and stay updated on the latest technology trends, with hands-on experience in modern architecture, microservices, containers, Kubernetes, etc.

You will be expected to solve design and technical problems, demonstrate proficiency in various design patterns/architectures, and have hands-on experience with the latest tech stacks such as MEAN, MERN, Java, and Lambdas. Experience with CI/CD and DevOps practices is essential for this role. Communication with customers, both business and IT, is a key aspect of the position, along with supporting pre-sales teams in workshops and offer preparation. Knowledge of multiple cloud platforms such as AWS, GCP, or Azure is advantageous, with at least one being a requirement.

Your responsibilities will involve facilitating technical discussions with customers, partners, and internal stakeholders, providing domain expertise around public cloud and enterprise technology, and promoting Google Cloud with customers. Creating and delivering best-practice recommendations, tutorials, blog posts, and presentations will be part of your routine to support technical, business, and executive partners. Furthermore, you will provide feedback to product and engineering teams, contribute to the Solutions Go-to-Market team, and ensure timely delivery of high-quality work from team members.
To succeed in this role, you should be proficient in a diverse application-ecosystem tech stack, including programming languages like JavaScript/TypeScript (preferred), HTML, and Java (Spring Boot). Knowledge of microservice architecture, PWAs, responsive apps, micro-frontends, Docker, Kubernetes, nginx, HAProxy, Jenkins, LoopBack, Express, NextJS, NestJS, React/Angular, and data modeling for NoSQL or SQL databases is essential. Experience with cloud services such as AWS CloudFront and AWS Lambda (or their Azure equivalents), Apache Kafka, and Git version control, plus an engineering background (B.Tech/M.Tech/PhD), is required.

Nice-to-have skills include an understanding of SQL or NoSQL databases, experience in architectural solutions for the Financial Services domain, working with sales teams to design appropriate solutions, detailed exposure to cloud providers like AWS, GCP, and Azure, designing serverless secure web applications, working in a fast-paced startup environment, and certifications from cloud or data solutions providers like AWS and GCP.

Joining our team comes with benefits such as group medical policies, equal employment opportunity, maternity leave, skill development, 100% sponsorship for certification, work-life balance, flexible work hours, and zero leave tracking. If you are passionate about designing cutting-edge solutions and collaborating with clients to unlock the potential of AI, we welcome you to apply for this role and be a part of our dynamic team.
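The serverless work this listing mentions (AWS Lambda behind an API) centers on small event-driven handlers. A minimal illustrative handler for an API Gateway proxy-style event; the event shape follows the standard proxy format, while the greeting logic is invented:

```python
import json

# Minimal sketch of an AWS Lambda-style handler for an API Gateway
# proxy event: read a query parameter, return a JSON response dict.
# The greeting logic is invented for illustration.
def handler(event: dict, context=None) -> dict:
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

resp = handler({"queryStringParameters": {"name": "dev"}})
print(resp["statusCode"], resp["body"])  # 200 {"message": "hello, dev"}
```

Because the handler is a pure function of the event dict, it can be unit-tested exactly like this, with no AWS account involved; the `or {}` guards against the `null` that API Gateway sends when no query string is present.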

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer - Backend (Python) with 7+ years of experience, you will be responsible for designing and building the backend components of the GenAI Platform in Hyderabad. Your role will involve collaborating with geographically distributed cross-functional teams and participating in an on-call rotation to handle production incidents. The GenAI Platform offers safe, compliant, and cost-efficient access to LLMs, including open-source and commercial ones, while adhering to Experian standards and policies. You will work on building reusable tools, frameworks, and coding patterns for fine-tuning LLMs or developing RAG-based applications.

To succeed in this role, you must possess the following skills:
- 7+ years of professional backend web development experience with Python
- Experience with AI and RAG
- Proficiency in DevOps & IaC tools like Terraform and Jenkins
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow
- Expertise in web development frameworks such as Flask, Django, or FastAPI
- Knowledge of concurrent programming designs like AsyncIO
- Experience with public cloud platforms like AWS, Azure, or GCP (preferably AWS)
- Understanding of CI/CD practices, tools, and frameworks

Additionally, the following skills would be considered nice to have:
- Experience with Apache Kafka and developing Kafka client applications in Python
- Familiarity with big data processing frameworks, especially Apache Spark
- Knowledge of containers (Docker) and container platforms like AWS ECS or AWS EKS
- Proficiency in unit and functional testing frameworks
- Experience with various Python packaging options such as Wheel, PEX, or Conda
- Understanding of metaprogramming techniques in Python

Join our team and contribute to the development of cutting-edge technologies in a collaborative and dynamic environment.
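The RAG-based applications this listing mentions hinge on a retrieval step: rank stored chunks by similarity to a query embedding and feed the best matches to the LLM. A toy sketch of that ranking with invented 3-d vectors standing in for real embedding-model output:

```python
import math

# Toy sketch of the retrieval step in a RAG application: rank stored
# chunks by cosine similarity to a query vector. The 3-d vectors are
# invented stand-ins for real embedding-model output.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

chunks = {
    "pricing policy": [0.9, 0.1, 0.0],
    "refund policy":  [0.8, 0.2, 0.1],
    "release notes":  [0.0, 0.1, 0.9],
}
query = [1.0, 0.0, 0.0]
best = max(chunks, key=lambda c: cosine(chunks[c], query))
print(best)  # pricing policy
```

In production the vectors come from an embedding model and live in a vector store, but the ranking criterion is the same cosine score.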

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

As a DevOps Engineer, you will play a crucial role in constructing and managing a robust, scalable, and reliable zero-downtime platform. You will be actively involved in a newly initiated greenfield project that utilizes modern infrastructure and automation tools to support our engineering teams. This is a valuable opportunity to collaborate with an innovative team, fostering a culture of fresh thinking, integrating AI and automation, and contributing to our cloud-native journey. If you are enthusiastic about automation, cloud infrastructure, and delivering high-quality production-grade platforms, this position gives you the opportunity to create a significant impact.

Your primary responsibilities will include:
- **Hands-On Development**: Design, implement, and optimize AWS infrastructure by engaging in hands-on development using Infrastructure as Code (IaC) tools.
- **Automation & CI/CD**: Develop and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines to automate rapid, secure, and seamless deployments.
- **Platform Reliability**: Ensure the high availability, scalability, and resilience of our platform by leveraging managed services.
- **Monitoring & Observability**: Implement and oversee proactive observability using tools like DataDog to monitor system health, performance, and security, ensuring prompt issue identification and resolution.
- **Cloud Security & Best Practices**: Apply cloud and security best practices, including configuring networking, encryption, secrets management, and identity/access management.
- **Continuous Improvement**: Contribute innovative ideas and solutions to enhance our DevOps processes.
- **AI & Future Tech**: Explore opportunities to incorporate AI into our DevOps processes and contribute towards AI-driven development.
Your experience should encompass proficiency in the following technologies and concepts:
- **Tech Stack**: Terraform, Terragrunt, Helm, Python, Bash, AWS (EKS, Lambda, EC2, RDS/Aurora), Linux, and GitHub Actions.
- **Strong Expertise**: Hands-on experience with Terraform, IaC principles, CI/CD, and the AWS ecosystem.
- **Networking & Cloud Configuration**: Proven experience with networking (VPC, Subnets, Security Groups, API Gateway, Load Balancing, WAF) and cloud configuration (Secrets Manager, IAM, KMS).
- **Kubernetes & Deployment Strategies**: Comfortable with Kubernetes, ArgoCD, Istio, and deployment strategies like blue/green and canary.
- **Cloud Security Services**: Familiarity with cloud security services such as Security Hub, GuardDuty, Inspector, and vulnerability observability.
- **Observability Mindset**: Strong belief in measuring everything and utilizing tools like DataDog for platform health and security visibility.
- **AI Integration**: Experience with embedding AI into DevOps processes is considered advantageous.

This role presents an exciting opportunity to contribute to cutting-edge projects, collaborate with a forward-thinking team, and drive innovation in the realm of DevOps engineering.
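The canary strategy listed above shifts traffic to a new version in steps, promoting only while the observed error rate stays inside a budget. A toy sketch of that control loop; the step weights and error budget are illustrative, and in practice a tool like Argo Rollouts with Istio traffic weights drives this:

```python
# Toy sketch of a canary rollout: shift traffic to the new version in
# steps, rolling back if the observed error rate breaches the budget.
# Step weights and the 1% budget are illustrative only.
STEPS = [5, 25, 50, 100]     # percent of traffic sent to the canary
ERROR_BUDGET = 0.01          # max tolerated error rate per step

def rollout(observed_error_rates):
    served = []
    for weight, err in zip(STEPS, observed_error_rates):
        if err > ERROR_BUDGET:
            return served, "rolled back"   # stop shifting traffic
        served.append(weight)
    return served, "promoted"

print(rollout([0.002, 0.004, 0.003, 0.005]))  # ([5, 25, 50, 100], 'promoted')
print(rollout([0.002, 0.020]))                # ([5], 'rolled back')
```

Blue/green is the degenerate case of the same idea: a single step from 0% to 100% with an instant switch back on failure.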

Posted 1 month ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Hybrid

Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
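The graph-database analytics this listing mentions revolve around traversal queries such as "what is reachable from this node?". A toy sketch of that query as a breadth-first search over an adjacency map; the node IDs are invented:

```python
from collections import deque

# Toy sketch of a graph-database query pattern: breadth-first
# traversal over an adjacency map to find everything reachable from
# a start node. Node IDs are invented.
def reachable(graph: dict, start: str) -> set:
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

orders = {"user:1": ["order:7"], "order:7": ["sku:a", "sku:b"], "sku:a": []}
print(sorted(reachable(orders, "user:1")))
# ['order:7', 'sku:a', 'sku:b', 'user:1']
```

Dedicated graph stores execute the same traversal over disk-resident adjacency structures, which is what makes multi-hop queries cheap compared to repeated relational joins.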

Posted 2 months ago

Apply