4 - 8 years
5 - 9 Lacs
Pune
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,500 people located in 19 countries across the globe. Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details please visit www.persistent.com
About The Position
We are looking for an experienced AWS Developer responsible for making our app more scalable and reliable. You will containerize our application and migrate it to EKS or another AWS service such as ECS or Lambda; at present we run our services on EC2 machines using Auto Scaling Groups. You will be responsible for setting up a monitoring stack, whose metrics will be used for service capacity planning. Additionally, you will update our deployment model to cover automatic rollbacks, short downtime when a new version is deployed to production servers, and similar challenges. Migration to the AWS CI/CD stack will also form a part of your responsibilities.
What You'll Do
•Assist in the rollout and training of resources on utilizing AWS data science support tools and the AWS environment for development squads.
•Work within the client's AWS environment to help implement AI/ML model development and data platform architecture.
•Help evaluate, recommend, and assist with installation of cloud-based tools.
•Wrangle data and host and deploy AI models.
Expertise You'll Bring
•A Bachelor's or Master's degree in science, engineering, mathematics, or equivalent experience
•5+ years working as a DevOps engineer
•Strong hands-on experience with AWS (Lambda, S3, or similar services)
•Working in an Agile environment
•Object-Oriented Programming (OOP)
•Relational databases (MySQL preferred)
•Proficiency in containerization tools
•Linux shell scripting (Bash, Python)
•Git
•CI/CD using Jenkins
•Containerization (Kubernetes, Pivotal Cloud Foundry, or other similar tools)
•Software development process, including architectural styles and design patterns
•Creating CI/CD pipelines using Jenkins, CodeBuild, AWS ECR, and Helm
•Jenkins job as code and infrastructure as code
•All aspects of provisioning compute resources within the AWS environment
Benefits
•Competitive salary and benefits package
•Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
•Opportunity to work with cutting-edge technologies
•Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
•Annual health check-ups
•Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment
•We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.
Let's unleash your full potential. See Beyond, Rise Above
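As an illustration of the monitoring work described above, here is a minimal sketch (not the employer's actual stack) of publishing a custom CloudWatch metric with boto3 that capacity-planning dashboards could query; the namespace, metric name, and service name are hypothetical.

```python
import boto3


def publish_capacity_metric(service_name: str, active_tasks: int) -> None:
    """Publish a custom metric that capacity-planning dashboards can query."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_data(
        Namespace="Custom/CapacityPlanning",  # hypothetical namespace
        MetricData=[
            {
                "MetricName": "ActiveTasks",  # hypothetical metric
                "Dimensions": [{"Name": "Service", "Value": service_name}],
                "Value": float(active_tasks),
                "Unit": "Count",
            }
        ],
    )


if __name__ == "__main__":
    publish_capacity_metric("orders-api", 12)  # placeholder service name and value
```

A script like this could run as a cron job or sidecar on the EC2/ECS hosts and feed the same dashboards used for capacity planning.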
Posted 3 months ago
4 - 8 years
5 - 9 Lacs
Bengaluru
Work from Office
About Persistent
We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 14 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,186M annual revenue (13.2% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to more than 23,850 people located in 21 countries across the globe. Throughout this market-leading growth, we've maintained strong employee satisfaction: over 94% of our employees approve of the CEO and 89% recommend working at Persistent to a friend. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details please visit www.persistent.com
About The Position
We are looking for an experienced AWS Developer responsible for making our app more scalable and reliable. You will containerize our application and migrate it to EKS or another AWS service such as ECS or Lambda; at present we run our services on EC2 machines using Auto Scaling Groups. You will be responsible for setting up a monitoring stack, whose metrics will be used for service capacity planning. Additionally, you will update our deployment model to cover automatic rollbacks, short downtime when a new version is deployed to production servers, and similar challenges. Migration to the AWS CI/CD stack will also form a part of your responsibilities.
What You'll Do
•Assist in the rollout and training of resources on utilizing AWS data science support tools and the AWS environment for development squads.
•Work within the client's AWS environment to help implement AI/ML model development and data platform architecture.
•Help evaluate, recommend, and assist with installation of cloud-based tools.
•Wrangle data and host and deploy AI models.
Expertise You'll Bring
•A Bachelor's or Master's degree in science, engineering, mathematics, or equivalent experience
•5+ years working as a DevOps engineer
•Strong hands-on experience with AWS (Lambda, S3, or similar services)
•Working in an Agile environment
•Object-Oriented Programming (OOP)
•Relational databases (MySQL preferred)
•Proficiency in containerization tools
•Linux shell scripting (Bash, Python)
•Git
•CI/CD using Jenkins
•Containerization (Kubernetes, Pivotal Cloud Foundry, or other similar tools)
•Software development process, including architectural styles and design patterns
•Creating CI/CD pipelines using Jenkins, CodeBuild, AWS ECR, and Helm
•Jenkins job as code and infrastructure as code
•All aspects of provisioning compute resources within the AWS environment
Benefits
•Competitive salary and benefits package
•Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
•Opportunity to work with cutting-edge technologies
•Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
•Annual health check-ups
•Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment
•We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
•Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.
Let's unleash your full potential. See Beyond, Rise Above
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Chennai, Bengaluru, Coimbatore
Work from Office
Experience in server administration & cloud operations, specifically with AWS & Azure. Monitor system performance & ensure compliance with requirements. Collaborate with users & vendors to resolve issues, ensuring timely service & root-cause analysis.
Required Candidate Profile
Proficiency in IAM, ECS, and EKS for AWS/Azure. Strong expertise with the Windows/Linux platforms. Uphold security through access controls and backups on both AWS & Azure Cloud.
Posted 3 months ago
6 - 11 years
0 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
Role: Java AWS Developer
Experience: 6 to 12 years
Location: Chennai / Hyderabad / Bangalore / Pune
Work timing: 2 PM to 11 PM
Work Mode: Hybrid
Job Description:
•8+ years of Java development experience, with expertise in Java 8/11+, JEE, Spring Boot, and RESTful API development.
•Strong experience with AWS ECS, Fargate, and cloud migration strategies for enterprise applications.
•Deep understanding of AWS networking concepts, including VPCs, ALB/NLB, security groups, and IAM policies.
•Hands-on experience with containerization (Docker, ECS, Kubernetes) and ECS-related Infrastructure as Code (AWS CDK).
•Proficiency in AWS-native databases, including DynamoDB and Amazon RDS (PostgreSQL/MySQL), and caching solutions like ElastiCache (Redis).
•Familiarity with messaging and event-driven architecture, including SNS, SQS, and AWS EventBridge.
•Strong expertise in performance tuning, scalability, and distributed system design.
•Experience implementing security best practices in a cloud environment, including encryption, IAM, and least-privilege access.
•Proficiency in developing, deploying, and debugging cloud-based applications using AWS security best practices (e.g., using IAM roles rather than embedding secret and access keys in the code).
•Proficient in DevOps concepts, CI/CD tools, and automation frameworks.
•Experience working in an Agile development environment with Jira, Confluence, and Git-based repositories.
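As a hedged illustration of the "ECS-related Infrastructure as Code (AWS CDK)" requirement, here is a minimal CDK sketch of an ECS Fargate service behind an Application Load Balancer. The posting is Java-centric, but CDK supports several languages; this sketch uses Python, hypothetical stack and resource names, and the public amazon-ecs-sample image.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2, aws_ecs as ecs, aws_ecs_patterns as ecs_patterns
from constructs import Construct


class OrdersServiceStack(Stack):  # hypothetical stack name
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Networking and cluster for the Fargate tasks
        vpc = ec2.Vpc(self, "ServiceVpc", max_azs=2)
        cluster = ecs.Cluster(self, "ServiceCluster", vpc=vpc)

        # Load-balanced Fargate service using a public sample image
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "OrdersFargateService",
            cluster=cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=2,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
            ),
        )


app = App()
OrdersServiceStack(app, "OrdersServiceStack")
app.synth()
```

In practice the container image would come from ECR and the VPC would usually be imported rather than created per stack; the pattern construct shown here wires up the ALB, target group, and service definition in one place.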
Posted 3 months ago
3 - 6 years
7 - 13 Lacs
Bhubaneshwar, Hyderabad
Hybrid
We are happy to invite you to apply for an opening with an IT MNC for a Python Developer (MLOps) in Hyderabad & Bhubaneswar. This is a full-time position.
Working Mode: Hybrid (4 Days WFO, 1 Day WFH)
Experience: 3-6 Yrs
Notice Period: Immediate - 30 Days
Interview Mode: 2 rounds virtual + final round F2F, on weekdays only
Note: We are considering only local candidates, as the final interview will be F2F.
Salary: 8 LPA - 13 LPA
Mandatory Skills: Python + AWS (EKS & ECS) + Flask + CI/CD
What You'll Do:
•Design & build backend components of our MLOps platform in Python on AWS.
•Collaborate with geographically distributed cross-functional teams.
•Participate in an on-call rotation with the rest of the team to handle production incidents.
What You Know:
•At least 3+ years of professional backend development experience with Python.
•Experience with web development frameworks such as Flask or FastAPI.
•Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn, etc.
•Experience with concurrent programming designs such as AsyncIO.
•Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
•Experience with unit and functional testing frameworks.
•Experience with public cloud platforms like AWS.
•Experience with CI/CD practices, tools, and frameworks.
If you are planning to switch jobs and have the matching skills and experience, kindly share your updated CV ASAP.
Regards,
Nitin Kr Gupta
Email: nitin.gupta@heylushr.com
Heylus HR Pvt Ltd, Noida
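To illustrate the Python + Flask + ECS/EKS combination this posting asks for, here is a minimal Flask sketch (hypothetical routes, not the employer's platform) of the kind of backend component that would be containerized and served behind a WSGI server such as Gunicorn.

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.get("/healthz")
def healthz():
    # Health endpoint used by an ECS/EKS load balancer or readiness probe
    return jsonify(status="ok")


@app.post("/models/<name>/predict")
def predict(name: str):
    # Placeholder handler: a real implementation would call the model-serving backend
    return jsonify(model=name, prediction=None), 202


if __name__ == "__main__":
    # Local development only; in a container this runs under a WSGI server
    app.run(host="0.0.0.0", port=8000)
```

In a container it would typically be started with something like `gunicorn -w 4 -b 0.0.0.0:8000 app:app`, with the image pushed to ECR and deployed to ECS or EKS by the CI/CD pipeline.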
Posted 3 months ago
3 - 8 years
5 - 11 Lacs
Bengaluru
Work from Office
Job Title: IT Delivery
Responsibilities:
A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional Requirements:
•Responsible for day-to-day operations and leading a team of server, storage, and backup admins.
•Work with customer leads and application owners, and be the technical SPOC for client reporting and escalation.
•Oversee software upgrades and management of infrastructure in a multi-vendor environment.
•Work with vendors to POC new products and enhance capabilities of existing platforms.
•Prepare and present operations SLAs, tasks, and project updates to the customer.
•Support a multi-vendor, globally distributed infrastructure environment.
•Advise the customer on performance management, availability management, configuration management, and reporting.
•Solution preparation for migrating different workloads onto standardized platforms, and consolidation, particularly for remote offices.
•SLA reporting, capacity trending, and capacity planning for a large, globally distributed environment running at petabyte scale.
Preferred Skills:
•Storage Technology->Backup Administration->Backup Technologies->Veritas backup
•Technology->Backup Administration->IBM TSM
•Technology->Storage Administration->EMC
•Technology->Storage Administration->NetApp
•Technology->Backup Administration->Commvault
Additional Responsibilities:
Preferred skills: The candidate should have a storage & backup background.
Storage technologies: Nasuni, Dell ECS, Dell PowerScale.
Backup: Cohesity
Educational Requirements: Bachelor of Engineering, Bachelor of Technology
Service Line: Cloud & Infrastructure Services
*Location of posting is subject to business requirements
Posted 3 months ago
6 - 7 years
16 - 20 Lacs
Bengaluru
Work from Office
Role & responsibilities:
•Maintaining the stability and security of our applications hosted in AWS
•Developing and maintaining CDK scripts; an initial task is to safely migrate an existing legacy environment to CDK
•Working with and mentoring the junior DevOps engineer who has experience of our systems
•Working independently with a remote team
•Project management
•A high level of English and good communication skills are required
Deep experience of the following in AWS is required: security, scalable architecture, Docker/containers, networking, monitoring tools, performance analysis.
Experience with or familiarity with the following tools and technologies: CDK (Golang is highly desirable), ECS, Fargate, Lambda, RDS Aurora/PostgreSQL, EC2, S3, IAM, Route 53, SMS/SNS/SES, CloudWatch.
Posted 3 months ago
10 - 15 years
37 - 45 Lacs
Bengaluru
Work from Office
Experience: Minimum of 10+ years in database development and management roles.
•SQL Mastery: Advanced expertise in crafting and optimizing complex SQL queries and scripts.
•AWS Redshift: Proven experience in managing, tuning, and optimizing large-scale Redshift clusters.
•PostgreSQL: Deep understanding of PostgreSQL, including query planning, indexing strategies, and advanced tuning techniques.
•Data Pipelines: Extensive experience in ETL development and integrating data from multiple sources into cloud environments.
•Cloud Proficiency: Strong experience with AWS services like ECS, S3, KMS, Lambda, Glue, and IAM.
•Data Modeling: Comprehensive knowledge of data modeling techniques for both OLAP and OLTP systems.
•Scripting: Proficiency in Python, C#, or other scripting languages for automation and data manipulation.
Preferred Qualifications:
•Leadership: Prior experience in leading database or data engineering teams.
•Data Visualization: Familiarity with reporting and visualization tools like Tableau, Power BI, or Looker.
•DevOps: Knowledge of CI/CD pipelines, infrastructure as code (e.g., Terraform), and version control (Git).
•Certifications: Any relevant certifications (e.g., AWS Certified Solutions Architect, AWS Certified Database - Specialty, PostgreSQL Certified Professional) will be a plus.
•Azure Databricks: Familiarity with Azure Databricks for data engineering and analytics workflows will be a significant advantage.
Soft Skills:
•Strong problem-solving and analytical capabilities.
•Exceptional communication skills for collaboration with technical and non-technical stakeholders.
•A results-driven mindset with the ability to work independently or lead within a team.
Qualification: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or equivalent. 10+ years of experience.
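As a small, hedged example of the SQL-plus-Python scripting mix this role describes, the sketch below runs an aggregate query from Python via psycopg2; it works against PostgreSQL and, with the port set to 5439, against Redshift. The environment variables and the `sales` table are hypothetical.

```python
import os

import psycopg2


def fetch_top_customers(limit: int = 10):
    """Run a simple aggregate query against PostgreSQL or Redshift."""
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],                     # hypothetical env vars
        port=int(os.environ.get("DB_PORT", "5439")),    # 5439 is the Redshift default
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT customer_id, SUM(amount) AS total_spend
                FROM sales                     -- hypothetical table
                GROUP BY customer_id
                ORDER BY total_spend DESC
                LIMIT %s
                """,
                (limit,),
            )
            return cur.fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    for customer_id, total_spend in fetch_top_customers(5):
        print(customer_id, total_spend)
```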
Posted 3 months ago
10 - 15 years
37 - 45 Lacs
Bengaluru
Work from Office
•Bachelor's degree in Computer Science/Information Technology or a related technical field, or equivalent technology experience
•10+ years of experience in software development
•8+ years of experience in DevOps
•Experience with the following cloud-native tools: Git, Jenkins, Grafana, Prometheus, Ansible, Artifactory, Vault, Splunk, Consul, Terraform, Kubernetes
•Working knowledge of containers, i.e., Docker and Kubernetes, ideally with experience transitioning an organization through its adoption
•Demonstrable experience with configuration, orchestration, and automation tools such as Jenkins, Puppet, Ansible, Maven, and Ant to provide full-stack integration
•Strong working knowledge of enterprise platforms, tools, and principles including Web Services, Load Balancers, Shell Scripting, Authentication, IT Security, and Performance Tuning
•Demonstrated understanding of system resiliency, redundancy, failovers, and disaster recovery
•Experience working with a variety of vendor APIs including cloud, physical, and logical infrastructure devices
•Strong working knowledge of cloud offerings and cloud DevOps services (EC2, ECS, IAM, Lambda, cloud services, AWS CodeBuild, CodeDeploy, CodePipeline, etc., or Azure DevOps, API Management, PaaS)
•Experience managing and deploying Infrastructure as Code, using tools like Terraform, Helm charts, etc.
•Manage and maintain standards for DevOps tools used by the team
Posted 3 months ago
3 - 7 years
11 - 15 Lacs
Noida
Work from Office
•Proficiency in Python, TensorFlow, PyTorch, or similar AI frameworks
•Experience in ML projects deployed on cloud, preferably AWS (minimum 6 years' experience)
•Exposure to contributing to architecting/solutioning AI/ML projects and use cases
•Cloud experience: Lambda, API Gateway, ECS, RDS, CloudFront, OpenSearch (good to have)
•Recent 12 months of exposure to generative models (Llama, Anthropic, Titan, OpenAI - at least any two)
•Experience in natural language processing (NLP), entity extraction, and classification problems
•Knowledge of data management, processing pipelines, and working with large-scale datasets
•Strong problem-solving abilities, communication skills, and the ability to collaborate with technical and non-technical stakeholders
•Knowledge of Glue, SageMaker, Bedrock, AWS CDK - nice to have
Mandatory Competencies: Cloud - AWS; Data Science - Machine Learning (ML); ETL - AWS Glue; Data Science - AI; Python - Python
At Iris Software, we offer world-class benefits designed to support the financial, health, and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
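To make the Lambda + API Gateway part of the stack concrete, here is a minimal, hedged sketch of a Lambda handler for a text-classification endpoint behind an API Gateway proxy integration; the label set is hypothetical and the model call is stubbed out (in practice it might call a SageMaker endpoint or a Bedrock model).

```python
import json

LABELS = ("invoice", "contract", "support_ticket")  # hypothetical classes


def classify(text: str) -> str:
    """Stub for the real model call (e.g., a SageMaker endpoint or Bedrock model)."""
    return LABELS[len(text) % len(LABELS)]


def lambda_handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    text = body.get("text", "")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"label": classify(text)}),
    }
```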
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
About The Role:
We are seeking an experienced and skilled DevOps Engineer with a strong focus on Jenkins and AWS to join our dynamic team. The ideal candidate will have deep hands-on experience managing Jenkins environments, AWS infrastructure, and supporting DevOps processes through automation, CI/CD pipelines, and infrastructure-as-code practices. This role requires a strong understanding of cloud technologies, containerization, and configuration management, along with excellent problem-solving abilities and strong leadership skills. The role ensures the efficient planning, provisioning, installation/configuration, maintenance, and/or operations of the hardware and software infrastructure required to build, validate, and release a wide variety of hardware and software products and projects. It works closely with development and quality teams to derive infrastructure design requirements; builds, tests, and automates tools appropriate to the project; and/or implements and maintains those systems within the constraints imposed by Intel enterprise infrastructure (IT) and other governing bodies. It owns the end-to-end delivery pipeline, including source code management, versioning/tagging strategy, component build and packaging, test automation tooling, release staging, acceptance and/or indicators, required security and IP scans, any third-party conformance tools, artifact storage and distribution, and disaster recovery planning. It identifies opportunities and implements solutions for increased automation, reliability, and/or velocity within the pipeline through implementation of robust infrastructure telemetry, KPIs, and indicators, and by monitoring and applying industry best practices.
Key Responsibilities:
•Jenkins Management: Oversee the maintenance of Jenkins servers, including updates, plugin management, and cloud node configurations. Design, implement, and maintain complex Jenkins pipelines and jobs. Deep, hands-on ability in troubleshooting and resolving issues by debugging pipelines, identifying root causes, and implementing fixes. Manage Jenkins configurations using Jenkins as Code to ensure scalability and consistency.
•AWS Infrastructure Management: Manage AWS services, including EC2 instances, AMI images, Load Balancers, and Auto Scaling Groups, to ensure a robust DevOps environment. Configure and manage EFS (Elastic File System) and perform backup and restore operations. Oversee the use of Docker images, ECR (Elastic Container Registry), ECS (Elastic Container Service), and Kubernetes for container orchestration. Implement and manage security using AWS Secrets Manager, IAM, and VPC. Automate infrastructure management with CloudFormation.
•Source Control and Artifact Management: Manage Git/Gerrit repositories to ensure seamless collaboration and code integration. Oversee the usage of JFrog Artifactory for efficient artifact storage and distribution.
•Automation and Scripting: Use Python scripting to automate DevOps tasks, monitor systems, and manage deployments.
•Leadership and Communication: Lead and guide teams in achieving both short-term and long-term DevOps goals, driving efficiency and improvements. Communicate clearly with team members and other stakeholders, ensuring transparency in process and progress.
Required Skills and Qualifications:
•10+ years of hands-on DevOps experience
•5+ years of experience in a lead role managing a set of DevOps engineers
•Proven experience managing Jenkins servers, complex pipelines, and Jenkins Configuration as Code
•In-depth experience with AWS cloud infrastructure and services such as EC2, AMI, EFS, ECS, ECR, VPC, CloudFormation, IAM, and Kubernetes
•Strong understanding of Docker and containerization practices
•Experience in managing Git/Gerrit repos and using JFrog Artifactory for artifact management
•Proficiency in Python for scripting automation tasks
•Strong troubleshooting and debugging skills with the ability to identify and resolve complex issues
•Ability to work collaboratively in a fast-paced environment, providing leadership and guidance to achieve team objectives
•Excellent communication skills, both written and verbal, with the ability to effectively communicate with cross-functional teams
•Certifications in AWS (e.g., AWS Certified Solutions Architect) or relevant DevOps tools
•Familiarity with other DevOps tools like Terraform, Ansible, or Chef is a plus
Inside this Business Group:
The Client Computing Group (CCG) is responsible for driving business strategy and product development for Intel's PC products and platforms, spanning form factors such as notebooks, desktops, 2 in 1s, and all-in-ones. Working with our partners across the industry, we intend to deliver purposeful computing experiences that unlock people's potential, allowing each person to use our products to focus, create, and connect in ways that matter most to them. As the largest business unit at Intel, CCG is investing more heavily in the PC, ramping its capabilities even more aggressively, and designing the PC experience even more deliberately, including delivering a predictable cadence of leadership products. As a result, we are able to fuel innovation across Intel, providing an important source of IP and scale, as well as help the company deliver on its purpose of enriching the lives of every person on earth.
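As an example of the "Python scripting to automate DevOps tasks" responsibility, here is a hedged boto3 sketch that flags (and, outside dry-run mode, deregisters) AMIs older than a retention window; the 30-day policy is a hypothetical value, not a stated requirement of the role.

```python
from datetime import datetime, timedelta, timezone

import boto3

RETENTION_DAYS = 30  # hypothetical retention policy


def find_stale_amis(dry_run: bool = True) -> None:
    """List account-owned AMIs older than RETENTION_DAYS and optionally deregister them."""
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    images = ec2.describe_images(Owners=["self"])["Images"]
    for image in images:
        # CreationDate is an ISO-8601 string ending in "Z"
        created = datetime.fromisoformat(image["CreationDate"].replace("Z", "+00:00"))
        if created < cutoff:
            print(f"stale AMI: {image['ImageId']} created {image['CreationDate']}")
            if not dry_run:
                ec2.deregister_image(ImageId=image["ImageId"])


if __name__ == "__main__":
    find_stale_amis(dry_run=True)
```

Run it in dry-run mode first; a production version would also handle the snapshots backing each AMI and honour any "keep" tags.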
Posted 3 months ago
6 - 11 years
8 - 14 Lacs
Bengaluru
Work from Office
About The Role:
•Hands-on experience with Dell EMC storage products (PMAX, Unity, ECS, Isilon)
•Hands-on experience with NetApp NAS
•Strong understanding of storage replication
•Experience with migration and decommissioning projects will be an added advantage
Primary Skills: Dell EMC storage products (PMAX, Unity, ECS, Isilon), NetApp NAS
Posted 3 months ago
10 - 14 years
12 - 16 Lacs
Pune
Work from Office
Client expectations beyond the JD: longer AWS data engineering experience (Glue, Spark, ECR/ECS, Docker), Python, PySpark, Hudi/Iceberg/Terraform, Kafka. Java early in the career would be a great addition but not a priority (for the OOP part and Java connectors).
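A minimal PySpark sketch of the kind of batch aggregation such a role might involve is shown below; the S3 paths, column names, and schema are hypothetical, and a real Glue job would receive its locations as job arguments.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical S3 locations; a Glue job would receive these as job arguments.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Aggregate raw order events into daily revenue per country.
daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "country")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders_daily/"
)
spark.stop()
```

A table format such as Hudi or Iceberg would replace the plain Parquet write when upserts or time travel are needed.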
Posted 3 months ago
5 - 10 years
6 - 11 Lacs
Bengaluru
Work from Office
Job Title: Integration Engineer
Job Description: We are seeking a highly skilled Integration Engineer with expertise in MuleSoft and Camel frameworks, along with the ability to build containerized applications on AWS using Java. As an Integration Engineer, you will play a crucial role in designing, developing, and maintaining integration solutions that connect various systems and applications.
Responsibilities:
•Design, develop, and implement integration solutions using MuleSoft and Camel frameworks.
•Collaborate with cross-functional teams to gather integration requirements and translate them into technical designs.
•Build and configure integration flows using MuleSoft/Camel, APIs, and connectors to enable seamless data exchange between systems.
•Develop custom components, transformers, and adapters to meet specific integration needs.
•Troubleshoot and resolve issues related to integration processes, ensuring high system availability and performance.
•Containerise applications using Docker and deploy them on AWS ECS or EKS.
•Utilise AWS services such as EC2, S3, RDS, and Lambda to build scalable and reliable applications.
•Build pipelines to automate deployment processes and ensure continuous integration and delivery.
•Conduct code reviews, perform unit testing, and ensure adherence to coding standards and best practices.
•Stay updated with the latest industry trends and advancements in integration technologies, AWS services, and containerisation techniques.
Requirements:
•Bachelor's degree in Computer Science, Engineering, or a related field.
•Proven experience as an Integration Engineer, preferably with expertise in MuleSoft and Camel frameworks.
•Strong knowledge of the Java programming language and experience with building Java-based applications.
•Proficiency in designing and implementing integration solutions using MuleSoft Anypoint Platform or Apache Camel.
•Familiarity with AWS services and experience in deploying containerised applications on AWS ECS or EKS.
•Solid understanding of containerisation concepts and hands-on experience with Docker.
•Experience with CI/CD pipelines and tools like Jenkins or GitLab CI.
•Strong problem-solving skills and ability to troubleshoot complex integration issues.
•Excellent communication and collaboration skills to work effectively with cross-functional teams.
•Ability to adapt to changing priorities and work in a fast-paced environment.
Preferred Qualifications:
•MuleSoft Certified Developer or Architect certification.
•Experience with other integration platforms such as Apache Kafka.
•Knowledge of microservices architecture and RESTful API design principles.
•Familiarity with infrastructure-as-code tools like Terraform or AWS CloudFormation.
Posted 3 months ago
12 - 17 years
14 - 19 Lacs
Mumbai
Hybrid
Design, develop, and deploy high-quality Java-based microservices. Write clean, testable, and efficient code. Utilize Spring Boot and related frameworks. Work with AWS services (e.g., Lambda, ECS, S3, API Gateway). Proficiency in RESTful API design.
Required Candidate Profile: We need candidates with at least 12+ years of Java experience, well versed in AWS services (Lambda, ECS, API Gateway, and S3), with exposure to the Payments/Cards domain.
Posted 3 months ago
12 - 17 years
14 - 19 Lacs
Bangalore Rural
Hybrid
Design, develop, and deploy high-quality Java-based microservices. Write clean, testable, and efficient code. Utilize Spring Boot and related frameworks. Work with AWS services (e.g., Lambda, ECS, S3, API Gateway). Proficiency in RESTful API design.
Required Candidate Profile: We need candidates with at least 12+ years of Java experience, well versed in AWS services (Lambda, ECS, API Gateway, and S3), with exposure to the Payments/Cards domain.
Posted 3 months ago
12 - 17 years
14 - 19 Lacs
Pune
Hybrid
Design, develop, and deploy high-quality Java-based microservices. Write clean, testable, and efficient code. Utilize Spring Boot and related frameworks. Work with AWS services (e.g., Lambda, ECS, S3, API Gateway). Proficiency in RESTful API design.
Required Candidate Profile: We need candidates with at least 12+ years of Java experience, well versed in AWS services (Lambda, ECS, API Gateway, and S3), with exposure to the Payments/Cards domain.
Posted 3 months ago
7 - 9 years
0 Lacs
Pune
Hybrid
•In-depth knowledge of AWS services including EC2, S3, RDS, Lambda, ACM, SSM, and IAM.
•Experience with Kubernetes (EKS) and Elastic Container Service (ECS) for orchestration and deployment of microservices. Engineers are expected to be able to execute upgrades independently.
•Cloud Architecture: Proficient knowledge of AWS advanced networking services, including CloudFront and Transit Gateway.
•Monitoring & Logging: Knowledge of AWS CloudWatch, CloudTrail, OpenSearch, and Grafana monitoring tools.
•Security Best Practices: Understanding of AWS security features and compliance standards.
•API: REST API/OneAPI - relevant experience mandatory.
•Infrastructure as Code (IaC): Proficient in AWS CloudFormation and Terraform for automated provisioning.
•Scripting Languages: Proficient in common languages (PowerShell, Python, and Bash) for automation tasks.
•CI/CD Pipelines: Familiar with tools like Azure DevOps Pipelines for automated testing and deployment.
•Relevant Experience: A minimum of 4-5 years of experience in a comparable Cloud Engineer role.
Nice to Have:
•Knowledge of / hands-on experience with Azure services
•Agile Frameworks: Proficient knowledge of Agile ways of working (Scrum, SAFe)
•Certification - for AWS, at least: Certified Cloud Practitioner + Certified Solutions Architect Associate + Certified Solutions Architect Professional. For Azure, at least: Microsoft Certified: Azure Solutions Architect Expert.
Mindset: Platform engineers must focus on automating activities where possible, to ensure stability, reliability, and predictability.
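To illustrate the scripting-plus-monitoring expectations above, here is a hedged boto3 sketch that creates a CloudWatch CPU alarm for an EC2 instance and routes it to an SNS topic; the instance ID and topic ARN in the usage example are placeholders.

```python
import boto3


def create_cpu_alarm(instance_id: str, topic_arn: str) -> None:
    """Alarm when average CPU stays above 80% for two consecutive 5-minute periods."""
    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # e.g. an SNS topic that pages the on-call engineer
    )


if __name__ == "__main__":
    # Placeholder identifiers for illustration only
    create_cpu_alarm("i-0123456789abcdef0", "arn:aws:sns:eu-west-1:111122223333:ops-alerts")
```

In an IaC-first setup the same alarm would more likely be declared in Terraform or CloudFormation; the script form is useful for ad-hoc automation and bulk changes.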
Posted 3 months ago
6 - 11 years
15 - 30 Lacs
Chennai, Pune, Bengaluru
Hybrid
Strong understanding of global payments processing and E2E payment workflows, SWIFT cross-border processing (MT & MX) & ISO 20022 message processing, clearing & settlement systems processes, payments interface data mapping and solution design, E2E imp
Required Candidate Profile: Domestic clearing such as NEFT, ECS, ACH, CHAPS / FEDWIRE / CHIPS / Target2, RTGS; RTP schemes like NPP/FAST/IMPS/InstaPay etc.; SEPA (Direct Debits, Credit Transfers, Mandate Management, Instant Payments
Posted 3 months ago
11 - 18 years
25 - 40 Lacs
Navi Mumbai, Mumbai (All Areas)
Work from Office
Role & responsibilities
Key Responsibilities:
•Cloud Architecture & Design: Develop and implement cloud architecture strategies based on business requirements, security, and cost-optimization best practices. Collaborate across teams to help optimise overall application and cloud architecture.
•AWS Services Management: Design and manage cloud environments using core AWS services such as EC2, S3, VPC, Lambda, RDS, IAM, CloudFormation, ECS/EKS, and others.
•Infrastructure as Code (IaC): Understanding of automation using Terraform, AWS CloudFormation, and AWS CDK to provision and manage cloud infrastructure.
•Security & Compliance: Ensure security best practices, IAM policies, encryption standards, and compliance requirements (e.g., ISO, SOC 2, HIPAA) are met.
•Migration & Modernization: Lead cloud migration projects, including re-hosting, re-platforming, and re-architecting workloads for AWS (i.e., the 6 Rs).
•Cost Optimization: Optimize cloud costs by leveraging AWS pricing models, Reserved Instances, Spot Instances, and rightsizing resources.
•DevOps & Automation: Work with DevOps teams to integrate CI/CD pipelines, containerization, serverless computing, and monitoring solutions.
•Disaster Recovery & High Availability: Design and implement multi-region, multi-account strategies for business continuity and failover.
•Collaboration & Leadership: Provide guidance to development, operations, and security teams on AWS best practices and cloud strategies.
Requirements:
•5+ years of hands-on AWS cloud architecture experience in an enterprise environment.
•Expertise in AWS services, networking (VPC, Direct Connect, Transit Gateway), security (IAM, KMS, WAF), and automation (Terraform, CloudFormation).
•Strong knowledge of Linux/Windows environments, scripting (Python, Bash), and containerization (Docker, Kubernetes, AWS ECS/EKS).
•Experience with monitoring and logging tools like AWS CloudWatch, ELK Stack, Prometheus, and New Relic.
•Strong understanding of cloud financial management (FinOps) principles.
•Excellent problem-solving, communication, and stakeholder management skills.
Qualifications (Good to Have):
•Experience working in multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
•Knowledge of AWS AI/ML, IoT, and Data Analytics services.
•Familiarity with Zero Trust security models and cloud-native security solutions.
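As one concrete angle on the cost-optimization responsibility, below is a hedged sketch using the AWS Cost Explorer API via boto3 to summarise last month's unblended cost per service; it assumes Cost Explorer is enabled on the account and is illustrative rather than a prescribed tool for this role.

```python
from datetime import date, timedelta

import boto3


def monthly_cost_by_service() -> None:
    """Print last calendar month's unblended cost per AWS service."""
    ce = boto3.client("ce")
    end = date.today().replace(day=1)                 # first day of the current month
    start = (end - timedelta(days=1)).replace(day=1)  # first day of the previous month
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in response["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:.2f}")


if __name__ == "__main__":
    monthly_cost_by_service()
```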
Posted 3 months ago
8 - 13 years
16 - 25 Lacs
Bengaluru
Work from Office
•Foundational services: Amazon Elastic Compute Cloud (EC2), Amazon Virtual Private Cloud (VPC), Amazon Simple Storage Service (S3), and Amazon Elastic Block Store (EBS)
•Strong notions of security best practices (e.g., using IAM Roles, KMS, etc.)
•Strong knowledge of writing infrastructure as code (IaC) using
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
SRE Engineer
The SRE Engineer works with various areas of the business to collaborate on an infrastructure strategy that is secure, scalable, high performance, and aligned with the goal of continuous integration and continuous deployment. This team member is dedicated to security projects and is responsible for helping build the standards for infrastructure, deployment, and security implementations with a keen eye toward the future state of technology and the industry. He/she reports directly to the CISO and works closely with senior DevOps engineers and other Cloud Operations teams to build the frameworks that are adopted for future projects and processes, specifically as it relates to security. This team member is future-focused, capable of moving quickly and taking risks, and challenging the status quo.
Responsibilities:
•Analyze current technology utilized within the company and develop steps and processes to improve and expand upon it
•Provide clear goals for all areas of a project and develop steps to oversee their timely execution
•Work closely with development teams within the company to maintain hardware and software needed for projects to be completed efficiently
•Participate in a constant feedback loop among the community of Cloud Operations teams and enterprise architecture teams
•Work with software development teams to engineer and implement infrastructure solutions, including infrastructure automation and CI/CD pipelines
•Provide evangelism for cutting-edge, sustainable automation in continuously integrating and deploying to multiple environments
Requirements:
•Ability to build integrations between applications using an Application Programming Interface (API)
•4 years of recent experience with cloud platforms such as AWS or Microsoft Azure (AWS preferred)
•Some recent experience with infrastructure as code (Terraform, CloudFormation, or AWS CDK preferred)
•Demonstrated ability to leverage scripting languages such as PowerShell and Bash to automate processes; other coding languages a plus
•Some software development experience preferred, including UI, database, and backend systems
•General understanding of tools, applications, and architectural patterns associated with CI/CD and cloud development
•Strong understanding of security tenets
•Ability to think analytically and advocate for creative solutions
•Ability to work collaboratively with members of other teams
•Excellent written and verbal communication skills
•Delivery experience as a software engineer for on-premises or cloud applications
•Strong knowledge of AWS services including EC2, ECS, VPC, IAM, Control Tower, CloudFormation, Organizations, Systems Manager, AWS Backup, AWS Instance Scheduler, ELB, and RDS
•Hands-on experience with AWS provisioning and infrastructure automation using Terraform and CloudFormation templates/stacks
•Experience configuring and managing SSO roles in AWS
•Proficient in Python, Shell, YAML, and PowerShell scripting
•Extensive hands-on experience in automation and streamlining processes
•Skilled in vulnerability remediation and security best practices
•Experience with ECS repositories and GitHub for version control and CI/CD pipelines
•Proficient in Windows and Linux server administration
•Experience with Active Directory (AD) and RBAC (Role-Based Access Control) roles
•Hands-on experience with monitoring tools like CloudWatch, Nagios, Observium, and Kibana logs
•Good knowledge of VMware and virtual machines
•Experience in OS patching, upgrades, and configuring Stunnel or site-to-site VPN tunnels
•Strong understanding of change management processes, SLAs, and tools like Jira (Kanban boards and sprints)
•Knowledge of implementing High Availability (HA), Fault Tolerance (FT), and Disaster Recovery (DR) strategies in the cloud
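To ground the automation and AWS Instance Scheduler themes above, here is a hedged boto3 sketch that stops running EC2 instances carrying a hypothetical Environment=nonprod tag; a managed solution such as AWS Instance Scheduler would normally handle this, so treat the script as illustrative only.

```python
import boto3


def stop_nonprod_instances(dry_run: bool = True) -> None:
    """Stop running instances tagged Environment=nonprod (hypothetical tag)."""
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["nonprod"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"] for res in reservations for inst in res["Instances"]
    ]
    if instance_ids and not dry_run:
        ec2.stop_instances(InstanceIds=instance_ids)
    print(f"{'Would stop' if dry_run else 'Stopped'} {len(instance_ids)} instance(s)")


if __name__ == "__main__":
    stop_nonprod_instances(dry_run=True)
```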
Posted 3 months ago
3 - 8 years
5 - 10 Lacs
Pune
Work from Office
Project Role: Technology Delivery Lead
Project Role Description: Manages the delivery of large, complex technology projects using appropriate frameworks and collaborating with sponsors to manage scope and risk. Drives profitability and continued success by managing service quality and cost and leading delivery. Proactively supports sales through innovative solutions and delivery excellence.
Must-have skills: SAP SuccessFactors Employee Central
Good-to-have skills: SAP SuccessFactors Employee Central Time Off
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As a Technology Delivery Lead, you will manage the delivery of large, complex technology projects using appropriate frameworks and collaborating with sponsors to manage scope and risk. You will drive profitability and continued success by managing service quality and cost and leading delivery, and proactively support sales through innovative solutions and delivery excellence.
Roles & Responsibilities:
•Expected to perform independently and become an SME.
•Required active participation/contribution in team discussions.
•Contribute to providing solutions to work-related problems.
•Lead and oversee the successful delivery of technology projects.
•Collaborate with sponsors to manage project scope and mitigate risks.
•Ensure service quality and cost management for project success.
•Proactively support sales initiatives with innovative solutions.
•Provide leadership in delivery excellence.
Professional & Technical Skills:
•Must-have skills: Proficiency in SAP SuccessFactors Employee Central.
•Strong understanding of SAP SuccessFactors Employee Central Time Off.
•Experience in project management and delivery of technology projects.
•Knowledge of frameworks for managing large technology projects.
•Excellent communication and leadership skills.
Additional Information:
•The candidate should have a minimum of 3 years of experience in SAP SuccessFactors Employee Central.
•This position is based at our Pune office.
•15 years of full-time education is required.
Posted 3 months ago
5 - 7 years
20 - 25 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Responsibilities:
•Design and Development: Develop robust, scalable, and maintainable backend services using Python frameworks like Django, Flask, and FastAPI.
•Cloud Infrastructure: Work with AWS services (e.g., CloudWatch, S3, RDS, Neptune, Lambda, ECS) to deploy, manage, and optimize our cloud infrastructure.
•Software Architecture: Participate in defining and implementing software architecture best practices, including design patterns, coding standards, and testing methodologies.
•Database Management: Proficiently work with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune) to design and optimize data models and queries. Experience with ORM tools.
•Automation: Design, develop, and maintain automation scripts (primarily in Python) for various tasks, including data updates and processing; scheduling cron jobs; integrating with communication platforms like Slack and Microsoft Teams for notifications and updates; and implementing business logic through automated scripts.
•Monitoring and Logging: Implement and manage monitoring and logging solutions using tools like the ELK stack (Elasticsearch, Logstash, Kibana) and AWS CloudWatch.
•Production Support: Participate in on-call rotations and provide support for production systems, troubleshooting issues and implementing fixes. Proactively identify and address potential production issues.
•Team Leadership and Mentorship: Lead and mentor junior backend developers, providing technical guidance and code reviews, and supporting their professional growth.
Required Skills and Experience:
•5+ years of experience in backend software development.
•Strong proficiency in Python and at least two of the following frameworks: Django, Flask, FastAPI.
•Hands-on experience with AWS cloud services, including ECS.
•Experience with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune).
•Strong experience with monitoring and logging tools, specifically the ELK stack and AWS CloudWatch.
Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Work Timings: 2:30 PM - 11:30 PM (Monday-Friday)
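As a small example of the Slack/Teams notification automation mentioned above, here is a hedged sketch that posts a status message to a Slack incoming webhook using requests; the webhook URL environment variable and the message text are hypothetical.

```python
import os

import requests

# Hypothetical environment variable holding a Slack incoming-webhook URL
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]


def notify_slack(message: str) -> None:
    """Post a short status update to a Slack channel via an incoming webhook."""
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()


if __name__ == "__main__":
    notify_slack("Nightly data update finished: 42 records refreshed.")  # example message
```

The same pattern works for Microsoft Teams by pointing the POST at a Teams incoming-webhook connector and adjusting the JSON payload accordingly.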
Posted 3 months ago
7 - 9 years
10 - 12 Lacs
Hyderabad
Work from Office
5+ years of experience. Harness is a must. Good soft skills.
Foundational skills: networking and DNS, CI/CD, Jenkins and Harness, Terraform (without this we cannot proceed). Candidates have to understand routing and how networking works. They should understand containers and how they move within Kubernetes.
AWS: EKS, Lambda functions, ECS, Python scripts (good to know).
GCP: GKE, batch processing, API gateway, secret management, secure infra management, PostgreSQL, SQL (good to know).
Support multiple applications.
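For the Kubernetes (EKS/GKE) portion above, a minimal, hedged sketch using the official Kubernetes Python client is shown below; it simply lists pods in a namespace via the current kubeconfig context and is illustrative rather than part of the actual support tooling.

```python
from kubernetes import client, config


def list_pods(namespace: str = "default") -> None:
    """List pods in an EKS/GKE cluster using the current kubeconfig context."""
    # Assumes `aws eks update-kubeconfig` (or `gcloud container clusters get-credentials`)
    # has already populated ~/.kube/config.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(f"{pod.metadata.name}  {pod.status.phase}")


if __name__ == "__main__":
    list_pods("default")
```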
Posted 3 months ago