8.0 - 12.0 years
11 - 16 Lacs
Hyderabad
Work from Office
Overview
Responsible for infrastructure engineering and DevOps tasks for PepsiCo e-commerce. The person will also lead the four-member capability team in India.

Responsibilities
- Deploy infrastructure in Azure and AWS using Terraform and infrastructure-as-code best practices.
- Participate in the development of CI/CD workflows that take applications from build to deployment using modern DevOps tools such as Kubernetes, ArgoCD/Flux, Terraform, and Helm.
- Ensure the highest possible uptime for our Kubernetes-based developer productivity platforms (see the sketch below).
- Partner with development teams to recommend best practices for application uptime and for cloud-native infrastructure architecture.
- Collaborate in infrastructure and application architecture discussions and decision making as part of continually improving and expanding these platforms.
- Automate everything. Focus on creating tools that make your life easy and benefit the entire organization and business.
- Evaluate and support onboarding of third-party SaaS applications, or work with teams to integrate new tools and services into existing apps.
- Create documentation, runbooks, disaster recovery plans, and processes.
- Collaborate with application development teams to perform root cause analysis (RCA).
- Implement and manage threat detection protocols, processes, and systems.
- Conduct regular vulnerability assessments and ensure timely remediation of flagged incidents.
- Ensure compliance with internal security policies and external regulations such as PCI.
- Lead the integration of security tools such as Wiz, Snyk, and Datadog within the PepsiCo infrastructure.
- Coordinate with PepsiCo's broader security teams to align Digital Commerce security practices with corporate standards.
- Provide security expertise and support to various teams within the organization.
- Advocate and enforce security best practices, such as RBAC and the principle of least privilege.
- Continuously review, improve, and document security policies and procedures.
- Participate in the on-call rotation to support our NOC and incident management teams.

Qualifications
- BSc/MSc in computer science, software engineering, or a related field is a plus; completion of a DevOps or infrastructure training course or bootcamp is also acceptable.
- 8+ years of Kubernetes, ideally running workloads in a production environment on AKS or EKS.
- 4+ years of creating CI/CD pipelines in any templatized format in GitHub, GitLab, or Azure DevOps.
- 3+ years of Python, Bash, and any other OOP language. (Please be prepared for a coding assessment in your language of choice.)
- 5+ years of experience deploying infrastructure to Azure platforms.
- 3+ years of experience using Terraform or writing Terraform modules.
- 3+ years of experience with Git, GitLab, or GitHub.
- 2+ years of experience as an SRE or supporting microservices in a containerized environment such as Nomad, Docker Swarm, or Kubernetes.
- Kubernetes certifications such as KCNA, KCSA, CKA, CKAD, or CKS preferred.
- Good understanding of the software development lifecycle.
- Familiarity with: Site Reliability Engineering; AWS, Azure, or similar cloud platforms; automated build processes and tools; service meshes such as Istio and Linkerd; monitoring tools such as Datadog and Splunk.
- Able to administer and run basic SQL queries in Postgres, MySQL, or any relational database.
- Current skills in the following technologies: Kubernetes, Terraform, AWS or Azure (Azure preferred), GitHub Actions or GitLab workflows.
Familiar with Agile processes and tools such as Jira; good to have experience being part of Agile teams, continuous integration, automated testing, and test-driven development
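As an illustration of the uptime work described above, here is a minimal sketch, in Python with the official kubernetes client, that flags deployments whose ready replica count has fallen below spec; the kubeconfig context and output format are assumptions, not part of the posting.

```python
# Minimal sketch: report Kubernetes deployments that are not fully ready.
# Assumes a reachable cluster and the official `kubernetes` Python client.
from kubernetes import client, config


def report_unready_deployments() -> None:
    config.load_kube_config()  # inside a pod, use config.load_incluster_config()
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            print(f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired} ready")


if __name__ == "__main__":
    report_unready_deployments()
```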
Posted 1 week ago
5.0 - 8.0 years
15 - 25 Lacs
Pune
Hybrid
So, what's the role all about?
We are seeking a skilled and experienced developer with expertise in .NET programming, along with knowledge of LLMs and AI, to join our dynamic team. As a Contact Center Developer, you will be responsible for developing and maintaining contact center applications, with a specific focus on AI functionality. Your role will involve designing and implementing robust and scalable AI solutions, ensuring an efficient agent experience. You will collaborate closely with cross-functional teams, including software developers, system architects, and managers, to deliver cutting-edge solutions that enhance our contact center experience.

How will you make an impact?
- Develop, enhance, and maintain contact center applications with an emphasis on copilot functionality.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Perform system analysis, troubleshooting, and debugging to identify and resolve issues.
- Conduct regular performance monitoring and optimization of code to ensure optimal customer experiences.
- Maintain documentation, including technical specifications, system designs, and user manuals.
- Stay up to date with industry trends and emerging technologies in contact center, AI, LLM, and .NET development and apply them to enhance our systems.
- Participate in code reviews and provide constructive feedback to ensure high-quality code standards.
- Deliver high-quality, sustainable, maintainable code.
- Participate in reviewing design and code (pull requests) for other team members, again with a secure-code focus.
- Work as a member of an agile team responsible for product development and delivery.
- Adhere to agile development principles while following and improving all aspects of the Scrum process.
- Follow established department procedures, policies, and processes.
- Adhere to the company Code of Ethics and CXone policies and procedures.
- Excellent English and experience working in international teams are required.

Have you got what it takes?
- BS or MS in Computer Science or a related degree.
- 5-8 years' experience in software development.
- Strong knowledge of working with and developing microservices.
- Design, develop, and maintain scalable .NET applications specifically tailored for contact center copilot solutions using LLM technologies.
- Good understanding of .NET and design patterns, and experience implementing them.
- Experience developing with REST APIs.
- Integrate various components, including LLM tools, APIs, and third-party services, within the .NET framework to enhance functionality and performance.
- Implement efficient database structures and queries (SQL/NoSQL) to support high-volume data processing and real-time decision-making capabilities.
- Utilize Redis for caching frequently accessed data and optimizing query performance, ensuring scalable and responsive application behavior.
- Identify and resolve performance bottlenecks through code refactoring, query optimization, and system architecture improvements.
- Conduct thorough unit testing and debugging of applications to ensure reliability, scalability, and compliance with specified requirements.
- Utilize Git or a similar version control system to manage source code and coordinate with team members on collaborative projects.
- Experience with Docker/Kubernetes is a must.
- Experience with the Amazon Web Services (AWS) cloud is a must.
- Experience with AWS cloud on any technology (Kafka, EKS, and Kubernetes preferred).
- Experience with continuous integration workflows and tooling.
- Stay updated with industry trends, emerging technologies, and best practices in .NET development and LLM applications to drive innovation and efficiency within the team.

You will have an advantage if you also have:
- Strong communication skills.
- Experience with a cloud service provider such as Amazon Web Services (AWS), Google Cloud, Azure, or an equivalent provider.
- Experience with ReactJS.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7443
Reporting into: Sandip Bhattcharjee
Role Type: Individual Contributor
Posted 1 week ago
6.0 - 11.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Overview
Cigna International Health is initiating a project to modernise its portal and self-service application to bolster the expansion of our health businesses across the globe. We're actively seeking accomplished leaders to champion our vision and steer us towards building a mobile platform for serving Cigna's customers all over the world. We are seeking an experienced Software Engineer to drive our front-end software development efforts in creating high-quality web and mobile solutions. The ideal candidate will engineer technical solutions, produce clean code, and ensure successful delivery of software solutions aligned with business goals.

Responsibilities
- Technical Leadership: Provide direction and be responsible for the output of the frontend discipline within application development. Implement the software engineering strategy, ensuring that it aligns with the overall business and product objectives. Own the frontend application development capability for our web portal solution, aligned with the product vision as defined by the solution product owner. Contribute to the definition of application development policies, standards, and procedures.
- Mentoring: Lead and mentor junior software development team members, fostering a culture of innovation, automation, collaboration, and excellence. Take an active part in the career development and performance of junior software development team members.
- Project Delivery: Execute software projects, ensuring they are delivered on time, within budget, and to quality standards. Develop solutions using TDD methodology. Execute project plans and application designs to ensure projects are aligned with standards and IT strategy.
- Architecture and Development: Guide the design principles and development processes to ensure scalable, secure, and efficient solutions, collaborating with other senior leads.
- Operational Efficiency: Implement DevSecOps to streamline processes, tools, and workflows to optimize engineering operations and enhance productivity.
- Experience: Proven experience (6 years) in a senior role within software development for web portals and user interfaces, with a strong technical background.
- Technical Acumen: Extensive knowledge of software development methodologies, source code management strategies, design patterns, DevOps, automation, and best practices. Ability to translate non-functional requirements such as availability, flexibility, stability, ease of maintenance, and security.
- Technologies covered: Strong experience implementing software using the ReactJS framework, TypeScript, web servers, relational and non-relational databases, and testing strategies. Experience with a cloud platform such as AWS and the services available there to build and host applications; key services include S3, Lambda, CloudFront, API Gateway, DynamoDB/RDS, IAM, and KMS. Experience with ECS/EKS, Docker, and Kubernetes is an advantage. Experience building Infrastructure as Code using Terraform or CloudFormation is an advantage. Experience building and deploying application code and configuring CI/CD pipelines using tools such as Jenkins, GitHub Actions, GitLab CI, or Bamboo CI. Experience working in agile teams with an understanding of iterative delivery, fail-early and fail-fast, and continuous improvement.
- Leadership Skills: Good leadership, mentoring, and communication skills to guide and inspire junior technical team members.
- Education: Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Optional
- Global and regulatory landscapes: Understanding and experience of working practices across multiple geographies. Experience with regional nuances such as tax rules, regulatory interfaces, multi-currency, and multi-language is an advantage.
- Aware of the concrete effects of architectural decisions, specifically microservice architecture, at the code level, in collaboration with other team members.
- Desirable: experience using Jira.

About The Cigna Group
Cigna Healthcare, a division of The Cigna Group, is an advocate for better health through every stage of life. We guide our customers through the health care system, empowering them with the information and insight they need to make the best choices for improving their health and vitality. Join us in driving growth and improving lives.
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
About the Role:
Grade Level (for internal use): 08

One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you the opportunity to work on Ratings and Research related data. You will get to work on cutting-edge big data technologies and will be responsible for development of both data feeds and API work.

Location: Hyderabad

The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide Ratings and Research information to clients. Our work deals with content ingestion, data feed generation, and exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible.

Impact: As a member of the Xpressfeed team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application design, write high-quality code, and innovate on how to improve overall system performance and the customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions, and code, and understands the value of cloud solutions, we would like to talk to you.

What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development.

Responsibilities:
- Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the data feeds.
- Design, implement, and test solutions using AWS EMR for content ingestion.
- Work on complex SQL Server projects involving high-volume data.
- Engineer components and common services based on standard corporate development models, languages, and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders.
- Must be able to document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 3-6 years of experience in application development.
- Minimum of 2 years of hands-on experience with Scala.
- Minimum of 2 years of hands-on experience with Microsoft SQL Server.
- Solid understanding of Amazon Web Services (AWS) and cloud-based development.
- In-depth knowledge of system architecture, object-oriented programming, and design patterns.
- Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing.

Preferred Qualifications:
- Familiarity with AWS services such as EMR, Auto Scaling, and EKS.
- Working knowledge of Snowflake.
- Preferred experience in Python development.
- Familiarity with the Financial Services domain and Capital Markets is a plus.
- Experience developing systems that handle large volumes of data and require high computational performance.
Posted 1 week ago
4.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Role: Pricing

General:
- Knowledge of mathematics, probability, statistics, and the commercial insurance business.
- Knowledge of insurance processes and pricing methodologies (conventional and innovative approaches).

Mandatory:
- Work experience in pricing for commercial insurance risks (e.g. property LOB).
- Work experience in pricing modelling using exposure and experience rating methodologies.
- Should have in-depth knowledge of pricing methodologies using the EMBLEM and RADAR tools.
- Should have built pricing tools/raters in Excel.
- Should have knowledge of frequency, severity, and loss cost modelling (see the sketch below).

Additional good-to-have requirements:
- Knowledge of actuarial tools (EMBLEM/RADAR), data mining tools like SQL/R/Python, and automation using VBA macros.
- Knowledge of ST-8 (General Insurance Pricing) actuarial science would be preferable.
- Work experience in GLM modelling (frequency and severity).
- Work experience in impact analysis using the RADAR tool would be an added advantage.

Screening parameters:
- Work experience in general insurance pricing.
- Knowledge of pricing exposure and experience rating techniques (familiar with key concepts like LDF, ILF, and loss curves).
- Cleared or appeared for the actuarial science exam ST-8 (General Insurance Pricing).
- Modelling (frequency and severity modelling) using EMBLEM/R/SAS/Python.
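For illustration of the frequency/severity modelling mentioned above, a minimal Python sketch using statsmodels GLMs; the column names, rating factors, and exposure offset are hypothetical, and proprietary EMBLEM/RADAR workflows are not shown.

```python
# Minimal sketch: Poisson frequency and Gamma severity GLMs with statsmodels.
# Column names (claim_count, exposure, avg_claim_size, rating factors) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.read_csv("policies.csv")  # hypothetical rating data

# Frequency: claim counts with log(exposure) as the offset.
freq = smf.glm(
    "claim_count ~ C(occupancy) + C(region) + sum_insured_band",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()

# Severity: average claim size on claims-only records, Gamma with a log link.
claims = policies[policies["claim_count"] > 0]
sev = smf.glm(
    "avg_claim_size ~ C(occupancy) + C(region)",
    data=claims,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

print(freq.summary())
print(sev.summary())
```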
Posted 1 week ago
5.0 - 10.0 years
30 - 45 Lacs
Hyderabad, India
Hybrid
Department: Development Operations (DevOps)
Employment Type: Full Time
Location: India
Reporting To: Seenukumar Gurunatha

Description
At Vitech, we believe in the power of technology to simplify complex business processes. Our mission is to bring better software solutions to market, addressing the intricacies of the insurance and retirement industries. We combine deep domain expertise with the latest technological advancements to deliver innovative, user-centric solutions that future-proof and empower our clients to thrive in an ever-changing landscape. With over 1,600 talented professionals on our team, our innovative solutions are recognized by industry leaders like Gartner, Celent, Aite-Novarica, and ISG. We offer a competitive compensation package along with comprehensive benefits that support your health, well-being, and financial security.

Senior Site Reliability Engineer (SRE)
Location: Hyderabad (Hybrid Role)

Senior Site Reliability Engineer (SRE) - Join Our Global Engineering Team
At Vitech we believe that excellence in production systems starts with engineering-driven solutions to operational challenges. Our Site Reliability Engineering (SRE) team is at the heart of ensuring seamless performance for our clients, preventing potential outages, and proactively identifying and resolving issues before they arise. Our SRE team is a diverse group of talented engineers across India, the US, and Canada. We have T-shaped expertise spanning application development, database management, networking, and system administration across both on-premise environments and the AWS cloud. Together, we support mission-critical client environments and drive automation to reduce manual toil, freeing our team to focus on innovation.

About the Role: Senior SRE
As an SRE, you'll be a key player in revolutionizing how we operate production systems for single- and multi-tenant environments. You'll support SRE initiatives, support production, and drive infrastructure automation. Working in an Agile team environment, you'll have the opportunity to explore and implement the latest technologies, engage in on-call duties, and contribute to continuous learning as part of an ever-evolving tech landscape. If you're passionate about scalability, reliability, security, and automation of business-critical infrastructure, this role is for you.

What you will do:
- Own and manage our AWS cloud-based technology stack, using native AWS services and top-tier SRE tools to support multiple client environments with Java-based applications and a microservices architecture.
- Design, deploy, and manage AWS Aurora PostgreSQL clusters for high availability and scalability (see the sketch below).
- Optimize SQL queries, indexes, and database parameters for performance tuning.
- Automate database operations using Terraform, Ansible, AWS Lambda, and the AWS CLI.
- Manage Aurora's read replicas, auto-scaling, and failover mechanisms.
- Enhance infrastructure-as-code (IaC) patterns using technologies like Terraform, CloudFormation, Ansible, Python, and SDKs.
- Collaborate with DevOps teams to integrate Aurora with CI/CD pipelines.
- Provide full-stack support, per the assigned schedule, on applications across technologies such as Oracle WebLogic, AWS Aurora PostgreSQL, Oracle Database, Apache Tomcat, AWS Elastic Beanstalk, Docker/ECS, EC2, S3, etc.
- Troubleshoot database incidents, perform root cause analysis, and implement preventive measures.
- Document database architecture, configurations, and operational procedures.
- Ensure high availability, scalability, and performance of PostgreSQL databases on AWS Aurora.
- Monitor database health, troubleshoot issues, and perform root cause analysis for incidents.
- Embrace SRE principles such as chaos engineering, reliability, and reducing toil.

What We're Looking For:
- Proven hands-on experience as an SRE for critical, client-facing applications, with the ability to dive deep into daily SRE tasks, manage incidents, and oversee operational tools.
- 4+ years of experience managing relational databases (Oracle and/or PostgreSQL) in both cloud and on-prem environments, including SRE tasks like backup/restore, performance issues, and replication (the primary skill required for this role).
- 3+ years of experience hosting enterprise applications in AWS (EC2, EBS, ECS/EKS, Elastic Beanstalk, RDS, CloudWatch).
- Strong understanding of AWS networking concepts (VPC, VPN/DX/Endpoints, Route 53, CloudFront, Load Balancers, WAF).
- Familiarity with tools like pgAdmin, psql, or other database management utilities.
- Automate routine database maintenance tasks (e.g., vacuuming, reindexing, patching).
- Knowledge of backup and recovery strategies (e.g., pg_dump, PITR).
- Set up and maintain monitoring and alerting systems for database performance and availability (e.g., CloudWatch, Honeycomb, New Relic, Dynatrace).
- Work closely with development teams to optimize database schemas, queries, and application performance.
- Provide database support during application deployments and migrations.
- Hands-on experience with web/application layers (Oracle WebLogic, Apache Tomcat, AWS Elastic Beanstalk, SSL certificates, S3 buckets).
- Experience with containerized applications (Docker, Kubernetes, ECS).
- Leverage AWS Aurora features (e.g., read replicas, auto-scaling, multi-region deployments) to enhance database performance and reliability.
- Automation experience with Infrastructure as Code (Terraform, CloudFormation, Python, Jenkins, GitHub/Actions).
- Knowledge of multi-region Aurora Global Databases for disaster recovery.
- Scripting experience in Python, Bash, Java, JavaScript, or Node.js.
- Excellent written/verbal communication and critical thinking.
- Willingness to work in shifts and assist your team in resolving issues efficiently.

Join Us at Vitech!
At Vitech, we believe in empowering our teams to drive innovation through technology. If you thrive in a dynamic environment and are eager to drive innovation in SRE practices, we want to hear from you! You'll be part of a forward-thinking team that values collaboration, innovation, and continuous improvement. We provide a supportive and inclusive environment where you can grow as a leader while helping shape the future of our organization.
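A minimal sketch of the Aurora visibility work described above, using boto3 to list a cluster's status and writer/reader roles; the cluster identifier and region are hypothetical placeholders.

```python
# Minimal sketch: inspect an Aurora PostgreSQL cluster's status and member roles with boto3.
# The cluster identifier and region are hypothetical placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

resp = rds.describe_db_clusters(DBClusterIdentifier="example-aurora-pg")
for cluster in resp["DBClusters"]:
    print(f"cluster={cluster['DBClusterIdentifier']} status={cluster['Status']} "
          f"engine={cluster['Engine']}")
    for member in cluster["DBClusterMembers"]:
        role = "writer" if member["IsClusterWriter"] else "reader"
        print(f"  {member['DBInstanceIdentifier']}: {role}")
```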
Posted 1 week ago
5.0 - 8.0 years
13 - 17 Lacs
Pune
Work from Office
Job Information
Job Opening ID: ZR_1862_JOB
Date Opened: 13/04/2023
Industry: Technology
Job Type:
Work Experience: 5-8 years
Job Title: DevOps Architect / Consultant
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411038
Number of Positions: 4

- Design containerized and cloud-native microservices architecture.
- Plan and deploy modern application platforms and cloud-native platforms.
- Good understanding of Agile process and methodology.
- Plan and implement solutions and best practices for process automation, security, alerting and monitoring, and availability.
- Should have a good understanding of infrastructure-as-code deployments.
- Plan and design CI/CD pipelines across multiple environments.
- Support and work alongside a cross-functional engineering team on the latest technologies.
- Iterate on best practices to increase the quality and velocity of deployments.
- Sustain and improve the process of knowledge sharing throughout the engineering team.
- Keep updated on modern technologies and trends, and advocate their benefits.
- Should possess good team management skills.
- Ability to drive goals/milestones, while valuing and maintaining a strong attention to detail.
- Excellent judgement, analytical, and problem-solving skills.
- Excellent communication skills.
- Experience maintaining and deploying highly available, fault-tolerant systems at scale.
- Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.).
- Version control system experience (e.g. Git, SVN).
- Experience implementing CI/CD (e.g. Jenkins, TravisCI).
- Experience with configuration management tools (e.g. Ansible, Chef).
- Experience with infrastructure-as-code (e.g. Terraform, CloudFormation).
- Expertise with AWS (e.g. IAM, EC2, VPC, ELB, ALB, Auto Scaling, Lambda).
- Container registry solutions (Harbor, JFrog, Quay, etc.).
- Operational experience (e.g. HA/backups).
- NoSQL experience (e.g. Cassandra, MongoDB, Redis).
- Good understanding of Kubernetes networking and security best practices.
- Monitoring tools like Datadog, or other open source tools like Prometheus and Nagios.
- Load balancer knowledge (AVI Networks, NGINX).

Location: Pune / Mumbai [Work from Office]
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Mumbai
Work from Office
Job Information
Job Opening ID: ZR_1876_JOB
Date Opened: 14/04/2023
Industry: Technology
Job Type:
Work Experience: 3-5 years
Job Title: Sr DevOps Engineer
City: Mumbai
Province: Maharashtra
Country: India
Postal Code: 400008
Number of Positions: 10

- Practical experience with containerization and clustering (Kubernetes/OpenShift/Rancher/Tanzu/GKE/AKS/EKS, etc.).
- Version control system experience (e.g. Git, SVN).
- Experience implementing CI/CD (e.g. Jenkins).
- Experience with configuration management tools (e.g. Ansible, Chef).
- Container registry solutions (Harbor, JFrog, Quay, etc.).
- Good understanding of Kubernetes networking and security best practices.
- Monitoring tools like Datadog, or other open source tools like Prometheus, Nagios, and ELK.

Mandatory Skills: Hands-on experience with Kubernetes and Kubernetes networking.
Posted 1 week ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Information
Job Opening ID: ZR_1673_JOB
Date Opened: 20/12/2022
Industry: Technology
Job Type:
Work Experience: 3-5 years
Job Title: Senior DevOps Engineer
City: Hyderabad
Province: Telangana
Country: India
Postal Code: 500001
Number of Positions: 4

Roles & Responsibilities:
- 3+ years of working experience in data engineering.
- Hands-on-keyboard AWS implementation experience across a broad range of AWS services.
- Must have in-depth AWS development experience (containerization with Docker, Amazon EKS, Lambda, EC2, S3, Amazon DocumentDB, PostgreSQL).
- Strong knowledge of DevOps and CI/CD pipelines (GitHub, Jenkins, Artifactory).
- Scripting capability and the ability to develop AWS environments as code.
- Hands-on AWS experience with at least one implementation (preferably in an enterprise-scale environment).
- Experience with core AWS platform architecture, including areas such as Organizations, account design, VPC, subnet, and segmentation strategies.
- Backup and disaster recovery approach and design.
- Environment and application automation.
- CloudFormation and third-party automation approach/strategy.
- Network connectivity, Direct Connect, and VPN.
- AWS cost management and optimization.
- Skilled experience with Python libraries (NumPy, pandas DataFrames); see the sketch below.
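To illustrate the Python-plus-AWS scripting this listing asks for, a minimal sketch that loads a CSV object from S3 into a pandas DataFrame with boto3; the bucket name and object key are hypothetical.

```python
# Minimal sketch: load a CSV object from S3 into pandas using boto3.
# Bucket name and object key are hypothetical placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-data-bucket", Key="feeds/daily/records.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

print(df.shape)
print(df.describe(include="all"))
```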
Posted 1 week ago
5.0 - 8.0 years
3 - 7 Lacs
Coimbatore
Work from Office
Job Information
Job Opening ID: ZR_2228_JOB
Date Opened: 20/04/2024
Industry: Technology
Job Type:
Work Experience: 5-8 years
Job Title: Cloud Developer
City: Coimbatore
Province: Tamil Nadu
Country: India
Postal Code: 638103
Number of Positions: 4

Cloud Skills:
- AWS: Compute, Networking, Security, EC2, S3, IAM, VPC, Lambda, RDS, ECS, EKS, CloudWatch, Load Balancers, Auto Scaling, CloudFront, Route 53, Security Groups, DynamoDB, CloudTrail, REST APIs, FastAPI, Node.js (Mandatory)
- Azure (overview) (Optional)
- GCP (overview) (Optional)

Programming/IaC Skills:
- Python (Mandatory)
- Chef (Mandatory)
- Ansible (Mandatory)
- Terraform (Mandatory)
- Go (Optional)
- Java (Optional)

The candidate should have more than 4 years of cloud development experience and should presently be working on cloud development (see the sketch below).
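An illustrative sketch of the mandatory Python/FastAPI/AWS combination: a single FastAPI endpoint backed by a DynamoDB table via boto3. The table name, key schema, and region are hypothetical. It could be run locally with, for example, `uvicorn main:app --reload`.

```python
# Minimal sketch: a FastAPI endpoint backed by DynamoDB via boto3.
# Table name, key attribute, and region are hypothetical placeholders.
import boto3
from fastapi import FastAPI, HTTPException

app = FastAPI()
table = boto3.resource("dynamodb", region_name="ap-south-1").Table("example-items")


@app.get("/items/{item_id}")
def get_item(item_id: str):
    resp = table.get_item(Key={"item_id": item_id})
    item = resp.get("Item")  # note: numeric attributes come back as Decimal
    if item is None:
        raise HTTPException(status_code=404, detail="item not found")
    return item
```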
Posted 1 week ago
10.0 - 11.0 years
18 - 20 Lacs
Gurugram
Work from Office
We're Hiring: DevOps Specialist - AWS Cloud Expert

Are you an experienced DevOps engineer with a passion for building robust, scalable cloud infrastructure? We're expanding our team and looking for AWS Cloud experts who thrive in fast-paced, high-impact environments.

Role: DevOps Specialist - AWS Cloud
Location: Gurugram
Experience: 10+ years

Key Responsibilities:
- Manage and optimize AWS cloud environments for multiple clients.
- Automate high-availability (HA) clusters using Auto Scaling, ELB, and Route 53.
- Handle cost estimation and implement cost optimization strategies on AWS.
- Monitor infrastructure health with Splunk, CloudWatch, and CloudTrail.
- Ensure seamless deployment pipelines and infrastructure as code (IaC).

What We're Looking For:
- 10+ years in DevOps and AWS cloud engineering.
- Hands-on expertise in Terraform and infrastructure automation.
- Deep knowledge of AWS services: EC2, ECS, EKS, S3, RDS, VPC, IAM, Lambda, CloudFormation, and more.
- CI/CD pipeline implementation experience (Jenkins preferred).
- Agile/Scrum team collaboration experience.
- Bonus: experience with Java Spring Boot.
- Strong shell scripting for automation.

Why Join Us?
- Work with cutting-edge cloud technologies.
- Collaborate with a passionate and skilled DevOps team.
- Make a real impact with automation and innovation.

Interested? Apply here or reach out directly via DM! Know someone perfect for this role? Refer them and help shape the future of cloud!
Posted 1 week ago
7.0 - 12.0 years
18 - 30 Lacs
South Goa, Pune
Hybrid
We are looking for a DevOps leader with deep experience in the Python build/CI/CD ecosystem for an exciting and cutting-edge stealth startup in Silicon Valley.

Responsibilities:
- Design and implement complex CI/CD pipelines in Python, leveraging cutting-edge Python packaging, dependency management, and CI/CD practices.
- Optimize the speed and reliability of builds.
- Define test automation tools, architecture, and integration with CI/CD platforms, and drive test automation implementation in Python.
- Implement configuration management to set standards and best practices.
- Manage and optimize cloud infrastructure resources: GCP, AWS, or Azure.
- Collaborate with development teams to understand application requirements and optimize deployment processes.
- Work closely with operations teams to ensure a smooth transition of applications into production.
- Develop and maintain documentation for system configurations, processes, and procedures.

Eligibility:
- 5-12 years of experience in DevOps, with a minimum of 2-5 years of experience in the Python build ecosystem.
- Python packaging, distribution, concurrent builds, dependencies, environments, test framework integrations, linting: pip, Poetry, uv, Flit.
- CI/CD: pylint, coverage.py, cProfile, Python scripting, Docker, Kubernetes, IaC (Terraform, Ansible, Puppet, Helm).
- Platforms: TeamCity (preferred), Jenkins, GitHub Actions, CircleCI, or TravisCI.
- Test automation: pytest, unittest, integration tests, Playwright (preferred); see the sketch below.
- Cloud platforms: AWS, Azure, or GCP, and platform-specific CI/CD services and tools.
- Familiarity with logging and monitoring tools (e.g. Prometheus, Grafana).
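As a small example of the pytest-based test automation named above, a self-contained test module of the kind such a CI pipeline would run under coverage.py; the helper function and version format are hypothetical.

```python
# Minimal sketch: a pytest unit test of a packaging-version helper, the kind of check
# a Python CI pipeline would run under coverage.
# Typical invocations: `pytest -q` and `coverage run -m pytest && coverage report`
import pytest


def bump_patch(version: str) -> str:
    """Increment the patch component of a 'major.minor.patch' version string."""
    major, minor, patch = (int(part) for part in version.split("."))
    return f"{major}.{minor}.{patch + 1}"


@pytest.mark.parametrize(
    ("version", "expected"),
    [("1.0.0", "1.0.1"), ("0.9.41", "0.9.42")],
)
def test_bump_patch(version, expected):
    assert bump_patch(version) == expected


def test_bump_patch_rejects_malformed_input():
    with pytest.raises(ValueError):
        bump_patch("not-a-version")
```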
Posted 1 week ago
5.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About the Role:
Grade Level (for internal use): 11

The Role: Lead Software Engineering

The Team: Our team is responsible for the architecture, design, development, and maintenance of technology solutions to support the Sustainability business unit within Market Intelligence and other divisions. Our program is built on a foundation of inclusivity, enablement, adaptability, and respect, which fosters an environment of open communication and trust. We take pride in each team member's accountability and responsibility to move us forward in our strategic initiatives. Our work is collaborative; we work transparently with others within our business unit and across the entire organization.

The Impact: As a Lead, Cloud Engineering at S&P Global, you will be instrumental in streamlining the software development and deployment of our applications to meet the needs of our business. Your work ensures seamless integration and continuous delivery, enhancing the platform's operational capabilities to support our business units. You will collaborate with software engineers and data architects to automate processes, improve system reliability, and implement monitoring solutions. Your contributions will be vital in maintaining high availability, security, and performance standards, ultimately leading to the delivery of impactful, data-driven solutions.

What's in it for you:
- Career Development: Build a meaningful career with a leading global company at the forefront of technology.
- Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions.
- Skill Enhancement: Enhance your software development skills on an enterprise-level platform.
- Versatile Experience: Gain full-stack experience and exposure to cloud technologies.
- Leadership Opportunities: Mentor peers and influence the product's future as part of a skilled team.

Key Responsibilities:
- Design and develop scalable cloud applications using various cloud services.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Implement cloud security best practices and ensure compliance with industry standards.
- Monitor and optimize application performance and reliability in the cloud environment.
- Troubleshoot and resolve issues related to our applications and services.
- Stay updated with the latest cloud technologies and trends.
- Manage our cloud instances and their lifecycle to guarantee a high degree of reliability, security, scalability, and confidence at any given time.
- Design and implement CI/CD pipelines to automate software delivery and infrastructure changes.
- Collaborate with development and operations teams to improve collaboration and productivity.
- Manage and optimize cloud infrastructure and services.
- Implement configuration management tools and practices.
- Ensure security best practices are followed in the deployment process.

What We're Looking For:
- Bachelor's degree in Computer Science or a related field.
- Minimum of 10+ years of experience in a cloud engineering or related role.
- Proven experience in cloud development and deployment.
- Proven experience in agile and project management.
- Expertise with cloud services (AWS, Azure, Google Cloud).
- Experience with EMR, EKS, Glue, Terraform, and cloud security.
- Proficiency in programming languages such as Python, Java, Scala, and Spark.
- Strong implementation experience with AWS services (e.g. EC2, ECS, ELB, RDS, EFS, EBS, VPC, IAM, CloudFront, CloudWatch, Lambda, S3).
- Proficiency in scripting languages such as Bash, Python, or PowerShell.
- Experience with CI/CD tools like Azure CI/CD.
- Experience with SQL and MS SQL Server.
- Knowledge of containerization technologies like Docker and Kubernetes.
- Nice to have: knowledge of GitHub Actions, Redshift, and machine learning frameworks.
- Excellent problem-solving and communication skills.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.
Posted 1 week ago
12.0 - 15.0 years
40 - 60 Lacs
Hyderabad
Work from Office
- Strong in JavaScript frameworks.
- Strong in HLDs and LLDs.
- Strong in system design and database schemas.
- Excellent in coding conventions and quality standards.
- Experience with third-party integrations (REST APIs, SOAP APIs, XML, JSON), SaaS applications, and AWS (S3, EKS, RDS, EC2).

Required Candidate profile
- Experience building applications (profilers, APM tools, security scanning tools).
- Experience with CI/CD tooling.
- Competency in a frontend framework/library, i.e. React, Angular, or Node.js.
- Developing production code in TypeScript and React.
Posted 1 week ago
1.0 - 3.0 years
11 - 16 Lacs
Pune
Work from Office
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers, and consumers worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage, and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences, and belief systems, the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

ZS's Platform Development team designs, implements, tests, and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact, and contribute to better health outcomes.

What you'll do
- As part of our full-stack product engineering team, you will build multi-tenant cloud-based software products/platforms and internal assets that leverage cutting-edge technologies on the Amazon AWS cloud platform.
- Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest quality multi-tenant software that can be productized.
- Work with junior developers to implement large features that are on the cutting edge of Big Data.
- Be a technical leader to your team, and help them improve their technical skills.
- Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design.
- Work with product managers and architects to design product architecture and to work on POCs.
- Take immediate responsibility for project deliverables.
- Understand client business issues and design features that meet client needs.
- Undergo on-the-job and formal trainings and certifications, and constantly advance your knowledge and problem-solving skills.

What you'll bring
- 1-3 years of experience in developing software, ideally building SaaS products and services.
- Bachelor's degree in CS, IT, or a related discipline.
- Strong analytic, problem-solving, and programming ability.
- Good hands-on experience working with AWS services (EC2, EMR, S3, serverless stack, RDS, SageMaker, IAM, EKS, etc.).
- Experience coding in an object-oriented language such as Python, Java, or C#.
- Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies.
- Experience with development on the AWS (Amazon Web Services) platform is preferable.
- Experience in Linux shell or PowerShell scripting is preferable.
- Experience in HTML5, JavaScript, and JavaScript libraries is preferable.
- Good to have: pharma domain understanding.
- Initiative and drive to contribute.
- Excellent organizational and task management skills.
- Strong communication skills.
- Ability to work in global cross-office teams.
- ZS is a global firm; fluency in English is required.

Perks & Benefits
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth, and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths, and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At www.zs.com
Posted 1 week ago
10.0 - 15.0 years
35 - 40 Lacs
Chennai
Work from Office
We seek an experienced and dynamic DevOps Architect to lead technical initiatives and manage a high-performing DevOps team in an IT services environment. This role combines technical expertise in AWS and DevOps practices with strong leadership and team management skills to manage the DevOps team.

Primary Skills:
- AWS Services: Expertise in AWS services such as EC2, S3, RDS, Lambda, and Kubernetes (EKS).
- DevOps Practices: Strong experience in implementing CI/CD pipelines, infrastructure automation, and tools like Jenkins, Terraform, and AWS CodePipeline.
- Cloud Architecture: Experience in designing secure, scalable, and cost-effective cloud infrastructure.
- Team Leadership: Proven ability to manage and mentor a team of DevOps engineers, track progress, and ensure timely delivery.
- Monitoring & Troubleshooting: Expertise in AWS CloudWatch, CloudTrail, and performance optimization techniques.

Roles and Responsibilities
Cloud Infrastructure Design and Implementation:
- Architect and oversee the deployment of secure, scalable, and cost-effective AWS cloud solutions using services like EC2, S3, RDS, Lambda, and Kubernetes (EKS).
- Ensure AWS cloud architectures align with industry best practices, security standards, and client requirements.
DevOps Strategy and Execution:
- Work with the client team to define and implement DevOps strategies, including CI/CD pipeline creation, automated testing, and infrastructure automation using tools like Jenkins, Terraform, and AWS CodePipeline.
- Optimize existing DevOps workflows to improve efficiency, reliability, and scalability.
- Understand the client-side technical environment and projects and propose solutions and recommendations.
Team Management and Leadership:
- Manage and mentor a team of DevOps engineers, ensuring skill development and alignment with project goals.
- Assign tasks, track progress, and ensure timely delivery of client deliverables.
- Foster a collaborative, innovative, and results-driven team culture.
Client Engagement and Stakeholder Collaboration:
- Act as a key point of contact for clients, understanding their needs and translating business requirements into technical solutions.
- Collaborate with cross-functional teams, including development, QA, and operations, to ensure seamless project execution.
Monitoring, Troubleshooting, and Performance Optimization:
- Implement monitoring, logging, and alerting systems using AWS CloudWatch, CloudTrail, and other tools.
- Proactively identify and resolve performance bottlenecks and system issues to ensure high availability and reliability.
Process Improvement and Innovation:
- Continuously evaluate and recommend new tools, technologies, and practices to enhance team performance and service quality.
- Drive the adoption of emerging trends such as DevSecOps and Infrastructure as Code (IaC).
Posted 1 week ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Overview
Technology for today and tomorrow
The Boeing India Engineering & Technology Center (BIETC) is a 5,500+ engineering workforce that contributes to global aerospace growth. Our engineers deliver cutting-edge R&D, innovation, and high-quality engineering work in global markets, and leverage new-age technologies such as AI/ML, IIoT, Cloud, Model-Based Engineering, and Additive Manufacturing, shaping the future of aerospace.

People-driven culture
This role will be based out of Bengaluru, India. Employer will not sponsor applicants for employment visa status.

Position Responsibilities:
People & Strategy
- Hire, coach, and retain 25-40 full-stack software developers and managers across India and Poland, shaping a culture of psychological safety, inclusion, and mission focus.
- Manage resource allocation to support existing and emergent products, while growing the career aspirations of talented software development teams.
- Partner with Manufacturing, Engineering, Supply-Chain, and AI leaders to translate capability gaps into clear product backlogs and measurable OKRs.
- Drive adoption of modern agile frameworks (Scrum/SAFe) and Lean software metrics (cycle time, MTTR, DORA).
- Lead quarterly architecture reviews and technology-radar sessions, and invest in upskilling developers on server-side rendering, micro-frontends, AWS artifacts and services, and generative-AI patterns.
- Define a robust and comprehensive software support model for products in production that ensures business continuity, cost-effective support, and high reliability of our products.

Execution Excellence
- Own the full software development ecosystem for multiple products, from roadmap to production support, enforcing engineering best practices (code review, test automation, trunk-based development).
- Define and operate global DevSecOps pipelines: Git-based workflows, Jenkins/GitHub Actions/Argo CD, IaC (Terraform), SCA/SAST/DAST scanning, SBOM generation, and automated container promotion across IL2-IL5 environments.
- Champion observability: centralized logging, distributed tracing, custom metrics, and synthetic tests, using tools such as Grafana, Prometheus, Splunk, and OpenTelemetry.
- Ensure every service adheres to data-classification rules such as ITAR, EAR, and CUI.

Basic Qualifications
- Related work experience, relevant military experience, or an advanced degree preferred but not required.
- 10+ years of professional software-engineering experience; 5+ years leading multi-disciplinary teams and managers that ship production software.
- Proven track record delivering large-scale full-stack solutions with React/TypeScript, Java 11+/Spring Boot, and Node.js/Express in production.
- Demonstrated mastery of CI/CD and DevSecOps in regulated environments: pipelines that embed security gates, artifact signing, and infrastructure-as-code for AWS (CloudFormation or Terraform).
- Deep knowledge of AWS GovCloud services: VPC, EKS, S3, RDS/Aurora, Lambda, API Gateway, Secrets Manager, CloudWatch, and KMS.
- Hands-on experience running distributed monitoring/observability stacks (Grafana, Prometheus, Splunk, ELK, OpenTelemetry).
- Familiarity with data-classification and export-control frameworks (ITAR, EAR, CMMC) and the ability to build compliant technical workflows.
- Strong understanding of contemporary AI/ML patterns (LLM orchestration, MLOps, vector search, edge inference) and how to integrate them into transactional systems.
- Ability to travel between India and Poland on some regular frequency.

Preferred Qualifications
- A Master's or PhD in computer science or a related field from a top-rated institution.
- Prior leadership in an aerospace, defense, or highly regulated industry; exposure to design-to-manufacture value streams.
- Experience with domain-driven design, microservice and event-driven architectures (Kafka, SNS/SQS), and micro-frontend patterns (Module Federation).
- Background in global team orchestration (follow-the-sun release management, 24/7).
- Demonstrated working knowledge of secure coding guidelines (NIST 800-53, OWASP ASVS, STIG hardening).

Leadership Competencies
- Customer-focused: frames technical decisions around user value and mission impact.
- Bar-raiser: sets a high technical bar, models quality code, and fosters a feedback-rich environment.
- Data-driven: uses leading and lagging metrics to improve predictability and reliability.
- Continuously challenges the status quo with automation, self-service, and reusable platform components.
- Earns trust with cross-functional stakeholders by communicating effectively, managing risks transparently, and delivering on commitments.
- Best in class: promotes best practices that enable agile and rapid application development.
- People focus: coach, mentor, and develop talented individuals ranging from junior to very senior levels of experience.
- Team building: build a cohesive teaming environment across the Poland, India, and US development teams.

What You'll Own in Your First 12 Months
- Modernize the existing global CI/CD pipeline to achieve a secure path-to-prod of less than 2 hours for priority apps.
- Ship two new AI-enabled apps with microservices to production with 99.9% availability.
- Stand up an enterprise observability stack that drives MTTR below 30 minutes for critical workflows.
- Recruit and onboard 10 full-stack software developers, DevOps engineers, and data engineers.
- Publish a technology roadmap aligning React 18, Spring Boot 3, Java 21 LTS, and GenAI capabilities with product OKRs.

Desired Skills (Preferred Qualifications):
Related work experience, relevant military experience, or an advanced degree preferred but not required.

Typical Education & Experience:
Typically 21 or more years' related work experience or relevant military experience; an advanced degree (e.g. Bachelor's or Master's) is preferred but not required.

Relocation: This position does offer relocation within India.
Applications for this position will be accepted until Jun. 10, 2025.
Export Control: This is not an Export Control position.
Relocation: This position offers relocation based on candidate eligibility.
Visa Sponsorship: Employer will not sponsor applicants for employment visa status.
Shift: Not a Shift Worker (India)
Posted 1 week ago
5.0 - 10.0 years
13 - 23 Lacs
Bengaluru
Remote
Immediate hiring for DevOps Technical Specialist - EKS & Service Mesh Architect

Position: DevOps Technical Specialist - EKS & Service Mesh Architect
Experience: 5+ years
Location: Remote
Notice period: Immediate

Responsibilities:
- Implement and optimize EKS configurations, including node management, scaling policies, and security controls.
- Troubleshoot complex issues within the EKS environment, including networking, storage, and compute.
- Define and implement best practices for EKS governance, cost optimization, and upgrades.
- Design, implement, and manage a robust service mesh infrastructure (e.g., Istio, Linkerd) to enhance inter-service communication, security (mTLS), traffic management (routing, load balancing), and observability.
- Define and enforce service mesh policies for authorization and encryption.
- Implement advanced service mesh features such as traffic shifting, canary deployments, and fault injection for resilience testing (see the sketch below).
- Integrate the service mesh with existing monitoring, logging, and tracing systems.
- Develop and maintain comprehensive IaC using tools like Terraform or CloudFormation to provision and manage EKS clusters and related infrastructure.
- Automate deployment pipelines, configuration management, and operational tasks to improve efficiency and consistency.
- Collaborate closely with development teams to understand their application requirements and provide guidance on best practices for containerization, Kubernetes deployment, and service mesh integration.
- Act as a technical Subject Matter Expert (SME) for EKS and Service Mesh within the organization.

Apply to: hrteam10@ontimesolutions.in
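As a sketch of the traffic-shifting and canary work described above, the snippet below creates a weighted Istio VirtualService through the Kubernetes Python client's CustomObjectsApi; the host, subsets, namespace, and 90/10 split are hypothetical, and in practice such objects are usually managed through GitOps/IaC rather than ad-hoc scripts.

```python
# Minimal sketch: create an Istio VirtualService that shifts 10% of traffic to a canary subset.
# Host, subsets, namespace, and weights are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "example-svc", "namespace": "demo"},
    "spec": {
        "hosts": ["example-svc"],
        "http": [{
            "route": [
                {"destination": {"host": "example-svc", "subset": "stable"}, "weight": 90},
                {"destination": {"host": "example-svc", "subset": "canary"}, "weight": 10},
            ]
        }],
    },
}

custom.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="demo",
    plural="virtualservices",
    body=virtual_service,
)
```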
Posted 1 week ago
6.0 - 9.0 years
27 - 42 Lacs
Bengaluru
Work from Office
About the role
As an SDET Data Test Automation Engineer, you will make an impact by ensuring the integrity of data transformations on our data platform, which works with large datasets, and by enhancing our testing capabilities with UI and API automation. You will be a valued member of the AI Analytics group and work collaboratively with development and product teams to deliver robust, scalable, and high-performing data test tools.

In this role, you will:
- Design, develop, and maintain automated test scripts for data validation, transformation, API, and UI testing.
- Conduct data testing using Python frameworks like pandas, PySpark, and pytest (see the sketch below).
- Analyze test results, identify issues, and work on resolutions.
- Ensure that automated tests are integrated into the CI/CD pipeline.
- Work on data transformation processes, including validating rule-based column replacements and exception handling.
- Collaborate with development and product teams to identify test requirements and strategies.
- Automate UI and API tests using Playwright and Requests.
- Validate large datasets to ensure data integrity and performance.

Work model
We believe hybrid work is the way forward as we strive to provide flexibility wherever possible. Based on this role's business requirements, this is a hybrid position requiring 3 days a week at a client location or a Cognizant office. Regardless of your working arrangement, we are here to support a healthy work-life balance through our various wellbeing programs.

What you must have to be considered
- Strong programming skills in Python.
- Proficiency in data manipulation and testing libraries like pandas, PySpark, and pytest.
- Hands-on experience with AWS services like S3 and Lambda.
- Familiarity with SQL for data transformation tasks.

These will help you stand out
- Strong experience with API and UI test automation tools and libraries.
- Excellent problem-solving and analytical skills, and strong communication and teamwork abilities.
- Familiarity with CI/CD pipelines and tools like Jenkins, Docker, and Kubernetes.
- Good to have: experience with databases like Trino, Iceberg, Snowflake, and Postgres.
- Good to have: experience with Polars and Fugue.
- Experience with cloud tools like Kubernetes (EKS), AWS Glue, and ECS.
- Certification: AWS, Python.

We're excited to meet people who share our mission and can make an impact in a variety of ways. Don't hesitate to apply, even if you only meet the minimum requirements listed. Think about your transferable experiences and unique skills that make you stand out as someone who can bring new and exciting things to this role.
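A minimal sketch of the pandas/pytest data-validation style this role centres on; the file path, column names, and rules are hypothetical.

```python
# Minimal sketch: pytest checks over a pandas DataFrame, the kind of data-validation
# test this role automates. The file path, column names, and rules are hypothetical.
import pandas as pd
import pytest


@pytest.fixture(scope="module")
def orders() -> pd.DataFrame:
    return pd.read_csv("data/orders.csv")  # hypothetical extract from the data platform


def test_no_duplicate_order_ids(orders):
    assert not orders["order_id"].duplicated().any()


def test_amounts_are_positive(orders):
    assert (orders["amount"] > 0).all()


def test_status_values_are_expected(orders):
    allowed = {"NEW", "SHIPPED", "CANCELLED"}
    assert set(orders["status"].dropna().unique()) <= allowed
```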
Posted 1 week ago
3.0 - 6.0 years
8 - 10 Lacs
Noida
Work from Office
Role Overview
We are looking for a skilled and enthusiastic DevOps Engineer with 1-4 years of experience to join our team. This role is ideal for candidates passionate about automation, cloud infrastructure, and modern DevOps practices, particularly within the AWS ecosystem.
Key Responsibilities:
Build and manage robust CI/CD pipelines using tools like AWS CodePipeline, Jenkins, or GitHub Actions.
Deploy and maintain infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Design, implement, and manage scalable cloud infrastructure on AWS (EC2, ECS, EKS, RDS, Lambda, S3, etc.).
Containerize applications using Docker and orchestrate with Amazon ECS or EKS (Kubernetes).
Monitor infrastructure using AWS CloudWatch and CloudTrail, and integrate with tools like Prometheus and Grafana.
Ensure security best practices across AWS resources, including IAM, VPC, encryption, and backups (a minimal automated check is sketched after this posting).
Collaborate with development and QA teams to streamline deployments and ensure system reliability.
Automate repetitive tasks and deployments to improve efficiency and reduce human error.
Must-Have Skills:
1-4 years of experience in DevOps or Cloud Engineering roles.
Hands-on experience with AWS services (EC2, S3, IAM, Lambda, RDS, VPC, CloudWatch, etc.).
Proficiency with CI/CD tools like Jenkins, AWS CodeBuild/CodeDeploy, GitLab CI/CD, or GitHub Actions.
Strong scripting skills in Bash, Python, or Shell.
Experience with Docker and container orchestration using ECS or EKS.
Working knowledge of Infrastructure as Code with Terraform or CloudFormation.
Git and version control best practices.
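To illustrate the security and automation responsibilities above, here is a hedged boto3 sketch that flags S3 buckets lacking a default server-side encryption configuration. It relies only on public boto3/botocore calls; AWS credentials are assumed to come from the standard credential chain, and the check itself is just one example of many possible guardrails.

```python
# Hedged boto3 sketch: list S3 buckets without default server-side encryption.
# Assumes credentials are resolved via the standard AWS credential chain.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_default_encryption() -> list[str]:
    s3 = boto3.client("s3")
    missing = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                missing.append(name)
            else:
                raise  # surface permission or transient errors instead of hiding them
    return missing

if __name__ == "__main__":
    for name in buckets_missing_default_encryption():
        print(f"Bucket without default encryption: {name}")
```

A script like this would typically run as a scheduled CI/CD job or Lambda so policy drift is caught automatically rather than during manual reviews.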
Posted 1 week ago
2.0 - 4.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Zeta Global is looking for an experienced Machine Learning Engineer with industry-proven, hands-on experience of delivering machine learning models to production to solve business problems.
To be a good fit to join our AI/ML team, you should ideally:
Be a thought leader who can work with cross-functional partners to foster a data-driven organisation.
Be a strong team player, with experience contributing to a large project as part of a collaborative team effort.
Have extensive knowledge and expertise with machine learning engineering best practices and industry standards.
Empower the product and engineering teams to make data-driven decisions.
What you need to succeed:
2 to 4 years of proven experience as a Machine Learning Engineer in a professional setting.
Proficiency in any programming language (Python preferable).
Prior experience in building and deploying machine learning systems.
Experience with containerization: Docker & Kubernetes.
Experience with AWS cloud services like EKS, ECS, EMR, Lambda, and others.
Fluency with workflow management tools like Airflow or dbt.
Familiarity with distributed batch compute technologies such as Spark.
Experience with modern data warehouses like Snowflake or BigQuery.
Knowledge of MLflow, Feast, and Terraform is a plus (a minimal MLflow tracking sketch follows this posting).
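Since MLflow is called out as a plus, here is a minimal, hedged tracking sketch: it trains a toy scikit-learn classifier and logs parameters, a metric, and the model artifact. The experiment name, dataset, and local tracking store are illustrative assumptions.

```python
# Minimal MLflow tracking sketch with a toy scikit-learn model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demo-iris-classifier")  # illustrative experiment name
with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")  # stores the serialized model as a run artifact
```

In a production setting the same pattern points at a remote tracking server and model registry so deployments (e.g., on EKS or ECS) can pull versioned models rather than ad hoc files.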
Posted 1 week ago
3.0 - 5.0 years
4 - 6 Lacs
Bengaluru
Work from Office
About the role:
The expectations from this role are two-fold: a Backend Developer who can perform data analysis and create outputs for Gen AI related tasks while supporting standard data analysis tasks, and a Gen AI expert who can effectively understand and translate business requirements and provide a Gen AI powered output independently.
The person should be able to:
Analyse data as needed from the data tables to generate data summaries and insights that are used for Gen AI and non-Gen AI work.
Collaborate effectively with cross-functional teams (engineering, product, business) to ensure alignment and understanding.
Create and use AI assistants to solve business problems (a minimal assistant sketch follows this posting).
Be comfortable providing and advocating recommendations for a better user experience.
Support product development teams, enabling them to create and manage APIs that interact with the Gen AI backend data and create a next-gen experience for our clients.
Visualize and create data flow diagrams and materials required for effective coordination with DevOps teams.
Manage the deployment of related APIs in Kubernetes or other relevant environments.
Provide technical guidance to the UI development and data analyst teams on Gen AI best practices.
Coordinate with business teams to ensure the outputs are aligned with expectations.
Continuously integrate new developments in the Gen AI space into our solutions and provide product and non-product implementation ideas to fully leverage the potential of Gen AI.
The person should have:
Proven experience as a data analyst, with a strong track record of delivering impactful insights and recommendations.
Strong working knowledge of OpenAI, Gemini or other Gen AI platforms, and prior experience in creating and optimizing Gen AI models.
Familiarity with API and application deployment, data pipelines and workflow automation.
A high-agency mindset with strong critical thinking skills.
Strong business acumen to proactively identify what is right for the business.
Excellent communication and collaboration skills.
Technical Skills: Python, SQL, AWS Services (Lambda, EKS), Apache Airflow, CI/CD (Serverless Framework), Git, Jira / Trello.
It will be great to have:
A good understanding of the marketing/advertising product industry.
At least 1 Gen AI project in production.
Strong programming skills in Python or similar languages.
Prior experience working as a DevOps engineer or working closely with DevOps.
A strong background in data management.
What we offer:
The opportunity to be at the forefront of AI-powered data analysis and make a real impact.
Instant gratification: see your work in real action immediately with our agile product development timelines.
A collaborative and supportive work environment where you can learn and grow.
Competitive salary and benefits package.
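As a small illustration of the "create and use AI assistants" responsibility, here is a hedged sketch of a helper that sends a pandas data summary plus a business question to the OpenAI chat completions API. The model name, prompt wording, and environment-variable API key are assumptions, not details from the posting.

```python
# Hedged Gen AI assistant sketch: pandas summary + business question -> chat model.
# Model name and prompt structure are illustrative assumptions.
import os

import pandas as pd
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def summarize_for_business(df: pd.DataFrame, question: str, model: str = "gpt-4o-mini") -> str:
    """Send basic dataframe statistics and a business question to a chat model."""
    data_summary = df.describe(include="all").to_string()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a data analyst assistant. Answer concisely."},
            {"role": "user", "content": f"Data summary:\n{data_summary}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sales = pd.DataFrame({"region": ["N", "S", "N", "E"], "revenue": [120, 80, 150, 95]})
    print(summarize_for_business(sales, "Which region looks strongest and why?"))
```

Wrapped behind a small API (e.g., FastAPI on EKS or Lambda), a helper like this is the kind of Gen AI backend the product teams described above would consume.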
Posted 2 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Pune, Bengaluru
Work from Office
Location: Pune / Bangalore (Onsite Only)
Experience: 6+ Years (4+ years relevant in Camunda)
Note: No Remote Option | Subcon Role | Must be ready to work as a Subcon
Screening: Strict profile screening before submission
Primary Skills & Qualifications:
Hands-on experience with Camunda v8 (design, coding, debugging)
Ability to translate business requirements into Camunda workflows
Proficient in Java, Spring Boot, and Microservices architecture
REST / JSON API integration experience (a minimal integration sketch follows this posting)
Exposure to QA, Automation, and CI/CD pipelines
Familiar with DevOps tools: Kubernetes, Terraform, Helm charts, EKS
Good to have: Frontend experience with ReactJS or Angular
Soft Skills:
Strong communication and stakeholder management
Effective collaboration with cross-functional teams
Problem-solving and debugging capabilities
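For the REST / JSON API integration requirement above, here is a minimal hedged sketch using Python's requests library. The base URL, endpoint path, process key, and payload schema are hypothetical placeholders (deliberately not Camunda's actual API) and would need to be adapted to the real workflow service and its authentication.

```python
# Hedged REST/JSON integration sketch; the endpoint and payload are hypothetical
# placeholders for a workflow service, not Camunda's real API surface.
import requests

BASE_URL = "https://workflow.example.internal/api"  # hypothetical gateway

def start_order_process(order_id: str, amount: float) -> dict:
    """POST a JSON payload to kick off a workflow instance and return the parsed response."""
    response = requests.post(
        f"{BASE_URL}/process-instances",
        json={"processKey": "order-fulfilment", "variables": {"orderId": order_id, "amount": amount}},
        timeout=10,
    )
    response.raise_for_status()  # fail fast on 4xx/5xx so callers can decide whether to retry
    return response.json()

if __name__ == "__main__":
    print(start_order_process("ORD-1001", 249.99))
```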
Posted 2 weeks ago
6.0 - 8.0 years
0 - 2 Lacs
Bengaluru
Hybrid
Role & responsibilities: Outline the day-to-day responsibilities for this role. Preferred candidate profile: Specify required role expertise, previous job experience, or relevant certifications.
Posted 2 weeks ago
8.0 - 11.0 years
35 - 50 Lacs
Chennai
Work from Office
Job Summary
We are seeking an experienced Infra Architect with 8 to 11 years of experience to join our dynamic team. The ideal candidate will have expertise in infra CI/CD pipelines, Bicep, Azure DevOps, Terraform, PowerShell, and ARM. This role requires domain experience in the Operations and Process Manufacturing industry. The work model is hybrid with day shifts and no travel required.
Responsibilities:
Design and implement infrastructure solutions using infra CI/CD pipelines, Bicep, Azure DevOps, Terraform, PowerShell, and ARM.
Oversee the deployment and management of cloud infrastructure on Azure.
Provide technical guidance and support to development teams to ensure seamless integration and deployment processes.
Collaborate with cross-functional teams to understand infrastructure requirements and deliver solutions that meet business needs.
Monitor and optimize the performance, scalability, and reliability of infrastructure systems.
Develop and maintain infrastructure as code (IaC) scripts to automate the provisioning and management of resources.
Ensure compliance with security policies and best practices in all infrastructure activities (a minimal tagging-compliance sketch follows this posting).
Troubleshoot and resolve infrastructure-related issues in a timely manner.
Stay updated with the latest industry trends and technologies to continuously improve infrastructure solutions.
Document infrastructure designs, processes, and procedures for future reference.
Participate in code reviews and provide constructive feedback to team members.
Contribute to the development of infrastructure standards and best practices.
Support the operations team in maintaining high availability and performance of production systems.
Qualifications:
Possess strong expertise in infra CI/CD pipelines, Bicep, Azure DevOps, Terraform, PowerShell, and ARM.
Have a deep understanding of cloud infrastructure, particularly on Azure.
Demonstrate experience in the Operations and Process Manufacturing industry.
Exhibit excellent problem-solving and troubleshooting skills.
Show proficiency in scripting and automation using PowerShell.
Have a solid understanding of infrastructure as code (IaC) principles.
Display strong communication and collaboration skills.
Be able to work effectively in a hybrid work model.
Have a commitment to continuous learning and professional development.
Possess a proactive and results-oriented mindset.
Show attention to detail and a high level of accuracy in work.
Demonstrate the ability to work independently and as part of a team.
Have a passion for technology and innovation.
Certifications Required: Azure Solutions Architect Expert, Terraform Associate, Azure DevOps Engineer Expert.
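As one concrete example of the compliance and IaC-governance responsibilities above, here is a hedged Azure SDK for Python sketch that flags resource groups missing required governance tags. The tag policy and the subscription-id environment variable are illustrative assumptions; in practice the same check is often expressed in PowerShell or Azure Policy instead.

```python
# Hedged Azure SDK sketch: report resource groups missing governance tags.
# The required tag set is an assumed policy, not one from the posting.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # illustrative policy

def resource_groups_missing_tags(subscription_id: str) -> dict[str, set[str]]:
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
    findings = {}
    for rg in client.resource_groups.list():
        tags = set((rg.tags or {}).keys())  # tags can be None on untagged groups
        missing = REQUIRED_TAGS - tags
        if missing:
            findings[rg.name] = missing
    return findings

if __name__ == "__main__":
    sub = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed to be set by the pipeline
    for name, missing in resource_groups_missing_tags(sub).items():
        print(f"{name}: missing tags {sorted(missing)}")
```

Run from an Azure DevOps pipeline stage, a report like this keeps tagging drift visible between Terraform or Bicep deployments.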
Posted 2 weeks ago