8.0 - 12.0 years
13 - 17 Lacs
Ahmedabad
Work from Office
About the Role:
Grade Level (for internal use): 11

The Team: We are looking for a highly motivated, enthusiastic, and skilled engineering lead for Market Intelligence. We strive to deliver solutions that are sector-specific, data-rich, and hyper-targeted for evolving business needs. Our software development leaders are involved in the full product life cycle, from design through release.

The Impact: We are looking for a highly motivated Senior Cloud Engineer with a strong background in tech operations to join our IT team. The successful candidate will be responsible for designing, deploying, and managing cloud infrastructure while also ensuring seamless tech operations. This role requires a blend of cloud engineering expertise and operational knowledge to enhance system performance and reliability.

What's in it for you: You will have the opportunity to collaborate with global stakeholders to analyze and formalize requirements. This is a fast-paced agile environment that deals with huge volumes of data, so you'll have an opportunity to sharpen your software development and data skills and work on an emerging technology stack. You will work on Tier-1 applications that are in the critical path for the business, with the ability to work on cutting-edge technologies and to grow within an organization that is part of the global team.

Responsibilities: Design, implement, and manage cloud-based solutions across multiple platforms (AWS, Azure, Google Cloud). Develop and maintain cloud architecture and infrastructure, ensuring scalability and security. Implement automation for cloud deployment and management using Infrastructure as Code (IaC) tools like Terraform or CloudFormation. Monitor cloud resources for performance, cost management, and security compliance. Oversee day-to-day tech operations, ensuring systems are reliable, available, and performant. Collaborate with development teams to support application deployment and operational needs. Develop and enforce operational policies and procedures to enhance system reliability and security. Manage incident response and problem resolution, ensuring minimal downtime and impact on business operations.

Collaboration and Documentation: Work closely with cross-functional teams to identify and implement improvements in cloud and operational processes. Conduct regular performance reviews and capacity planning to ensure optimal resource utilization. Document cloud architecture, operational procedures, and best practices for knowledge sharing and training.

What We're Looking For: Bachelor's degree in Computer Science, Information Technology, or a related field. 8-12 years of experience in cloud engineering and tech operations. Strong expertise in cloud platforms (AWS, Azure, Google Cloud) and services. Proficiency in scripting languages (e.g., Python, Bash) and automation tools. Experience with container orchestration tools (e.g., Kubernetes, Docker). Solid understanding of networking, security protocols, and compliance in cloud environments. Excellent analytical and troubleshooting skills, with a focus on operational excellence. Strong communication skills and the ability to work collaboratively in a team environment.

Preferred Qualifications: Relevant cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). Familiarity with DevOps practices and CI/CD pipelines. Experience with monitoring tools (e.g., Prometheus, Grafana, ELK Stack).
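The monitoring responsibilities above (cost management and security compliance across cloud resources) are the kind of task that is typically scripted. The following is a minimal, illustrative Python/boto3 sketch, not part of the posting: it lists EC2 instances missing a required cost-allocation tag. The region and tag key are assumptions.

import boto3

REQUIRED_TAG = "CostCenter"  # hypothetical cost-allocation tag key

def untagged_instances(region="us-east-1"):
    """Return IDs of EC2 instances in the region that lack the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    print("Instances missing tag:", untagged_instances())

A report like this could feed a compliance dashboard or a tagging-enforcement job.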
About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Posted 1 week ago
10.0 - 15.0 years
12 - 17 Lacs
Gurugram
Work from Office
About the Role:
Grade Level (for internal use): 12

S&P Global Mobility

The Role: Associate Director - Senior DevOps Engineer

The Team: Lead a newly established horizontal function within the Plan & Build technology team, where innovation and collaboration are at the forefront. The team will integrate existing DevOps capabilities with exciting greenfield opportunities, driving the development of new products while enhancing the support for our current offerings.

The Impact: In an environment where agility is paramount, this role is instrumental in fostering that agility through the implementation of automation, immutable pipelines, and hands-free deployments. By streamlining processes, you will enable development teams to deliver value more efficiently and predictably. Be part of a dynamic and expanding team focused on creating cutting-edge automotive forecasting products that are reshaping the industry.

What's in it for you: This position offers the chance to work on both existing and innovative new products across all facets of the automotive forecasting business, from vehicles to components. You will have the opportunity to design, build, and enhance every aspect of the delivery pipeline and infrastructure definitions. Engage in groundbreaking projects that leverage agentic workflows and generative AI, driving innovation in the automotive industry.

Responsibilities:
Delivery Pipeline Development: Design, implement, and maintain robust Continuous Integration and Continuous Deployment (CI/CD) pipelines to streamline software delivery processes. Review existing delivery pipelines and identify areas for improvement, ensuring best practices are followed.
Technical Operations Onboarding: Lead the onboarding process for technical operations, focusing on observability, incident management, telemetry, and other operational best practices. Collaborate with cross-functional teams to ensure effective knowledge transfer and training in DevOps practices.
Cloud Infrastructure Management: Review and build repeatable patterns for cloud infrastructure deployment, ensuring scalability, reliability, and security. Implement Infrastructure as Code (IaC) practices using tools such as Terraform, CloudFormation, or similar technologies.
Monitoring & Observability: Establish and maintain monitoring and observability tools to ensure system performance and reliability. Develop incident management processes and response strategies to minimize downtime and enhance system resilience.
Collaboration & Continuous Improvement: Work closely with development, QA, and operations teams to foster a culture of collaboration and continuous improvement. Stay updated on industry trends and best practices in DevOps and cloud technologies to drive innovation within the team.

What We're Looking For: 10+ years of experience in a DevOps role, with a strong focus on CI/CD, cloud infrastructure, and operational processes. Proven experience with cloud platforms such as AWS (preferred), Azure, or Google Cloud. Strong knowledge of containerization technologies (Docker, Kubernetes), configuration management tools (Ansible, Chef, Puppet), and source code management tools (GitLab, Azure DevOps). Excellent problem-solving skills and the ability to work effectively in a fast-paced environment. Strong communication and interpersonal skills, with the ability to collaborate with diverse teams.

About Company Statement: S&P Global delivers essential intelligence that powers decision making.
We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape. S&P Global Mobility draws on invaluable insights captured from automotive data to help our clients understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility: At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Posted 1 week ago
4.0 - 9.0 years
14 - 18 Lacs
Bengaluru
Work from Office
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity: "As a DevOps Engineer II adept at implementing and managing automation tools, you will collaborate with development teams to optimize processes. You will be responsible for configuring and maintaining CI/CD pipelines, ensuring efficient software deployment and system reliability, and proficient in cloud platforms, scripting, and monitoring tools to enhance overall system performance." - VP, Software Engineering.

What You'll Contribute: Collaborate in a DevOps environment where you will work closely with developers in automating new applications, features, and services. Support enterprise standards around technology, security compliance and systems interoperability for FICO products/solutions. Assist in developing scripts to automate development operations functions or processes, as well as cloud infrastructure automation and orchestration. Research and implement automation tools to replace existing manual processes and maintenance tasks. Collaborate with other engineers on design, analysis, architecture/modeling, implementation, unit testing, code reviews and process enhancements. Optimize performance and scalability as necessary to meet business goals. Deliver customer solutions using open source and commercial technologies. Deliver a full automation pipeline using CI/CD/DevOps concepts.

What We're Seeking: Bachelor's/Master's in Computer Science or related disciplines, or relevant software development experience. 4+ years of experience operating highly available, redundant, public cloud architecture solutions utilizing IaaS and PaaS. Hands-on experience with DevOps tools such as Jenkins, GitLab CI, Docker, Kubernetes, and cloud providers like AWS. Programming/Scripting: Proficiency in scripting languages (Groovy, Python, Shell, or others). Infrastructure as Code (IaC): Experience with tools like Terraform, Ansible, or CloudFormation. Operating Systems: Strong understanding of Linux/Unix and Windows systems administration, and experience with cloud operating systems. Version Control: Proficiency with Git and GitHub/GitLab/Bitbucket. Containerization & Orchestration: Solid experience in container technologies (Docker, Kubernetes). Networking & Security: Knowledge of networking, security, and web protocols. Problem Solving: Strong troubleshooting skills with the ability to analyze and solve complex technical issues. Collaboration & Communication: Strong interpersonal skills and the ability to work effectively with cross-functional teams.

Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO: At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics.
You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide. Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems. Lending: 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at www.fico.com/Careers

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
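Relating back to the DevOps Engineer II role above: its pipeline-automation duties often reduce to small scripts gating a deployment stage. Below is a hedged Python sketch, assuming kubectl is already configured against the target cluster; the deployment name is hypothetical.

import subprocess
import sys

def wait_for_rollout(deployment: str, namespace: str = "default", timeout: str = "120s") -> bool:
    """Return True if the Kubernetes deployment rolls out within the timeout."""
    result = subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{deployment}",
         "-n", namespace, f"--timeout={timeout}"],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    # A CI/CD stage can call this and fail the build if the rollout stalls.
    ok = wait_for_rollout("payments-api")  # hypothetical deployment name
    sys.exit(0 if ok else 1)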
Posted 1 week ago
6.0 - 7.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 7+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Cross-functional collaboration: automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing; manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2-4 PM.
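As a small illustration of the backup automation mentioned in this posting (on-demand RDS snapshots), here is a hedged Python/boto3 sketch; the instance identifier and region are placeholders, not details from the role.

import boto3
from datetime import datetime, timezone

def snapshot_rds(instance_id: str, region: str = "ap-south-1") -> str:
    """Create an on-demand RDS snapshot with a timestamped identifier."""
    rds = boto3.client("rds", region_name=region)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return snapshot_id

if __name__ == "__main__":
    print("Created snapshot:", snapshot_rds("orders-db"))  # hypothetical instance name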
Posted 1 week ago
4.0 - 7.0 years
8 - 15 Lacs
Pune
Work from Office
Design, implement, and optimize secure deployment, automation, and maintenance of cloud applications and services. Strong scripting skills in Python, Bash, or PowerShell. AWS/Azure: EC2, S3, RDS, Lambda, VPC, IAM. CI/CD tools (GitLab CI/CD, Jenkins). Required candidate profile: AWS/GCP/Azure Certified DevOps Engineer, AWS/GCP/Azure Certified SysOps Administrator, or AWS/GCP/Azure Certified Solutions Architect.
Posted 1 week ago
12.0 - 22.0 years
40 - 50 Lacs
Hyderabad
Work from Office
Cloud Architect

Description: Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch.

What are we looking for: We are looking for a talented Cloud Architect to lead the design, implementation, and optimization of cloud solutions across various platforms. You will be responsible for creating scalable, secure, and cost-effective cloud architectures that align with business objectives. The role involves working closely with engineering teams, business stakeholders, and other architects to ensure that cloud solutions are deployed efficiently and meet the needs of the organization.

Role: Cloud Architect
Location: Hyderabad
Shift timings: Overlapping US hours
Years of experience: 12+ years

Key Responsibilities: Cloud Architecture Design: Design and implement scalable, resilient, and secure cloud architectures that align with business and technical requirements. Solution Optimization: Continuously assess and optimize cloud solutions for cost efficiency, performance, and scalability. Cloud Security: Implement and enforce best practices for security across cloud environments, ensuring compliance with industry standards. Collaboration with Stakeholders: Work with business leaders, DevOps, engineers, and operations teams to understand needs and design effective cloud solutions. Technology Strategy: Define and implement a comprehensive cloud strategy for the organization, aligning technology with business goals. Cloud Migration: Lead the planning and execution of cloud migration projects, ensuring minimal disruption and maximum value. Automation & IaC: Develop and implement Infrastructure as Code (IaC) solutions using tools like Terraform or CloudFormation to automate the cloud infrastructure. Cloud Governance: Establish and maintain governance practices to ensure cloud infrastructure remains secure, compliant, and cost-effective. Documentation: Maintain detailed documentation for all cloud architecture designs, configurations, and procedures. Disaster Recovery: Design and implement disaster recovery and business continuity plans for cloud infrastructure. Mentorship: Provide guidance and mentorship to junior cloud engineers and architects.

Required Skills and Qualifications: 12+ years of experience in cloud architecture or related roles, with strong hands-on experience in cloud platform design. Extensive experience with cloud platforms (AWS, Azure, or multi-cloud environments). In-depth knowledge of cloud security practices and protocols. Expertise in Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Strong understanding of cloud migration strategies and best practices. Hands-on experience with containerization (e.g., Docker, Kubernetes) and microservices architecture. Strong problem-solving skills with the ability to identify and resolve complex cloud infrastructure challenges. Excellent communication, leadership, and collaboration skills.

Preferred Qualifications: Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). Experience with DevOps practices and CI/CD pipelines. Knowledge of serverless architecture (e.g., AWS Lambda, Azure Functions). Experience with cloud-native application development and related tools. Expertise in cloud cost optimization strategies and tools.

We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients for the digital age, but our top priority is our positive impact on human experience. We ease anxiety and fear around digital transformation and replace it with opportunity. Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.

About Company: About Launch IT: Launch IT India is a wholly owned subsidiary of The Planet Group (http://www.launchcg.com; http://theplanetgroup.com), a US company, and offers attractive compensation and a positive work environment for prospective employees. Launch is an entrepreneurial business and technology consultancy. We help businesses and people navigate from current state to future state. Technology, tenacity, and creativity fuel our solutions, with offices in Bellevue, Sacramento, Dallas, San Francisco, Hyderabad & Washington D.C. https://www.linkedin.com/company/launch-consulting-group-india/
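The Cloud Architect role above lists cloud cost optimization among its responsibilities. As an illustration only, this Python/boto3 sketch pulls last month's unblended cost grouped by service from AWS Cost Explorer; the date window and grouping are assumptions, not a prescribed approach.

import boto3
from datetime import date, timedelta

def monthly_cost_by_service():
    """Print last calendar month's unblended AWS cost, grouped by service."""
    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer API endpoint
    end = date.today().replace(day=1)                 # first day of this month
    start = (end - timedelta(days=1)).replace(day=1)  # first day of last month
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:.2f}")

if __name__ == "__main__":
    monthly_cost_by_service()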
Posted 1 week ago
8.0 - 13.0 years
10 - 15 Lacs
Chennai
Work from Office
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. We are looking for an Associate Software Architect with 8+ years of experience in AWS cloud infrastructure design, maintenance, and operations.

Key Responsibilities:
Infrastructure Architecture, Design & Management: Understand the existing architecture to identify and implement improvements. Design and execute the initial implementation of infrastructure. Define end-to-end DevOps architecture aligned with business goals and technical requirements. Architect and manage AWS cloud infrastructure for scalability, high availability, and cost efficiency, using services like EC2, Auto Scaling, Load Balancers, and Route 53 to ensure high availability and fault tolerance. Design and implement secure network architectures using VPCs, subnets, NAT gateways, security groups, NACLs, and private endpoints.
CI/CD Pipeline Management: Design, build, test, and maintain AWS DevOps pipelines for automated deployments across multiple environments (dev, staging, production).
Security & Compliance: Enforce least-privilege access controls to enhance security.
Monitoring & Optimization: Centralize monitoring with AWS CloudWatch, CloudTrail, and third-party tools, and set up metrics, dashboards, and alerts.
Infrastructure as Code (IaC): Write, maintain, and optimize Terraform templates / AWS CloudFormation / AWS CDK for infrastructure provisioning. Automate resource deployment across multiple environments (DEV, QA, UAT & Prod) and configuration management. Manage the infrastructure lifecycle through version-controlled code with modular and reusable IaC design.
License Management: Use AWS License Manager to track and enforce software license usage. Manage BYOL (Bring Your Own License) models for third-party tools like GraphDB. Integrate license tracking with AWS Systems Manager, EC2, and CloudWatch. Define custom license rules and monitor compliance across accounts using AWS Organizations.
Documentation & Governance: Create and maintain detailed architectural documentation. Participate in code and design reviews to ensure compliance with architectural standards. Establish architectural standards and best practices for scalability, security, and maintainability across development and operations teams.
Interpersonal Skills: Effective communication and collaboration with stakeholders to gather and understand technical and business requirements. Strong grasp of Agile and Scrum methodologies for iterative development and team coordination. Mentoring and guiding DevOps engineers while fostering a culture of continuous improvement and DevOps best practices.

Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. We'll support you with: Hybrid working opportunities. A diverse and inclusive culture.
Variety of learning & development opportunities. Attractive compensation package. Find out more about Siemens careers at www.siemens.com/careers
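To ground the monitoring and alerting duties in the Siemens role above, here is a minimal Python/boto3 sketch that creates a CloudWatch CPU alarm for one EC2 instance; the instance ID, threshold, and SNS topic ARN are placeholders.

import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str, region: str = "ap-south-1"):
    """Alarm when average CPU stays above 80% for two consecutive 5-minute periods."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],  # notify an SNS topic when the alarm fires
    )

if __name__ == "__main__":
    create_cpu_alarm(
        "i-0123456789abcdef0",                             # placeholder instance ID
        "arn:aws:sns:ap-south-1:111122223333:ops-alerts",  # placeholder SNS topic ARN
    )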
Posted 1 week ago
10.0 - 15.0 years
12 - 22 Lacs
Pune
Hybrid
So, what’s the role all about? The Senior Specialist Technical Support Engineer delivers technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization.

How will you make an impact? Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS. Interface with various R&D groups, customer support teams, business partners and customers globally to address CSS Recording and Compliance application-related product issues and resolve high-level issues. Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar. Collaborate with development and operations teams to identify root causes and implement long-term solutions. Provide technical guidance and mentorship to junior support engineers. Act as an escalation point for critical customer issues, ensuring timely resolution and communication. Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency. Participate in on-call rotations and incident response efforts.

Have you got what it takes? 10+ years of experience in technical support, site reliability engineering, or performance engineering roles. Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation. Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments. Strong knowledge of networking, load balancing, and distributed systems. Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Excellent communication and customer-facing skills.

Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer). Experience with observability platforms (e.g., Prometheus, Grafana, Splunk). Familiarity with CI/CD pipelines and DevOps practices. Experience working in ITIL or similar support frameworks.

What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7554
Reporting into: Tech Manager
Role Type: Individual Contributor
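The log-analysis work described in this support role (CloudWatch logs, metrics, and traces) often starts with a Logs Insights query. The Python/boto3 sketch below is illustrative only; the log group name and query are assumptions.

import time
import boto3
from datetime import datetime, timedelta, timezone

def recent_errors(log_group: str = "/app/recording-service", region: str = "us-east-1"):
    """Run a CloudWatch Logs Insights query for ERROR lines in the last hour."""
    logs = boto3.client("logs", region_name=region)
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=1)
    query_id = logs.start_query(
        logGroupName=log_group,
        startTime=int(start.timestamp()),
        endTime=int(end.timestamp()),
        queryString="fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
    )["queryId"]
    while True:  # poll until the query finishes
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            return result["results"]
        time.sleep(1)

if __name__ == "__main__":
    for row in recent_errors():
        print(row)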
Posted 1 week ago
6.0 - 8.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Cross-functional collaboration: automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing; manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2-4 PM.
Posted 1 week ago
5.0 - 7.0 years
8 - 12 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for a highly skilled and motivated Senior Developer to join our team, with strong expertise in Python and deep experience in building intelligent agentic systems using AWS Bedrock Agents and AWS Q Workflows. This role focuses on building end-to-end agentic task assistance solutions that execute complex workflows and enable seamless orchestration across systems. You will play a key role in creating smart automation that bridges front-office interactions (customer-facing systems) with mid- and back-office operations (e.g., finance, fulfillment, compliance), empowering enterprise-grade digital transformation.

How will you make an impact? Design, develop, and maintain scalable full-stack applications using Python. Build intelligent task agents leveraging AWS Bedrock Agents to manage and automate multi-step workflows. Integrate and orchestrate AWS Q Workflows to handle complex, enterprise-level task execution and decision-making processes. Enable contextual task handoff between front-office and mid/back-office systems, ensuring smooth operational continuity. Collaborate closely with cross-functional teams including product, DevOps, and AI/ML engineers to deliver secure, efficient, and intelligent systems. Write clean, maintainable code and contribute to architecture and design decisions for highly available agentic systems. Monitor, debug, and optimize live systems and workflows to ensure robust performance at scale.

Have you got what it takes? 6+ years of full-stack development experience with strong hands-on skills in Python. Proven expertise in designing and deploying intelligent agents using AWS Bedrock Agents. Solid experience with AWS Q Workflows, including building and managing complex, automated workflow orchestration. Demonstrated ability to integrate AI-powered agents with enterprise systems and back-office applications. Experience building microservices and RESTful APIs within an AWS cloud-native architecture. Understanding of enterprise operations and workflow handoffs between business layers (front, mid, and back office). Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform or CloudFormation). Strong problem-solving skills, system thinking, and attention to detail.

What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX! At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
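To make the Bedrock Agents work above concrete, here is a hedged Python/boto3 sketch of invoking an agent and assembling its streamed response. This is not NICE's implementation; the agent ID, alias ID, session ID, and prompt are placeholders.

import boto3

def ask_agent(prompt: str, region: str = "us-east-1") -> str:
    """Invoke a Bedrock agent and return the concatenated text of its response stream."""
    runtime = boto3.client("bedrock-agent-runtime", region_name=region)
    response = runtime.invoke_agent(
        agentId="AGENT_ID_PLACEHOLDER",
        agentAliasId="ALIAS_ID_PLACEHOLDER",
        sessionId="demo-session-1",
        inputText=prompt,
    )
    parts = []
    for event in response["completion"]:  # the completion arrives as a stream of chunk events
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

if __name__ == "__main__":
    print(ask_agent("Summarise the open fulfillment tasks for order 1234"))  # hypothetical prompt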
Posted 1 week ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets. Key Responsibilities: Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and Streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. 
Required Skills and Experience: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Over 8 years of experience in data engineering. More than 3 years of experience with the AWS data ecosystem. Strong experience with PySpark, SQL, and Python. Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions. Familiarity with data modelling concepts, dimensional models, and data lake architectures. Experience with CI/CD, GitHub Actions, CloudFormation/Terraform. Understanding of data governance, privacy, and security best practices. Strong problem-solving and communication skills.

Preferred Skills and Experience: Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows.

Being You: Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect: With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
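As an illustration of the data-lake querying named in this role (Athena over S3), the following hedged Python/boto3 sketch runs a query and waits for it to finish; the database, table, and results bucket are placeholders.

import time
import boto3

def run_athena_query(sql: str, database: str, output_s3: str, region: str = "ap-south-1"):
    """Start an Athena query, poll until it completes, and return the result rows."""
    athena = boto3.client("athena", region_name=region)
    execution_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]

if __name__ == "__main__":
    rows = run_athena_query(
        "SELECT event_date, COUNT(*) AS events FROM clickstream GROUP BY event_date LIMIT 10",
        database="analytics_lake",                 # placeholder Glue database
        output_s3="s3://example-athena-results/",  # placeholder results bucket
    )
    print(rows)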
Posted 1 week ago
12.0 - 15.0 years
35 - 60 Lacs
Chennai
Work from Office
AWS Solution Architect: Experience in driving the enterprise architecture for large commercial customers. Experience in healthcare enterprise transformation. Prior experience in architecting cloud-first applications. Experience leading a customer through a migration journey and proposing competing views to drive a mutual solution. Knowledge of cloud architecture concepts. Knowledge of application deployment and data migration. Ability to design high-availability applications on AWS across availability zones and regions. Ability to design applications on AWS taking advantage of disaster recovery design guidelines. Design, implement, and maintain streaming solutions using AWS Managed Streaming for Apache Kafka (MSK). Monitor and manage Kafka clusters to ensure optimal performance, scalability, and uptime. Configure and fine-tune MSK clusters, including partitioning strategies, replication, and retention policies. Analyze and optimize the performance of Kafka clusters and streaming pipelines to meet high-throughput and low-latency requirements. Design and implement data integration solutions to stream data between various sources and targets using MSK. Lead data transformation and enrichment processes to ensure data quality and consistency in streaming applications.

Mandatory Technical Skillset: AWS architectural concepts: designing, implementing, and managing cloud infrastructure. AWS services (EC2, S3, VPC, Lambda, ELB, Route 53, Glue, RDS, DynamoDB, Postgres, Aurora, API Gateway, CloudFormation, etc.). Kafka / Amazon MSK.

Domain Experience: Healthcare domain experience is required; Blues experience is preferred.

Location – Pan India
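For the MSK streaming design work above, a producer is the smallest building block. This is a hedged Python sketch using the kafka-python library with placeholder broker and topic names; a real MSK cluster would additionally need the appropriate TLS or IAM security settings.

import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["b-1.example-msk.amazonaws.com:9092"],  # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # JSON-encode each event
)

# Send a single illustrative claim event to a placeholder topic, then flush.
producer.send("claims-events", {"claim_id": "C-1001", "status": "ADJUDICATED"})
producer.flush()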
Posted 1 week ago
7.0 - 8.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 7 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Cross-functional collaboration: automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring & logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing; manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2-4 PM.
Posted 1 week ago
3.0 - 5.0 years
50 - 55 Lacs
Bengaluru
Work from Office
About the Opportunity
Job Type: Application | 31 July 2025 | Title: Senior Analyst Programmer | Department: Technology - Corporate Enablers (CFO & CE Technology) | Location: Bangalore, India | Reports To: Senior Manager
Department Overview: The CFO and CE Cloud Technology function provides systems development, implementation, and support services for FIL's Corporate Enablers team. We support several functions spanning Business Finance & Management Accounting, Financial Accounting & Analytics, Taxation, Global Procurement, Corporate Treasury, and several other teams in all of FIL's international locations, including the UK, Japan, China, and India. We provide IT services to the Fidelity International businesses globally. These include development and support of business functions that underpin our financial accounting and decision making for the global CFO organisation, and we implement multiple systems including ERP platforms, home-grown apps, and third-party products. We are system providers to key process lifecycles such as Procure to Pay (P2P/Global Procurement), Record to Report (R2R), Order to Cash (O2C), and Acquire to Retire (A2R). We also manage systems that enable cash management, forex trading, and treasury operations across the globe. We own warehouses that consolidate data from across the organisation's functions to provide meaningful insights.
We are seeking a skilled and experienced Python Developer to join our team. The ideal candidate will have a strong background in API development and PL/SQL stored procedures, along with a good understanding of Kubernetes, AWS, and SnapLogic cloud-native technologies. This role requires deep technical expertise and the ability to work in a dynamic and fast-paced environment.
Essential Skills (must-have technical skills): Knowledge of the latest Python frameworks and technologies (e.g., Django, Flask, FastAPI). Experience with Python libraries and tools (e.g., Pandas, NumPy, SQLAlchemy). Strong experience in designing, developing, and maintaining RESTful APIs. Familiarity with API security, authentication, and authorization mechanisms (e.g., OAuth, JWT). Good experience and hands-on knowledge of PL/SQL (packages/functions/ref cursors). Experience in development and low-level design of warehouse solutions. Familiarity with Data Warehouse, Datamart, and ODS concepts. Knowledge of data normalisation and Oracle performance optimisation techniques.
Good-to-have technical skills: Hands-on experience with Kubernetes for container orchestration. Knowledge of deploying, managing, and scaling applications on Kubernetes clusters. Proficiency in AWS services (e.g., EC2, S3, RDS, Lambda). Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation). Experience with the SnapLogic cloud-native integration platform. Ability to design and implement integration pipelines using SnapLogic.
Key Responsibilities: Develop and maintain high-quality Python code for API services. Design and implement containerized applications using Kubernetes. Utilize AWS services for cloud infrastructure and deployment. Create and manage integration pipelines using SnapLogic. Write and optimize PL/SQL stored procedures for database operations. Collaborate with cross-functional teams to deliver high-impact solutions. Ensure code quality, security, and performance through best practices.
Experience and Qualification: B.E./B.Tech. or M.C.A. in Computer Science from a reputed university. Total 5 to 7 years of experience with application development in Python and API development, along with Oracle RDBMS, SQL, and PL/SQL.
Personal Characteristics: Excellent communication skills, both verbal and written. Strong interest in technology and its applications. Self-motivated team player. Ability to work under pressure and meet deadlines.
Feel rewarded: For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working, and how you could build your future here, visit careers.fidelityinternational.com.
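As one illustration of the API development this role centres on, below is a minimal FastAPI sketch exposing a single read endpoint backed by an in-memory store. The route, model, and field names are hypothetical, not part of the actual platform; in production the lookup would typically call into an Oracle PL/SQL package.

```python
# Minimal sketch: a FastAPI service with one read endpoint (hypothetical model/route names).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-positions-api")


class Position(BaseModel):
    account_id: str
    instrument: str
    quantity: float


# Stand-in for an Oracle-backed repository (e.g., a PL/SQL package call in production).
_POSITIONS = {
    "ACC-001": Position(account_id="ACC-001", instrument="FUND-X", quantity=120.5),
}


@app.get("/positions/{account_id}", response_model=Position)
def get_position(account_id: str) -> Position:
    """Return the position for an account, or 404 if it is unknown."""
    position = _POSITIONS.get(account_id)
    if position is None:
        raise HTTPException(status_code=404, detail="account not found")
    return position

# Run locally with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```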
Posted 1 week ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Overview: Develop customized solutions within the Salesforce platform to support critical business functions and meet project requirements, owning end-to-end Salesforce user story development work. Maintain a flexible and proactive work environment to facilitate a quick response to changing project requirements. Collaborate with business, managers, and end users as necessary to analyze development objectives and capability requirements, including specifications for user stories, user interfaces, customized applications, components, and interactions with internal Salesforce instances. Provide system administration and development support for internal and customer-facing Salesforce environments, especially related to customized applications, user permissions, security settings, custom objects, and workflow.
Responsibilities: Attend daily scrum calls with business and project teams. Discuss and understand Salesforce Service Cloud development needs aligned with the business objectives. Own the assigned user stories for end-to-end development and testing. Meet the sprint assignments within the target timeline. Follow best practices for coding, development, and configuration of the platform. Log daily updates in ADO and keep requirements documents up to date for any new requirements or change requests. Keep track of issues and get them resolved while working with other stakeholders. Leverage Agentforce for Developers to improve build quality and speed. Work closely with stakeholders to identify goals, develop best practices for development, and analyze current processes to determine what can be improved to achieve the desired outcome. Define configuration specifications and business analysis requirements, perform quality assurance, and define reporting and alerting requirements. Gather and understand requirements for the full life cycle of cases and the user journey. Help document and maintain system processes, report on common sources of issues or questions, and make recommendations to the project team. Communicate key insights and findings to the project team.
Qualifications: Bachelor's degree in IT, Computer Science, or equivalent with 10+ years of IT experience, including 6+ years as a Salesforce Developer. Thorough knowledge of Salesforce programming and of standard and custom builds. Well-versed in Agile methodologies and processes. Attention to detail and experience in gathering requirements. Ability and desire to work with a high degree of independence and ownership in a geographically distributed team consisting of other developers and project management resources. Excellent written and verbal communication skills. Good to have: 1+ year of experience with Agentforce for Developers.
Posted 1 week ago
3.0 - 5.0 years
3 - 6 Lacs
Pune
Work from Office
What You'll Do: CI/CD Pipeline Management: Design, implement, and maintain robust CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps, CircleCI) to automate the build, test, and deployment processes across various environments (Dev, QA, Staging, Production). Infrastructure as Code (IaC): Develop and manage infrastructure using IaC tools (e.g., Terraform, Ansible, CloudFormation, Puppet, Chef) to ensure consistency, repeatability, and scalability of our cloud and on-premise environments. Cloud Platform Management: Administer, monitor, and optimize resources on cloud platforms (e.g., AWS, Azure, GCP), including compute, storage, networking, and security services. Containerization & Orchestration: Implement and manage containerization technologies (e.g., Docker) and orchestration platforms (e.g., Kubernetes) for efficient application deployment, scaling, and management. Monitoring & Alerting: Set up and maintain comprehensive monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK Stack, Nagios, Splunk, Datadog) to proactively identify and resolve performance bottlenecks and issues. Scripting & Automation: Write and maintain scripts (e.g., Python, Bash, PowerShell, Go, Ruby) to automate repetitive tasks, improve operational efficiency, and integrate various tools. Version Control: Manage source code repositories (e.g., Git, GitHub, GitLab, Bitbucket) and implement branching strategies to facilitate collaborative development and version control. Security & Compliance (DevSecOps): Integrate security best practices into the CI/CD pipeline and infrastructure, ensuring compliance with relevant security policies and industry standards. Troubleshooting & Support: Provide Level 2 support, perform root cause analysis for production incidents, and collaborate with development teams to implement timely fixes and preventive measures. Collaboration: Work closely with software developers, QA engineers, and other stakeholders to understand their needs, provide technical guidance, and foster a collaborative and efficient development lifecycle. Documentation: Create and maintain detailed documentation for infrastructure, processes, and tools.
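To make the monitoring and alerting item above concrete, here is a minimal Python sketch that exposes application metrics for Prometheus to scrape using the prometheus_client library; the metric names, labels, and port are illustrative assumptions, not part of this posting.

```python
# Minimal sketch: expose request metrics for Prometheus scraping (illustrative names/port).
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request() -> None:
    """Simulate a unit of work and record its outcome and latency."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    status = "500" if random.random() < 0.05 else "200"
    REQUESTS.labels(status=status).inc()


if __name__ == "__main__":
    start_http_server(9100)  # metrics exposed at http://localhost:9100/metrics
    while True:
        handle_request()
```

A Prometheus scrape job pointed at port 9100 would then feed Grafana dashboards and alert rules of the kind described above.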
Posted 1 week ago
4.0 - 7.0 years
0 - 0 Lacs
Pune
Hybrid
So, what’s the role all about? You will be responsible for designing, implementing, and managing the infrastructure and automation processes that enable seamless software delivery and system reliability at scale. This role involves leading DevOps initiatives, optimizing cloud environments, and collaborating across development, QA, and operations teams to enhance efficiency, security, and scalability.
How will you make an impact? Lead DevOps Strategy: Design and execute DevOps practices, focusing on automation, scalability, and continuous improvement of development pipelines and infrastructure. Build and Manage CI/CD Pipelines: Develop and maintain highly efficient CI/CD pipelines to streamline the software release process. Infrastructure Automation: Implement and manage infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation. Cloud Infrastructure Management: Optimize and manage cloud environments (AWS, Azure, GCP), ensuring high availability and security. Monitoring and Performance: Build and maintain monitoring solutions to ensure system reliability, performance, and early issue detection. Collaboration and Mentorship: Collaborate with cross-functional teams, providing guidance on best practices for automation, security, and deployment. Security and Compliance: Ensure compliance with security standards and implement DevSecOps practices for secure software delivery.
Have you got what it takes? Bachelor’s degree in Computer Science, or equivalent. Development of user-interface capabilities and software functionality, including design, implementation, developer-level tests, and support. Taking part in developing a complex web-based architecture for large organizations. Developing software features according to design documents and enterprise software standards. Designing and developing for deployment on multiple platforms, databases, and application servers. Working and collaborating in multi-disciplinary Agile teams, adopting the Agile spirit, methodology, and tools. Guiding and assisting team members with UI development activities. Interfacing with various R&D groups and with support tiers.
You will have an advantage if you also have: Experience with/knowledge of agile development processes.
What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
Requisition ID: 7568 Reporting into: Tech Manager Role Type: Individual Contributor
Posted 1 week ago
5.0 - 8.0 years
15 - 19 Lacs
Pune
Hybrid
So, what’s the role all about? We are seeking a skilled and experienced DevOps Engineer to design, produce, and test high-quality software that meets specified functional and non-functional requirements within the given time and resource constraints.
How will you make an impact? Design, implement, and maintain CI/CD pipelines using Jenkins to support automated builds, testing, and deployments. Manage and optimize AWS infrastructure for scalability, reliability, and cost-effectiveness. Develop automation scripts and tools using shell scripting and other programming languages to streamline operational workflows. Collaborate with cross-functional teams (Development, QA, Operations) to ensure seamless software delivery and deployment. Monitor and troubleshoot infrastructure, build failures, and deployment issues to ensure high availability and performance. Implement and maintain robust configuration management practices and infrastructure-as-code principles. Document processes, systems, and configurations to ensure knowledge sharing and maintain operational consistency. Perform ongoing maintenance and upgrades (production and non-production). Occasional weekend or after-hours work as needed.
Have you got what it takes? Experience: 5-8 years in DevOps or a similar role. Cloud Expertise: Proficient in AWS services such as EC2, S3, RDS, Lambda, IAM, CloudFormation, or similar. CI/CD Tools: Hands-on experience with Jenkins pipelines (declarative and scripted). Scripting Skills: Proficiency in either shell scripting or PowerShell. Programming Knowledge: Familiarity with at least one programming language (e.g., Python, Java, or Go). Important: scripting/programming is integral to this role and will be a key focus in the interview process. Version Control: Experience with Git and Git-based workflows. Monitoring Tools: Familiarity with tools like CloudWatch, Prometheus, or similar. Problem-solving: Strong analytical and troubleshooting skills in a fast-paced environment. Knowledge of AWS CDK.
You will have an advantage if you also have: Prior experience in development or automation is a significant advantage. Windows system administration is a significant advantage. Experience with monitoring and log analysis tools is an advantage. Jenkins pipeline knowledge.
What’s in it for you? Join an ever-growing, market disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 7318 Reporting into: Tech Manager Role Type: Individual Contributor
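A minimal sketch of the kind of monitoring and troubleshooting script this role mentions: listing CloudWatch alarms currently in the ALARM state with boto3. The region default is an assumption.

```python
# Minimal sketch: report CloudWatch alarms currently firing (region is an assumption).
import boto3


def alarms_in_alarm_state(region: str = "us-east-1") -> list[str]:
    """Return the names of all CloudWatch metric alarms currently in the ALARM state."""
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    names: list[str] = []
    paginator = cloudwatch.get_paginator("describe_alarms")
    for page in paginator.paginate(StateValue="ALARM"):
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    return names


if __name__ == "__main__":
    firing = alarms_in_alarm_state()
    print(f"{len(firing)} alarm(s) firing: {firing}")
```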
Posted 1 week ago
8.0 - 13.0 years
0 - 0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Job Title: Site Reliability Engineer (SRE) / Observability Engineer. Shift Type: Rotational shifts, including night shifts and weekend availability. Experience: 7 years.
Job Summary: We are looking for a skilled and adaptable Site Reliability Engineer (SRE) / Observability Engineer to join our dynamic project team. The ideal candidate will play a critical role in ensuring system reliability, scalability, observability, and performance while collaborating closely with development and operations teams. This position requires strong technical expertise, problem-solving abilities, and a commitment to 24/7 operational excellence.
Key Responsibilities - Site Reliability Engineering: Design, build, and maintain scalable and reliable infrastructure. Automate system provisioning and configuration using tools like Terraform, Ansible, Chef, or Puppet. Develop tools and scripts in Python, Go, Java, or Bash for automation and monitoring. Administer and optimize Linux/Unix systems with a strong understanding of TCP/IP, DNS, load balancers, and firewalls. Implement and manage cloud infrastructure across AWS or Kubernetes. Maintain and enhance CI/CD pipelines using tools like Jenkins and ArgoCD. Monitor systems using Prometheus, Grafana, Nagios, or Datadog and respond to incidents efficiently. Conduct postmortems and define SLAs/SLOs for system reliability and performance. Plan for capacity and performance using benchmarking tools and implement autoscaling and failover systems.
Observability Engineering: Instrument services with relevant metrics, logs, and traces using OpenTelemetry, Prometheus, Jaeger, Zipkin, etc. Build and manage observability pipelines using Grafana, ELK Stack, Splunk, Datadog, or Honeycomb. Work with time-series databases (e.g., InfluxDB, Prometheus) and log aggregation platforms. Design actionable alerts and dashboards to improve system observability and reduce alert fatigue. Partner with developers to promote observability best practices and define key performance indicators (KPIs).
Required Skills and Qualifications: Proven experience as an SRE or Observability Engineer in complex production environments. Hands-on expertise in Linux/Unix systems and cloud infrastructure (AWS/Kubernetes). Strong programming and scripting skills in Python, Go, Bash, or Java. Deep understanding of monitoring, logging, and alerting systems. Experience with modern Infrastructure as Code and CI/CD practices. Ability to analyze and troubleshoot production issues in real time. Excellent communication skills to collaborate with cross-functional teams and stakeholders. Flexibility to work in rotational shifts, including night shifts and weekends, as required by project demands. A proactive mindset with a focus on continuous improvement and reliability.
Mandatory Skills: Ansible, AWS Automation Services, AWS CloudFormation, AWS CodePipeline, AWS CodeDeploy, AWS DevOps Services
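As an illustration of the instrumentation work described above, here is a minimal OpenTelemetry tracing sketch in Python that emits spans to the console; swapping the console exporter for an OTLP or Jaeger backend is the usual production step. The service name, span name, and attribute are assumptions for the example.

```python
# Minimal sketch: instrument a function with OpenTelemetry spans (console exporter for demo).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-demo"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)


def process_order(order_id: str) -> None:
    """Wrap a unit of work in a span and attach searchable attributes."""
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... real work (DB calls, downstream HTTP requests) would be traced here ...


if __name__ == "__main__":
    process_order("ORD-42")
    provider.shutdown()  # flush pending spans before exit
```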
Posted 1 week ago
5.0 - 7.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.
Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 5 to 7+ years of experience in full-stack development, with a strong focus on DevOps.
DevOps with AWS Data Engineer - Roles & Responsibilities: Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53. Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation. Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD. Automate build, test, and deployment processes for Java applications. Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments. Containerize Java apps using Docker, and deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate. Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing. Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits. Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules. Implement Disaster Recovery (DR) strategies. Work closely with development teams to integrate DevOps practices. Document pipelines, architecture, and troubleshooting runbooks. Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.
Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java. Configuring and managing databases such as MySQL and MongoDB. Excellent troubleshooting skills. Selecting and deploying appropriate CI/CD tools. Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. The technical skills to review, verify, and validate the software code developed in the project.
Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states. Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034. Time: 2 - 4 pm
Posted 1 week ago
3.0 - 5.0 years
3 - 8 Lacs
Noida
Work from Office
Role: Azure DevOps Engineer. Skillset: Azure DevOps, CI/CD, Kubernetes, Terraform. Experience: 4-6 years. Location: Noida and Chennai.
Requirements: Automation-first mindset and hands-on in the primary knowledge domain. Deep understanding of the DevOps framework, tools, and metrics for build, test, and deployment automation – configuration management, scalable modes of infrastructure deployment, and best practices. Able to assess the client's existing maturity in the Cloud and DevOps space, perform gap analysis, and provide a baseline solution. Hands-on with scripting languages like Unix shell scripting or PowerShell. Hands-on experience with CI/CD pipelines and proficient with tools like GitLab, Git, and Azure DevOps. Hands-on experience with configuration management tools like Ansible or Puppet. Hands-on experience with containerized deployment and orchestration using AKS (Azure Kubernetes Service). Hands-on experience with infrastructure-as-code tools like Terraform, CloudFormation, or ARM templates. Previous application development (Java/.NET) and system administration experience in Linux. Strong experience with project management methodologies and frameworks (Waterfall, Scrum, Kanban). Able to perform client-facing technical consultancy roles, contribute to the knowledge base, work in a Team Lead role for small to medium sized teams, etc. Experience in implementing monitoring and logging solutions for any cloud environment, including containerized environments.
Nice to have: Certifications in DevOps and Kubernetes technologies.
Technologies: DevOps principles and source code branching strategies. Scripting languages - Bash, PowerShell, etc. Docker. Native Kubernetes and a managed Kubernetes service (AKS). IaC tools – Terraform, ARM templates, CloudFormation. Configuration management tools – Ansible, Puppet. CI/CD – GitLab, Azure Pipelines. Linux – command-line and directory structure exposure; system administration experience in Linux. Code quality tool - SonarQube. Knowledge of Azure cloud services such as Storage, Compute, Networking, and Database.
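A minimal sketch of the containerized-deployment side of this role: listing Deployments in an AKS (or any Kubernetes) cluster with the official Python client, assuming a working kubeconfig is already in place. The namespace is an assumption.

```python
# Minimal sketch: list Deployments and their replica status via the Kubernetes Python client.
from kubernetes import client, config


def list_deployments(namespace: str = "default") -> None:
    """Print each Deployment in the namespace with ready/desired replica counts."""
    config.load_kube_config()  # uses the local kubeconfig (e.g., from `az aks get-credentials`)
    apps = client.AppsV1Api()
    for deployment in apps.list_namespaced_deployment(namespace).items:
        desired = deployment.spec.replicas or 0
        ready = deployment.status.ready_replicas or 0
        print(f"{deployment.metadata.name}: {ready}/{desired} replicas ready")


if __name__ == "__main__":
    list_deployments()
```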
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Since its inception in 2003, driven by visionary college students transforming online rent payment, Entrata has evolved into a global leader serving property owners, managers, and residents. Honored with prestigious awards like the Utah Business Fast 50, Silicon Slopes Hall of Fame - Software Company - 2022, and the Women Tech Council Shatter List, our comprehensive software suite spans rent payments, insurance, leasing, maintenance, marketing, and communication tools, reshaping property management worldwide. Our 2200+ global team members embody intelligence and adaptability, engaging actively from top executives to part-time employees. With offices across Utah, Texas, India, Israel, and the Netherlands, Entrata blends startup innovation with established stability, evident in our transparent communication values and executive town halls. Our product isn't just desirable; it's industry essential. At Entrata, we passionately refine living experiences and uphold collective excellence.
Job Summary: Entrata Software is seeking a DevOps Engineer to join our R&D team in Pune, India. This role will focus on automating infrastructure, streamlining CI/CD pipelines, and optimizing cloud-based deployments to improve software delivery and system reliability. The ideal candidate will have expertise in Kubernetes, AWS, Terraform, and automation tools to enhance scalability, security, and observability. Success in this role requires strong problem-solving skills, collaboration with development and security teams, and a commitment to continuous improvement. If you thrive in fast-paced, Agile environments and enjoy solving complex infrastructure challenges, we encourage you to apply!
Key Responsibilities: Design, implement, and maintain CI/CD pipelines using Jenkins, GitHub Actions, and ArgoCD to enable seamless, automated software deployments. Deploy, manage, and optimize Kubernetes clusters in AWS, ensuring reliability, scalability, and security. Automate infrastructure provisioning and configuration using Terraform, CloudFormation, Ansible, and scripting languages like Bash, Python, and PHP. Monitor and enhance system observability using Prometheus, Grafana, and the ELK Stack to ensure proactive issue detection and resolution. Implement DevSecOps best practices by integrating security scanning, compliance automation, and vulnerability management into CI/CD workflows. Troubleshoot and resolve cloud infrastructure, networking, and deployment issues in a timely and efficient manner. Collaborate with development, security, and IT teams to align DevOps practices with business and engineering objectives. Optimize AWS cloud resource utilization and cost while maintaining high availability and performance. Establish and maintain disaster recovery and high-availability strategies to ensure system resilience. Improve incident response and on-call processes by following SRE principles and automating issue resolution. Promote a culture of automation and continuous improvement, identifying and eliminating manual inefficiencies in development and operations. Stay up to date with emerging DevOps tools and trends, implementing best practices to enhance processes and technologies. Ensure compliance with security and industry standards, enforcing governance policies across cloud infrastructure. Support developer productivity by providing self-service infrastructure and deployment automation to accelerate the software development lifecycle.
Document processes, best practices, and troubleshooting guides to ensure clear knowledge sharing across teams.
Minimum Qualifications: 3+ years of experience as a DevOps Engineer or in a similar role. Strong proficiency in Kubernetes, Docker, and AWS. Hands-on experience with Terraform, CloudFormation, and CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, ArgoCD). Solid scripting and automation skills with Bash, Python, PHP, or Ansible. Expertise in monitoring and logging tools such as New Relic, Prometheus, Grafana, and the ELK Stack. Understanding of DevSecOps principles, security best practices, and vulnerability management. Strong problem-solving skills and the ability to troubleshoot cloud infrastructure and deployment issues effectively.
Preferred Qualifications: Experience with GitOps methodologies using ArgoCD or Flux. Familiarity with SRE principles and managing incident response for high-availability applications. Knowledge of serverless architectures and AWS cost optimization strategies. Hands-on experience with compliance and governance automation for cloud security. Previous experience working in Agile, fast-paced environments with a focus on DevOps transformation. Strong communication skills and the ability to mentor junior engineers on DevOps best practices.
If you're passionate about automation, cloud infrastructure, and building scalable DevOps solutions, we encourage you to apply!
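As a small example of the cost-optimization work listed above, here is a boto3 sketch that pulls month-to-date unblended cost per service from AWS Cost Explorer; the date range and granularity are assumptions for the example.

```python
# Minimal sketch: month-to-date AWS cost per service via Cost Explorer (dates are assumptions).
from datetime import date

import boto3


def cost_by_service() -> dict[str, float]:
    """Return month-to-date unblended cost grouped by AWS service."""
    ce = boto3.client("ce")
    today = date.today()
    start = today.replace(day=1).isoformat()  # note: Start must be before End
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs: dict[str, float] = {}
    for group in response["ResultsByTime"][0]["Groups"]:
        service = group["Keys"][0]
        costs[service] = float(group["Metrics"]["UnblendedCost"]["Amount"])
    return costs


if __name__ == "__main__":
    for service, amount in sorted(cost_by_service().items(), key=lambda kv: -kv[1]):
        print(f"{service}: ${amount:,.2f}")
```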
Posted 1 week ago
6.0 - 9.0 years
7 - 11 Lacs
Hyderabad
Work from Office
As a Senior DevOps Engineer, you will be responsible for enhancing and integrating DevOps practices into our development and operational processes. You will work collaboratively with software development, quality assurance, and IT operations teams to implement CI/CD pipelines, automate workflows, and improve deployment processes to ensure high-quality software delivery.
Key Responsibilities: Design and implement CI/CD pipelines to automate build, test, and deployment processes. Collaborate with development and operations teams to improve existing DevOps practices and workflows. Deploy and manage container orchestration platforms such as Kubernetes and Docker. Monitor system performance and troubleshoot issues to ensure high availability and reliability. Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation. Participate in incident response and root cause analysis activities. Establish best practices for DevOps processes, security, and compliance.
Qualifications and Experience: Bachelor's degree with a DevOps certification. 7+ years of experience in a DevOps or related role. Proficiency in cloud platforms such as AWS, Azure, or Google Cloud. Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI. Development (Java, Python, etc.) - Advanced. Kubernetes usage and administration - Advanced. AI - Intermediate. CI/CD development - Advanced. Strong collaboration and communication skills.
Posted 1 week ago
5.0 - 7.0 years
7 - 11 Lacs
Kochi
Work from Office
Job Title: L2 Cloud Operations Engineer. Location: Kochi, Coimbatore, Trivandrum. Must-have skills: installation, configuration, and management of Linux/Windows systems. Good-to-have skills: JIRA/Confluence.
Job Summary: As an L2 Cloud Operations Engineer, you will operate an e-commerce solution built on on-prem and cloud infrastructure. You will be involved in maintaining and improving the client's business platforms and will be responsible for site reliability and platform stability. You will be expected to respond to incidents, support problem resolution, execute changes, and be part of projects to improve or re-engineer the platform.
Roles and Responsibilities: Continuous monitoring of the platform's performance and uptime. Fast identification and resolution of incidents. Resolution of service requests. Managing the platform configuration to ensure it is optimized and up to date. Improving efficiency by automating routine tasks.
Professional and Technical Skills: You must have strong technical aptitude and an organized, process-driven work ethic. 3.5-5 years of relevant experience with installation, configuration, and management of Linux/Windows systems. Strong working experience in managing and maintaining public clouds like AWS, Azure, or GCP. Strong experience in setting up and configuring monitoring tools like Prometheus, Grafana, Zabbix, etc. Strong experience with installation/configuration of Java application servers such as JBoss, WebLogic, and Tomcat, and with analyzing application logs and GC logs to troubleshoot performance and functional issues. Hands-on experience with cloud provisioning tools like Terraform/CloudFormation will be an added advantage. Hands-on experience with Docker/Kubernetes will be an added advantage. Experience with ELK/Kafka/OpenShift/Python scripting will be an added advantage. Good knowledge of SQL and NoSQL databases like MySQL, Oracle, PostgreSQL, DynamoDB, MongoDB, Cassandra, and Redis. Strong written and verbal communication skills and a track record of providing high customer satisfaction. Develop automation scripts as needed to enhance operational efficiency. Prior experience supporting Jira/Confluence or any other service management tool. Prior experience working in an Agile environment will be an advantage.
Qualification and Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.
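A minimal sketch of the log-analysis automation mentioned above: scanning an application server log (Tomcat/JBoss style) for error lines and summarizing the most frequent messages. The log path and the error patterns are assumptions for the example.

```python
# Minimal sketch: summarize ERROR lines in an application server log (path/patterns assumed).
import re
from collections import Counter
from pathlib import Path

ERROR_LINE = re.compile(r"\b(ERROR|SEVERE)\b.*?:\s*(?P<message>.+)$")


def summarize_errors(log_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count the most frequent error messages in a log file."""
    counts: Counter[str] = Counter()
    for line in Path(log_path).read_text(errors="replace").splitlines():
        match = ERROR_LINE.search(line)
        if match:
            counts[match.group("message").strip()] += 1
    return counts.most_common(top_n)


if __name__ == "__main__":
    for message, count in summarize_errors("/var/log/tomcat/catalina.out"):  # assumed path
        print(f"{count:>5}  {message}")
```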
Posted 1 week ago
7.0 - 12.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead. Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must-have skills: AWS Architecture. Good-to-have skills: NA. Minimum 7.5 years of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. Your role will involve collaborating with cross-functional teams, managing the team's performance, and making key decisions. With your expertise in AWS Architecture, you will provide innovative solutions to problems and contribute to the success of multiple teams.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Lead the effort to design, build, and configure applications. Act as the primary point of contact for application-related matters. Oversee the entire application development process. Collaborate with cross-functional teams. Manage the team's performance. Make key decisions to ensure successful implementation. Provide innovative solutions to problems. Contribute to the success of multiple teams.
Professional & Technical Skills: Must-have: Proficiency in AWS Architecture. Strong understanding of cloud computing principles and best practices. Experience in designing and implementing scalable and secure AWS solutions. Knowledge of AWS services such as EC2, S3, Lambda, and RDS. Hands-on experience with infrastructure as code tools like CloudFormation or Terraform. Good to have: Experience with DevOps practices and tools. Familiarity with other cloud platforms like Azure or Google Cloud. Solid grasp of software development methodologies and practices.
Additional Information: The candidate should have a minimum of 7.5 years of experience in AWS Architecture. This position is based in Gurugram. A 15-year full-time education is required. Qualification: 15 years of full-time education.
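For illustration of the hands-on infrastructure-as-code experience this role asks for, here is a boto3 sketch that creates a CloudFormation stack from an inline template and waits for it to complete; the stack name, bucket name, and template are illustrative only.

```python
# Minimal sketch: create a CloudFormation stack from an inline template (illustrative only).
import json

import boto3

TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-app-artifacts-bucket"},  # assumed name
        }
    },
}


def create_stack(stack_name: str) -> None:
    """Create the stack and block until CloudFormation reports CREATE_COMPLETE."""
    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName=stack_name, TemplateBody=json.dumps(TEMPLATE))
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
    print(f"stack {stack_name} created")


if __name__ == "__main__":
    create_stack("example-app-stack")
```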
Posted 1 week ago