Jobs
Interviews

17543 Terraform Jobs - Page 30

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 322746BR
Job Type: Full Time

Your role
Are you innovative and passionate about building secure and reliable solutions? We are looking for Data Engineers and DevSecOps Engineers to join our team in building the Enterprise Data Mesh at UBS. We are open to adapting the role to suit your career aspirations and skillset.

Responsibilities include:
- Design/document, develop, review, test, release and support Data Mesh components/platforms/environments.
- Contribute to agile ceremonies, e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives.
- Comply with the firm's applicable policies and processes.
- Collaborate with other teams and divisions using Data Mesh services, related guilds and other Data Mesh Services teams.
- Ensure delivery deadlines are met.

Your team
You will be part of a diverse global team of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO, working primarily in a local team with some interactions with other teams and divisions. We provide many services firmwide as part of our Data Mesh strategy to automate and scale data management, improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations and ontology models.

Your expertise
Experience in the following (or similar transferable skills):
- Hands-on delivery in any of the following (or related): data transformations, Spark, Python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development.
- Agile software practices and tools, performance testing, unit and integration testing.
- Identifying root causes and designing and implementing the solution.
- Collaborating with other teams to achieve common goals.
- Learning and reskilling in new technologies.

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
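The technology list above names Great Expectations for data quality. As an illustration only (the dataset, column names and checks are invented, and plain pandas stands in for the library), the kind of validation such a role automates looks like this:

```python
import pandas as pd

# Hypothetical trade records; names and values are illustrative only.
df = pd.DataFrame({
    "trade_id": [1, 2, 3, 4],
    "notional": [100.0, 250.5, None, 75.0],
    "currency": ["USD", "EUR", "USD", "GBP"],
})

def column_completeness(frame, column):
    """Share of non-null values in a column (1.0 means fully populated)."""
    return frame[column].notna().mean()

def column_in_set(frame, column, allowed):
    """True if every non-null value falls inside the allowed set."""
    return bool(frame[column].dropna().isin(allowed).all())

completeness = column_completeness(df, "notional")   # one null out of four
currencies_ok = column_in_set(df, "currency", {"USD", "EUR", "GBP"})
```

In a Data Mesh setting such checks would typically run inside the pipeline and feed monitoring, rather than being evaluated ad hoc.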

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 322766BR
Job Type: Full Time

Your role
Are you innovative and passionate about building secure and reliable solutions? We are looking for Tech Engineers specializing in either DevSecOps, Data Engineering or Full-Stack web development to join our team in building firmwide Data Observability components on Azure. We are open to adapting the role to suit your career aspirations and skillset.

Responsibilities include:
- Design/document, develop, review, test, release and support Data Observability components/platforms/environments.
- Contribute to agile ceremonies, e.g. daily stand-ups, backlog refinement, iteration planning, iteration reviews, retrospectives.
- Comply with the firm's applicable policies and processes.
- Collaborate with other teams and divisions using Data Observability services, related guilds and other Data Mesh Services teams.
- Ensure delivery deadlines are met.

Your team
You will be part of a diverse global team of data scientists, data engineers, full-stack developers, DevSecOps engineers and knowledge engineers within Group CTO, working primarily in a local team with some interactions with other teams and divisions. We provide Data Observability services as part of our firmwide Data Mesh strategy to automate and scale data management, improve time-to-market for data and reduce data downtime. We provide learning opportunities and a varied technology landscape. Technologies include Azure Cloud, AI (ML and GenAI models), web user interface (React), data storage (Postgres, Azure), REST APIs, Kafka, Great Expectations and ontology models.

Your expertise
Experience in the following (or similar transferable skills):
- Hands-on delivery in any of the following (or related): full-stack web development (e.g. React, APIs), data transformations, Spark, Python, database design and development in any database, CI/CD pipelines, security risk mitigation, infrastructure as code (e.g. Terraform), monitoring, Azure development.
- Agile software practices and tools, performance testing, unit and integration testing.
- Identifying root causes and designing and implementing the solution.
- Collaborating with other teams to achieve common goals.
- Learning and reskilling in new technologies.

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
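"Reducing data downtime" in a Data Observability service usually starts with freshness checks against an SLA. A minimal sketch, with an invented dataset registry and an assumed two-hour SLA (nothing here is UBS's actual design):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical registry: last successful load per dataset (illustrative).
last_loaded = {
    "trades": datetime.now(timezone.utc) - timedelta(minutes=30),
    "positions": datetime.now(timezone.utc) - timedelta(hours=5),
}

FRESHNESS_SLA = timedelta(hours=2)  # assumed threshold for this sketch

def stale_datasets(registry, sla, now=None):
    """Names of datasets whose last successful load breaches the SLA."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, loaded in registry.items() if now - loaded > sla)

breaches = stale_datasets(last_loaded, FRESHNESS_SLA)  # ['positions']
```

A production version would read load timestamps from a metadata store and emit alerts (e.g. via Kafka) instead of returning a list.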

Posted 6 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary
Responsible for developing and deploying machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms. Translates application requirements into machine learning problem statements. Analyzes and evaluates solutions, both internally generated and third-party supplied. Develops novel ways to use machine learning to solve problems and discover new products. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as a resource for colleagues with less experience.

Job Description

About the Role:
We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You'll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.

What You'll Do
- Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference
- Engineer features to enrich alerts using service relationships, business context, change history, and topological data
- Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
- Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
- Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
- Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
- Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
- Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
- Conduct A/B testing, offline validation, and live performance monitoring of ML models
- Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers and leadership
- Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
- Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping
- Lead and support innovation efforts, including POCs, POVs, and exploration of emerging AI/ML tools and strategies
- Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
- Participate in on-call rotations and provide operational support as needed

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field
- 5+ years of experience building and deploying ML solutions in production environments
- 2+ years working with AIOps, observability, or real-time operations data
- Strong coding skills in Python (including pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow)
- Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
- Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
- Strong grasp of modern ML techniques including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
- Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
- Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
- Strong understanding of model evaluation techniques including precision/recall trade-offs, ROC, AUC, calibration
- Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
- Ability to collaborate effectively with SREs and platform teams, and participate in Agile/DevOps workflows
- Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
- Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice To Have
- Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.)
- Experience working in streaming media, OTT platforms, or large-scale consumer services
- Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
- Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
- Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
- Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
- Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
- Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
- Experience with containerized model deployment using Docker or Kubernetes
- Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
- Experience designing APIs in Python or Go to expose models as services
- Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
- Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
- Certifications in cloud architecture, ML engineering, or data science specializations

Creates deliverables such as Confluence pages, white papers, presentations, test results, technical manuals, formal recommendations and reports. Contributes to the company by creating patents and Application Programming Interfaces (APIs).

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That's why we provide an array of options, expert guidance and always-on tools, personalized to meet the needs of your reality, to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details.

Education
Bachelor's Degree
While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
5-7 Years
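Anomaly detection over operational metrics, as described in this role, can be sketched with scikit-learn. The synthetic latency/error-rate data and the choice of IsolationForest below are illustrative assumptions, not Comcast's actual method:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-minute metrics: [latency_ms, error_rate]; two injected
# spikes stand in for real operational anomalies.
rng = np.random.default_rng(42)
normal = rng.normal(loc=[100.0, 0.01], scale=[5.0, 0.005], size=(500, 2))
spikes = np.array([[180.0, 0.20], [250.0, 0.35]])
X = np.vstack([normal, spikes])

# Fit an isolation forest; contamination sets the expected anomaly share.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)        # -1 = anomaly, 1 = normal
scores = model.score_samples(X)  # lower = more anomalous, usable for alert scoring

flagged = np.where(labels == -1)[0]  # indices 500 and 501 should be among these
```

In an AIOps pipeline the `scores` would feed alert prioritization rather than a hard -1/1 cut, which is where entropy-based ranking of the kind mentioned above could come in.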

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Engineer, Cybersecurity

NielsenIQ is maturing its Product Security programs and is recruiting a Product Security Engineer who will be responsible for supporting the rollout of DevSecOps capabilities and practices across all geographies and business units. As the Product Security Engineer, you will be responsible for the integration, maintenance and analysis of the tools and technologies used in securing NIQ products/applications throughout their development. You will oversee application security capabilities within a multi-national matrixed environment. The Product Security Engineer will have the opportunity to replace the current Static and Dynamic Application Security Testing tools and advocate for the tech stack used for monitoring. This position will involve working closely with development/engineering teams, business units, and technical and non-technical stakeholders, educating them and driving the adoption and maturity of NIQ's Product & Application Security programs.

Responsibilities
- Collaborate within the Product Security Engineering and Cybersecurity teams to support delivery of their strategic initiatives.
- Work with engineering teams (Developers, SREs & QAs) to ensure that products are secure on delivery and implement the provided security capabilities.
- Actively contribute to building and maintaining the Product Security team's security tools and services, including integrating security tools into the CI/CD process.
- Report on security key performance indicators (KPIs) to drive improvements across engineering teams' security posture.
- Contribute to the Product Security Engineering team's security education program and become an advocate within the organization's DevSecOps and application security community of practice.
- Review IaaS/PaaS architecture roadmaps for the cloud and recommend baseline security controls and hardening requirements, supporting threat modelling of NIQ's products.

Qualifications
- 3+ years of experience working in a technical/hands-on application security, development, or DevOps professional environment.
- Working knowledge of the web stack, web security and common vulnerabilities (e.g. SQLi, XSS, and beyond).
- Good coding experience (Python is most desirable, or a similar programming language).
- Experience deploying containers using CI/CD pipeline tools like GitHub Actions, GitLab Pipelines, Jenkins, and Terraform or Helm.
- Self-starter, technology and security hobbyist and enthusiast. Lifelong learner with endless curiosity.

Bonus points if you:
- Have experience building serverless functions in cloud environments.
- Have knowledge of Cloud Workload Protection.
- Have experience using SAST and DAST tools.
- Demonstrate engagement in security conferences, training, learning and associations (highly desired and fully supported).
- Are able to think like a hacker.

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
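Among the "common vulnerabilities" this role covers, SQL injection has the most compact demonstration. A minimal sketch of the vulnerable pattern and its standard parameterized-query fix, using an in-memory SQLite table with made-up data:

```python
import sqlite3

# Toy table; names and values are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

malicious = "x' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the WHERE clause,
# so the query matches every row even though no user is named this.
unsafe_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe: a placeholder keeps the input as data, never as SQL, so nothing matches.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()
```

SAST tools of the kind mentioned above flag the first pattern automatically; the fix is the same in any driver that supports bound parameters.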

Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Role
Grade Level (for internal use): 09

The Team
The current team is composed of highly skilled engineers with a solid development background who build and manage tier-0 platforms in AWS cloud environments. In this role, you will play a pivotal part in shaping the platform architecture and engineering. Additional tasks include exploring innovative new tools that benefit the organization's needs, developing services and tools around the platform, establishing standards, creating reference implementations, and providing support for application integrations as needed.

The Impact
This role is instrumental in constructing and maintaining dependable production systems within cloud environments. The team bears the crucial responsibility for ensuring high availability, minimizing latency, optimizing performance, enhancing efficiency, overseeing change management, implementing robust monitoring practices, responding to emergencies, and strategically planning for capacity. The impact of this team is pivotal for the organization, given its extensive application portfolio, necessitating a steadfast commitment to achieving and maintaining 99.9% uptime, thus ensuring the reliability and stability of the firm's digital infrastructure.

What's In It For You
S&P Global is an employee-friendly company with various benefits and a primary focus on skill development. The technology division has a wide variety of yearly goals that help employees train and certify in niche technologies such as: Generative AI, transformation of applications to CaaS, CI/CD/CD gold transformation, cloud modernization, and leadership-skills and business-knowledge training.

Essential Duties & Responsibilities
- As part of a global team of engineers, deliver highly reliable technology products.
- Strong focus on developing robust solutions meeting high security standards.
- Build and maintain new applications/platforms for growing business needs.
- Design and build future-state architecture to support new use cases.
- Ensure scalable and reusable architecture as well as code quality.
- Integrate new use cases and work with global teams.
- Work with/support users to understand issues, develop root cause analysis, and work with the product team on the development of enhancements/fixes.
- Become an integral part of a high-performing global network of engineers/developers working from Colorado, New York, and India to help ensure 24x7 reliability for critical business applications.
- As part of a global team of engineers/developers, deliver continuous high reliability to our technology services.
- Strong focus on developing permanent fixes to issues and heavy automation of manual tasks.
- Provide technical guidance to junior-level resources.
- Work on analyzing/researching alternative solutions and developing/implementing recommendations accordingly.

Qualifications

Required:
- Bachelor's or MS degree in Computer Science, Engineering or a related subject
- Good written and oral communication skills
- Must have 3+ years of working experience in Java with Spring technology
- Must have API development experience
- Work experience with asynchronous/synchronous messaging using MQ, etc.
- Ability to use CI/CD flows and distribution pipelines to deploy applications
- Working experience with DevOps tools such as Git, Azure DevOps, Jenkins, Maven
- Solid understanding of cloud technologies and managing infrastructure
- Experience in developing, deploying & debugging cloud applications
- Strong knowledge of functional programming, Linux, etc.

Nice To Have
- Experience in building single-page applications with Angular or ReactJS in conjunction with Python scripting.
- Working experience with API Gateway, Apache and Tomcat server, Helm, Ansible, Terraform, CI/CD, Azure DevOps, Jenkins, Git, Splunk, Grafana, Prometheus, Jaeger (or other OTEL products), Flux, LDAP, OKTA, Confluent Platform, ActiveMQ, AWS, Kubernetes

Location: Hyderabad, India
Hybrid model: twice-a-week work from office is mandatory.
Shift time: 12 pm to 9 pm IST.

About S&P Global Ratings
At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

S&P Global has a Securities Disclosure and Trading Policy ("the Policy") that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy's requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy.

Recruitment Fraud Alert
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)

Job ID: 311026
Posted On: 2025-07-30
Location: Hyderabad, Telangana, India
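The asynchronous messaging requirement in this role (MQ-style decoupling of producer and consumer) can be illustrated with an in-process sketch. This is a toy stand-in: a real deployment would use a broker such as IBM MQ, ActiveMQ or Kafka rather than Python's queue module, and the message names here are invented:

```python
import queue
import threading

msgs = queue.Queue()   # stands in for a broker-managed queue
processed = []

def consumer():
    """Drain messages until a None sentinel arrives."""
    while True:
        msg = msgs.get()
        if msg is None:
            break
        processed.append(msg.upper())  # placeholder "business logic"
        msgs.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# Producer: enqueues and moves on without waiting for processing -- the
# asynchronous property the posting refers to.
for m in ["order.created", "order.paid"]:
    msgs.put(m)
msgs.put(None)       # sentinel: shut the consumer down cleanly
worker.join()        # after join, all messages are processed in FIFO order
```

The same shape, producer, durable queue, consumer, acknowledgement, carries over to JMS/Spring listeners; only the transport changes.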

Posted 6 days ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role Description
Role Proficiency: Resolve enterprise trouble tickets within agreed SLA, raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.

Outcomes
1) Update SOPs with revised troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis to drive incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions

Measures Of Outcomes
1) SLA adherence
2) Time-bound resolution of elevated tickets (OLA)
3) Manage ticket backlog timelines (OLA)
4) Adherence to defined process: number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incident and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements

Outputs Expected
Resolution: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA. Execute change control tickets as documented in the implementation plan.
Troubleshooting: Troubleshoot based on information from previous tickets or by consulting with seniors. Participate in online knowledge forums for reference. Convert new steps into KB articles. Perform logical/analytical troubleshooting.
Escalation/Elevation: Escalate within the organization or to customer peers in case of resolution delay. Understand the OLA between delivery layers (L1, L2, L3, etc.) and adhere to it. Elevate to the next level; work on tickets elevated from L1.
Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlog/last activity as per the defined process. Resolve incidents and SRs within agreed timelines. Execute change tickets for infrastructure.
Installation: Install and configure tools, software, and patches.
Runbook/KB: Update the KB with new findings. Document and record troubleshooting steps as knowledge base articles.
Collaboration: Collaborate with different delivery towers for ticket resolution (within SLA); resolve L1 tickets with help from the respective tower. Collaborate with other team members for timely resolution of tickets. Actively participate in team/organization-wide initiatives. Coordinate with UST ISMS teams to resolve connectivity-related issues.
Stakeholder Management: Lead customer and vendor calls. Organize meetings with different stakeholders. Take ownership of the function's internal communications and related change management.
Strategic: Define the strategy for data management, policy management, and data retention management. Support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
Process Adherence: Maintain a thorough understanding of organization- and customer-defined processes. Suggest process improvements and CSI ideas. Adhere to the organization's policies and business conduct.
Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate issues in service delivery within the function or across functions. Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
Process Implementation: Coordinate and monitor IT process implementation within the function.
Compliance: Support information governance activities and audit preparations within the function. Act as the function SPOC for IT audits in local sites (including preparation, interfacing with the local organization, and mitigation of findings) and work closely with ISRM (Information Security Risk Management). Coordinate overall objective setting, preparation, and facilitation to achieve consistent objective setting across the function.
Coordination: Support CSI across all services in CIS and beyond.
Training: Complete all mandatory organization and customer training requirements on time. Provide on-floor training and one-to-one mentorship for new joiners. Complete certification for the respective career path.
Performance Management: Update FAST goals in NorthStar; track, report, and seek continuous feedback from peers and manager. Set goals for team members and mentees and provide feedback. Assist new team members in understanding the customer environment.

Skill Examples
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps
3) Ability to work on and resolve elevated server tickets
4) Networking:
a. Troubleshooting skills in static and dynamic routing protocols
b. Capable of running NetFlow analyzers across different product lines
5) Server:
a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, and patch management
b. Excellent troubleshooting skills in technologies such as AD replication and DNS issues
c. Skills in managing high-availability solutions such as failover clustering and VMware clustering
6) Storage and Backup:
a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management
b. Skilled in core fabric technology, storage design, and implementation; hands-on experience with backup and storage command-line interfaces
c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and decommissioning, and replication setup and management
d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies
e. Review technical and architecture diagrams; modify SOPs and documentation based on business requirements
f. Ability to perform ITSM functions for the storage and backup team and review the quality of the ITSM process followed by the team
7) Cloud:
a. Skilled in at least one cloud technology: AWS, Azure, or GCP
8) Tools:
a. Skilled in administration and configuration of monitoring tools such as CA UIM, SCOM, SolarWinds, Nagios, and ServiceNow
b. Skilled in SQL scripting
c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements
9) Monitoring:
a. Skills in monitoring infrastructure and application components
10) Database:
a. Data modeling and database design; database schema creation and management
b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained
c. Backup and recovery
d. Web-specific technology expertise for e-Biz, Cloud, etc. (examples include XML, CGI, Java, Ruby, firewalls, SSL, and so on)
e. Migrating database instances to new hardware and new software versions, from on-premises to cloud-based databases and vice versa
11) Quality Analysis:
a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations

Knowledge Examples
1) Good understanding of customer infrastructure and related CIs
2) ITIL Foundation certification
3) Thorough hardware knowledge
4) Basic understanding of capacity planning
5) Basic understanding of storage and backup
6) Networking:
a. Hands-on experience with routers, switches, and firewalls
b. Minimum knowledge of and hands-on experience with BGP
c. Good understanding of load balancers and WAN optimizers
d. Advanced backup and restore knowledge in backup tools
7) Server:
a. Basic-to-intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks
b. Knowledge of AD Group Policy management, Group Policy tools, and troubleshooting GPOs
c. Basic AD object creation, DNS concepts, DHCP, and DFS
d. Knowledge of tools such as SCCM and SCOM administration
8) Storage and Backup:
a. Subject matter expert in any storage and backup technology
9) Tools:
a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems
10) Monitoring:
a. Strong knowledge of ITIL processes and functions
11) Database:
a. Knowledge of general database management
b. Knowledge of OS, system, and networking skills

Additional Comments
Role: Cloud Engineer

Primary Responsibilities
Engineer and support a portfolio of tools including:
o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform
o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot)
o Ansible Automation Platform, Env0, Docker Desktop
o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport
Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell
Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas
Design and implement automation for self-service adoption, access provisioning, and compliance monitoring
Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows
Participate in Agile sprints, sprint planning, and cross-team technical initiatives
Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage)

Key Projects You May Lead or Support
GitHub secrets scanning and remediation with integration to HashiCorp Vault
Lifecycle management of developer access across tools like GitHub and Teleport
Upgrades to container orchestration environments and automation platforms (EKS, AKS)

Technical Skills and Experience
Proficiency with Terraform (IaC) and Ansible
Strong scripting experience in Python, PowerShell, or Bash
Experience operating in cloud environments (AWS, Azure, or GCP)
Familiarity with secure development practices and DevSecOps tooling
Exposure to or experience with:
o CI/CD automation (GitHub Actions)
o Monitoring and incident management platforms (Datadog, PagerDuty)
o Identity providers (Azure AD, Okta)
o Containers and orchestration (Docker, Kubernetes)
o Secrets management and vaulting platforms

Soft Skills and Attributes
Strong cross-functional communication skills with technical and non-technical stakeholders
Ability to work independently while knowing when to escalate or align with other engineers or teams
Comfort managing complexity and ambiguity in a fast-paced environment
Ability to balance short-term support needs with longer-term infrastructure automation and optimization
Proactive, service-oriented mindset focused on enabling secure and scalable development
Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability

Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
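The GitHub secrets-scanning project above can be illustrated with a minimal sketch. This is not GitHub Advanced Security's implementation: the detector patterns and the sample string below are invented for the example, and a real workflow would feed findings into Vault-backed rotation.

```python
import re

# Hypothetical, simplified detectors; real secret scanning (e.g. GitHub
# Advanced Security) ships curated per-provider patterns.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_assignment": re.compile(
        r"(?i)\b(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (detector_name, matched_text) pairs found in `text`."""
    findings = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Invented sample input containing two planted "secrets".
sample = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\npassword = "hunter2hunter2"'
```

A remediation pipeline would then revoke each finding at the provider and mint a replacement as a dynamic Vault secret, rather than merely reporting it.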

Posted 6 days ago

Apply

7.0 years

0 Lacs

Patna, Bihar, India

On-site

Roles and Responsibilities:
Ensure the reliability, performance, and scalability of our database infrastructure.
Work closely with application teams to ship solutions that integrate seamlessly with our database systems.
Analyze solutions and implement best practices for supported data stores (primarily MySQL and PostgreSQL).
Develop and enforce best practices for database security, backup, and recovery.
Work on the observability of relevant database metrics and make sure we reach our database objectives.
Provide database expertise to engineering teams (for example, through reviews of database migrations, queries, and performance optimizations).
Work with peers (DevOps, application engineers) to roll out changes to our production environment and help mitigate database-related production incidents.
Work on automation of database infrastructure and help engineering succeed by providing self-service tools.
Provide on-call support on rotation with the team.
Support and debug database production issues across services and levels of the stack.
Document every action so your learning turns into repeatable actions and then into automation.
Perform regular system monitoring, troubleshooting, and capacity planning to ensure scalability.
Create and maintain documentation on database configurations, processes, and procedures.

Mandatory Qualifications:
At least 7 years of experience running MySQL/PostgreSQL databases in large environments.
Awareness of cloud infrastructure (AWS/GCP).
Knowledge of the internals of MySQL/PostgreSQL.
Knowledge of load-balancing solutions such as ProxySQL to distribute database traffic efficiently across multiple servers.
Knowledge of tools and methods for monitoring database performance.
Strong problem-solving skills and ability to work in a fast-paced environment.
Excellent communication and collaboration skills to work effectively within cross-functional teams.
Knowledge of caching (Redis/ElastiCache).
Knowledge of scripting languages (Python).
Knowledge of infrastructure automation (Terraform/Ansible).
Familiarity with DevOps practices and CI/CD pipelines.
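The monitoring responsibility described above often reduces to simple threshold checks over replication lag. A hedged illustration: the thresholds and replica names are invented, and a real deployment would read lag from `SHOW REPLICA STATUS` (MySQL) or `pg_stat_replication` (PostgreSQL) rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class ReplicaStatus:
    name: str
    lag_seconds: float  # e.g. MySQL Seconds_Behind_Source

def lag_alerts(replicas, warn_at=30.0, page_at=120.0):
    """Classify each replica's replication lag into ok/warn/page buckets."""
    alerts = {}
    for r in replicas:
        if r.lag_seconds >= page_at:
            alerts[r.name] = "page"   # wake the on-call engineer
        elif r.lag_seconds >= warn_at:
            alerts[r.name] = "warn"   # ticket, no page
        else:
            alerts[r.name] = "ok"
    return alerts

# Invented fleet snapshot for the example.
fleet = [ReplicaStatus("db-ro-1", 2.0),
         ReplicaStatus("db-ro-2", 45.0),
         ReplicaStatus("db-ro-3", 300.0)]
```

In practice this logic lives inside the monitoring stack (e.g. a Datadog or Prometheus alert rule); the sketch only shows the classification step.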

Posted 6 days ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Req ID: 332013 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an AWS DevOps Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities
Development & Build: Build and maintain a robust, scalable real-time data streaming platform leveraging AWS and Confluent Cloud infrastructure.
AWS Services: Strong knowledge of AWS services, particularly those relevant to stream processing and serverless components like Lambda functions.
Performance Monitoring: Continuously monitor and troubleshoot streaming platform performance issues to ensure optimal functionality.
Collaboration: Work closely with cross-functional teams to onboard various data products onto the streaming platform and support existing implementations.
Version Control: Manage code using Git, ensuring best practices in version control are followed.
Infrastructure as Code (IaC): Apply expertise in Terraform for efficient infrastructure management.
CI/CD Practices: Implement robust CI/CD pipelines using GitHub Actions to automate deployment workflows. Monitor expiration of service principal secrets or certificates, use the Azure DevOps REST API to automate renewal, and implement alerts and documentation for debugging failed connections.

Mandatory Skillsets
The candidate must have:
Strong proficiency in AWS services, including IAM roles, RBAC access control, S3, containerized Lambda functions, VPC, security groups, RDS, MemoryDB, NACLs, CloudWatch, DNS, Network Load Balancer, Directory Services and identity federation, AWS tagging configuration, certificate management, etc.
Hands-on experience in Kubernetes (EKS), with expertise in imperative and declarative approaches for managing resources/services like Pods, Deployments, Secrets, ConfigMaps, DaemonSets, Services, IRSA, Helm charts, and deployment tools like ArgoCD.
Expertise in Datadog, including integration, monitoring key metrics and logs, and creating meaningful dashboards and alerts.
Strong understanding of Docker, including containerization and image creation.
Excellent programming skills in Python and Go, capable of writing efficient scripts.
Familiarity with Git concepts for version control.
Deep knowledge of Infrastructure as Code principles, particularly Terraform.
Experience with CI/CD tools, specifically GitHub Actions.
Understanding of security best practices, including knowledge of Snyk, SonarCloud, and CodeScene.

Nice-to-Have Skillsets
Prior experience with streaming platforms, particularly Apache Kafka (including producer and consumer applications).
Knowledge of unit testing around Kafka topics, consumers, and producers.
Experience with Splunk integration for logging and monitoring.
Familiarity with Software Development Life Cycle (SDLC) principles.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .
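The service-principal expiry monitoring mentioned in this posting can be sketched as follows. A hedged illustration: the credential names and end dates are invented, and in practice expiry data would be fetched via the Microsoft Graph or Azure DevOps REST APIs before alerting.

```python
from datetime import date, timedelta

def expiring_credentials(creds, today, warn_within_days=30):
    """Return credential names whose end date falls within the warning
    window (including already-expired ones), soonest first."""
    horizon = today + timedelta(days=warn_within_days)
    return sorted(
        (name for name, end in creds.items() if end <= horizon),
        key=lambda name: creds[name],
    )

# Hypothetical service-principal secrets and their end dates.
creds = {
    "sp-build-agent": date(2025, 8, 10),
    "sp-deploy": date(2026, 1, 15),
    "sp-monitoring": date(2025, 7, 1),  # already expired
}
```

A scheduled pipeline would run this check daily and open an alert or trigger automated renewal for each name returned.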

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Nice to meet you! We’re a leader in data and AI. Through our software and services, we inspire customers around the world to transform data into intelligence - and questions into answers. We’re also a debt-free multi-billion-dollar organization on our path to IPO-readiness. If you're looking for a dynamic, fulfilling career coupled with flexibility and world-class employee experience, you'll find it here.

About The Job
Role: DevOps Engineer

Responsibilities
Enable stream-aligned DevOps teams to deliver rapid value by delivering platforms they can build on
Enable Continuous Integration and Delivery by providing a standardized pipeline experience for teams
Design novel solutions using both public and private cloud platforms to solve business needs
Work with Information Security and others to understand what needs to be handled by the platforms we support
Construct Infrastructure as Code routines that ensure cloud services have the configuration needed for ongoing support
Support legacy environments while we work with teams migrating to DevOps practices and cloud adoption

What We’re Looking For
You’re curious, passionate, authentic, and accountable. These are our values and they influence everything we do.
You have a Bachelor's degree in computer science, information technology, or a similar quantitative field.
You have a passion for automation and empowering others through self-help.
You have experience writing scripts and/or APIs in a modern language (Python, Go, PowerShell, etc.).
You have experience delivering solutions in one or more public clouds, e.g., Microsoft Azure, Amazon AWS, etc.
You have familiarity with Continuous Integration and Continuous Delivery (CI/CD).
You have familiarity with fundamental cloud, security, networking, and distributed computing environment concepts.
You have familiarity administering one of the following platforms: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, or Tomcat.

The nice-to-haves
Experience providing services as a platform that other teams can build from
Experience with monitoring and writing self-healing routines to ensure platform uptime
Experience with Python or PowerShell: writing scripts, applications, or APIs
Experience with Ansible, Terraform, or other Infrastructure as Code/configuration management applications
Experience with developing/managing applications in Microsoft’s Azure cloud
Experience with git, GitHub, and GitHub Actions for source control and Continuous Integration/Delivery
Experience with supporting applications in an enterprise environment on both Linux and Windows
Experience working in an Agile sprint-based environment
Knowledge of the use and/or administration of Kubernetes

Other Knowledge, Skills, And Abilities
Strong oral and written communication skills
Strong prioritization and analytical skills
Ability to work independently and as part of a global team
Ability to manage time across multiple projects
Ability to communicate designs and decisions to peers and internal customers
Ability to produce clear and concise system and process documentation
Ability and willingness to participate in an after-hours on-call rotation

Required Skills: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, Tomcat, Python, PowerShell, CI/CD
Cloud: Azure
Experience: 3 to 7 years

Diverse and Inclusive
At SAS, it’s not about fitting into our culture – it’s about adding to it. We believe our people make the difference.
Our diverse workforce brings together unique talents and inspires teams to create amazing software that reflects the diversity of our users and customers. Our commitment to diversity is a priority to our leadership, all the way up to the top; and it’s essential to who we are. To put it plainly: you are welcome here.

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: DevOps Engineer

Responsibilities
Enable stream-aligned DevOps teams to deliver rapid value by delivering platforms they can build on
Enable Continuous Integration and Delivery by providing a standardized pipeline experience for teams
Design novel solutions using both public and private cloud platforms to solve business needs
Work with Information Security and others to understand what needs to be handled by the platforms we support
Construct Infrastructure as Code routines that ensure cloud services have the configuration needed for ongoing support
Support legacy environments while we work with teams migrating to DevOps practices and cloud adoption

What We’re Looking For
You’re curious, passionate, authentic, and accountable. These are our values and they influence everything we do.
You have a Bachelor's degree in computer science, information technology, or a similar quantitative field.
You have a passion for automation and empowering others through self-help.
You have experience writing scripts and/or APIs in a modern language (Python, Go, PowerShell, etc.).
You have experience delivering solutions in one or more public clouds, e.g., Microsoft Azure, Amazon AWS, etc.
You have familiarity with Continuous Integration and Continuous Delivery (CI/CD).
You have familiarity with fundamental cloud, security, networking, and distributed computing environment concepts.
You have familiarity administering one of the following platforms: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, or Tomcat.
Experience with Python or PowerShell: writing scripts, applications, or APIs
Experience with Ansible, Terraform, or other Infrastructure as Code/configuration management applications
Experience with developing/managing applications in Microsoft’s Azure cloud
Experience with git, GitHub, and GitHub Actions for source control and Continuous Integration/Delivery

The nice-to-haves
Experience providing services as a platform that other teams can build from
Experience with monitoring and writing self-healing routines to ensure platform uptime
Experience with supporting applications in an enterprise environment on both Linux and Windows
Experience working in an Agile sprint-based environment
Knowledge of the use and/or administration of Kubernetes

Other Knowledge, Skills, And Abilities
Strong oral and written communication skills
Strong prioritization and analytical skills
Ability to work independently and as part of a global team
Ability to manage time across multiple projects
Ability to communicate designs and decisions to peers and internal customers
Ability to produce clear and concise system and process documentation
Ability and willingness to participate in an afterhours on-call rotation

Required Skills: Apache, Atlassian Bamboo, Boomi, Cloud Foundry, Harbor, RabbitMQ, Tomcat, Python, PowerShell, CI/CD, Docker, FastAPI
Cloud: Azure, AWS
Experience: 3 to 7 years

Why SAS
We love living the #SASlife and believe that happy, healthy people have a passion for life, and bring that energy to work. No matter what your specialty or where you are in the world, your unique contributions will make a difference.
Our multi-dimensional culture blends our different backgrounds, experiences, and perspectives. Here, it isn’t about fitting into our culture, it’s about adding to it - and we can’t wait to see what you’ll bring.

Posted 6 days ago

Apply

6.0 years

0 Lacs

India

On-site

Job Summary: As a Cloud Infrastructure Engineer specializing in Microsoft Azure, Active Directory (AD), and Azure Virtual Desktop (AVD), you will design, implement, manage, and support Azure infrastructure solutions for our clients. You’ll collaborate with stakeholders to deliver secure, scalable, and high-performance environments aligned with organizational and customer requirements.

Key Responsibilities:
Design, deploy, and maintain Azure infrastructure, including virtual networks, NSGs, VPNs, and compute workloads.
Implement and manage Active Directory and Azure AD, including Group Policies, domain services, and identity governance.
Deploy and support Azure Virtual Desktop (AVD) environments, optimizing user experience and security.
Troubleshoot and resolve issues across Azure resources, networking, identity, and access.
Collaborate with project teams and customers to gather requirements, create documentation, and deliver technical solutions.
Monitor performance, security, and compliance of Azure environments.
Research and recommend improvements to enhance reliability, security, and cost-efficiency.

Required Skills & Experience:
Strong hands-on experience with Microsoft Azure services (compute, networking, storage, security).
Deep understanding of Active Directory (on-premises and Azure AD), including AD DS, ADFS, and hybrid identity.
Practical experience with Azure Virtual Desktop (AVD) setup and optimization.
Knowledge of Infrastructure as Code (IaC): Terraform, ARM, or Bicep.
Familiarity with DevOps practices: CI/CD, version control systems (Git), and automation scripting (PowerShell, Bash, Python).
Experience with monitoring, logging, and security tools in Azure.
Understanding of cloud security principles, identity and access management, and compliance frameworks.
Ability to collaborate effectively with technical and non-technical teams.
Good written and verbal communication skills.

Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
4–6 years of relevant experience in Azure infrastructure and identity management.
Certifications (preferred but not mandatory):
Microsoft Certified: Azure Administrator Associate / Azure Solutions Architect
Microsoft Certified: Identity and Access Administrator
Microsoft Certified: Azure Virtual Desktop Specialty

Good to have:
Automate infrastructure deployment using Infrastructure as Code (IaC) tools like Terraform, Bicep, or ARM templates.
Contribute to CI/CD pipelines for infrastructure and application deployments.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Ascentt is building cutting-edge data analytics & AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We’re hiring passionate builders to shape the future of industrial intelligence.

Azure Cloud Engineer
Experience: 5+ years of experience managing cloud infrastructure, preferably in Azure
Location: Indore/Pune

Job Description
We are seeking an experienced and proactive Azure Cloud Engineer to join our cloud infrastructure team. The ideal candidate will be responsible for designing, implementing, managing, and optimizing Azure cloud solutions, ensuring high availability, security, and performance of our cloud-based systems. This role will involve close collaboration with DevOps, security, application development, and operations teams.

Key Duties And Tasks
Design, deploy, and manage Azure infrastructure using best practices (IaaS, PaaS, containers, serverless).
Implement and maintain Azure services such as VMs, VNets, Azure AD, Storage, AKS, App Services, Functions, Event Grid, Logic Apps, etc.
Automate infrastructure provisioning using ARM templates, Bicep, or Terraform.
Develop and manage CI/CD pipelines using Azure DevOps, GitHub Actions, or other DevOps tools.
Ensure cloud security posture by implementing RBAC, NSGs, firewalls, policies, and identity protection.
Monitor system performance, health, and costs using Azure Monitor, Log Analytics, and Cost Management.
Troubleshoot and resolve issues related to cloud infrastructure and deployments.
Stay current with Azure features and best practices and propose improvements or migrations as needed.

Qualifications And Skills Required
5+ years of experience managing cloud infrastructure, preferably in Azure.
Strong hands-on experience with:
Azure Compute (VMs, Scale Sets, Functions)
Azure Networking (VNet, Load Balancers, VPN Gateway, ExpressRoute)
Azure Identity (Azure AD, RBAC, Managed Identities)
Azure Storage and databases
Azure Kubernetes Service (AKS) or containers (Docker)
Experience with infrastructure as code (Terraform, Bicep, or ARM templates).
Knowledge of CI/CD and DevOps principles.
Scripting in PowerShell, Bash, or Python.
Familiarity with monitoring/logging tools like Azure Monitor, Application Insights, or Prometheus/Grafana.
Experience with Git-based version control systems.

Technical Skills
Proven experience in security architecture and designing, building, and deploying secure cloud workloads.
Expertise in IaC, Terraform, and scripting languages (Git, PowerShell, Terraform, Jenkins, Python, Bash).
Experience in a DevOps environment with knowledge of Continuous Integration, containers, and DAST/SAST tools.
Strong knowledge of security technologies, identity and access management, and containerized security models.
Experience with monitoring and alerting solutions for critical infrastructure.

Good to have: Experience with distributed systems, Linux, CDNs, HTTP, TCP/IP basics, database and SQL skills, REST APIs, microservices-based development, and automation experience with Kubernetes and Docker. Experience with hybrid cloud setups or migrations from on-premises to Azure. Familiarity with governance tools like Azure Policy, Blueprints, and Cost Management. Exposure to Microsoft Defender for Cloud or Sentinel for security monitoring. Experience with Databricks, Glue, Athena, EMR, Data Lake, and related solutions and services.

Certifications/Licenses
Azure certifications such as AZ-104 (Azure Administrator), AZ-305 (Solutions Architect), or AZ-400 (DevOps Engineer).

Education
Bachelor's degree in Computer Science, Information Technology, or a related field.
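The governance work mentioned above (Azure Policy, tagging, Cost Management) often starts with a tag-compliance report. A hedged sketch: the required tag set and resource inventory below are invented, and a real implementation would query Azure Resource Graph or enforce tags via Azure Policy rather than a script.

```python
# Invented tag policy for the example; real policies are org-specific.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resources):
    """Map each non-compliant resource name to the set of required
    tags it lacks; fully tagged resources are omitted."""
    report = {}
    for name, tags in resources.items():
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            report[name] = missing
    return report

# Hypothetical inventory snapshot (resource name -> tag dict).
resources = {
    "vm-app-01": {"owner": "platform", "cost-center": "cc-42",
                  "environment": "prod"},
    "stg-logs": {"owner": "platform"},
}
```

The report can feed a dashboard or a remediation task that applies default tags and notifies the owning team.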

Posted 6 days ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Ascentt is building cutting-edge data analytics & AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We’re hiring passionate builders to shape the future of industrial intelligence.

Cloud Engineer
Experience: 5 years
Location: Indore/Pune

Job Description (Summary Of Responsibilities)
Seeking a Cloud Engineer to design, deploy, and manage cloud infrastructure while supporting development teams with scalable solutions. Primary experience on AWS is needed; additional cloud experience on GCP/Azure is preferred.

Key Duties
Architectural Design: Lead the design and implementation process for AWS architectures, ensuring alignment with business goals and compliance with security standards. Collaborate with cross-functional teams to provide architectural guidance.
Security Architecture: Take a security-first approach to design and implement robust security architectures for AWS solutions. Mitigate security risks and ensure the confidentiality, integrity, and availability of confidential data.
Collaboration: Work closely with cross-functional teams, contributing to the security, development, and optimization of cloud platforms. Collaborate on strategic initiatives, ensuring alignment with cloud strategy and best practices.
Infrastructure as Code (IaC): Design, develop, and maintain scalable, resilient cloud-based infrastructure using an Infrastructure as Code approach.
Terraform/CloudFormation Expertise: Enhance and extend Terraform/CloudFormation configurations for efficient management of AWS resources.
Scripting and Automation: Use expertise in Git, PowerShell, Terraform, Jenkins, Python, and Bash scripting to automate processes and enhance efficiency.
DevOps Environment: Work within a DevOps environment, leveraging knowledge of Continuous Integration, containers, and DAST/SAST tools.
Security Technologies: Apply broad knowledge of the security technology landscape, emphasizing identity and access management, application and data security, and containerized security models.
Monitoring and Alerting Solutions: Implement and optimize monitoring and alerting solutions for critical infrastructure.
Contribution to Platform Architecture: Actively contribute to platform architecture, design discussions, and security initiatives.

Qualifications And Skills Required
5 years of multi-cloud experience with core services
Kubernetes/Docker and networking knowledge and experience
Proficiency in Terraform and scripting (Python/Bash)
Experience with CI/CD tools and cloud migrations
Experience with GitHub

Education
Bachelor's degree in Computer Science, Information Technology, or a related field.

Certifications/Licenses
AWS Solutions Architect

Technical Skills
Proven experience in security architecture and a minimum of 5 years designing, building, and deploying secure cloud workloads.
Expertise in IaC, Terraform/CloudFormation, and scripting languages (Git, PowerShell, Terraform, Jenkins, Python, Bash).
Experience in a DevOps environment with knowledge of Continuous Integration, containers, and DAST/SAST tools.
Strong knowledge of security technologies, identity and access management, and containerized security models.
Experience with monitoring and alerting solutions for critical infrastructure.

Good to have: Experience with distributed systems, Linux, CDNs, HTTP, TCP/IP basics, database and SQL skills, REST APIs, microservices-based development, and automation experience with Kubernetes and Docker. Experience with Databricks, Glue, Athena, EMR, Data Lake, and related solutions and services.
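The security-first architecture review described above could start with a least-privilege check over IAM policy documents. A hedged sketch: the policy below is invented for illustration, and a real review would lean on tools such as AWS IAM Access Analyzer rather than hand-rolled checks.

```python
def risky_statements(policy):
    """Flag Allow statements that grant wildcard actions or a wildcard
    resource. `policy` follows the standard IAM policy-document shape."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            flagged.append(stmt.get("Sid", "<no-sid>"))
    return flagged

# Invented example policy with one scoped and one over-broad statement.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ReadLogs", "Effect": "Allow",
         "Action": "logs:GetLogEvents", "Resource": "arn:aws:logs:*:*:*"},
        {"Sid": "TooBroad", "Effect": "Allow",
         "Action": "s3:*", "Resource": "*"},
    ],
}
```

A pipeline step could run this over every Terraform-managed policy and fail the plan when flagged statements appear.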

Posted 6 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With
You are someone who thrives in a high-performance environment, bringing a growth mindset and entrepreneurial spirit to tackle meaningful challenges that have a real impact. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities to help you quickly broaden your expertise, grow into a well-rounded professional, and contribute to work that truly makes a difference. When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives; they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- Exceptional benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.

Your Impact
You will have deep experience practicing, role-modeling, and coaching teams in key SRE competencies.
- Culture and organization: You will champion CI/CD practices, the concept of error budgets, and blameless post-mortems. You will continuously help remove team boundaries (dev, ops, and others).
Your advanced knowledge within the SRE chapter and practice will contribute back to the community within McKinsey and beyond. You will be a full-stack engineer with DR/BCP experience, proficient in cloud-native models and in reliability engineering advisory, with strong knowledge of loosely coupled, API-based component architecture models. You will be proficient in SCM and CI/CD tooling and practices for container workloads and a variety of related developer workflows and principles, and will have mastered multiple programming and IaC languages. You will be a competent enabler of automated zero-downtime deployments. You will bring expertise in TDD principles and practices as well as key test-automation tools and frameworks. You are well versed in chaos-engineering practices and in wheel-of-misfortune exercises. You will be seasoned in outcome-centric monitoring and measurement (cloud-native log management and monitoring tools, SLOs, SLIs, error budgets, toil budgets, etc.), and you will bring expertise in stakeholder-specific reporting as well. You will work with our Secure Foundations - MCS team, which is part of McKinsey's Tech Ecosystem organization, developing new products/services and integrating them into our client work. Our company is moving fast from the traditional IT world to a digital era embracing Agile principles. We are looking for highly skilled developers with an SRE mindset to help us with this transformation. You will work in small teams (including product managers, developers, and operations people) in a highly collaborative way, use the latest technologies, and enjoy seeing the direct impact of your work. You will combine Agile ways of working with expertise in cloud, big data, and mobile to create and maintain custom solutions, in a way consistent with SRE principles, that will help clients increase productivity and make timely decisions.
This includes, but is not limited to: development, implementation, and operation of IT systems and processes supporting SaaS applications and platforms; automation of provisioning, quality controls, security auditing and maintenance; and continuous measurement and improvement of the efficiency of operational activities and resources.

Your Qualifications and Skills
- 5+ years of experience with software engineering best practices
- Proficiency in one or more programming languages, such as Python, JavaScript, Golang, or Ruby
- Hands-on experience implementing infrastructure as code using Terraform or similar automation tools like Ansible and CloudFormation
- Experience designing and building CI/CD pipelines using tools like GitHub Actions, ArgoCD, CircleCI, or Jenkins, along with package management tools like JFrog or Nexus
- Experience with public cloud environments, specifically AWS and either Azure or Google Cloud Platform (GCP)
- Expertise with container technologies and orchestration tools, including Docker, Kubernetes, Helm, and service mesh solutions such as Linkerd or Istio
- Experience with infrastructure and reliability testing frameworks such as Test Kitchen, awspec, and InSpec
- Experience managing front-end and back-end workloads such as React, TypeScript, Python, Node.js, and Nginx, and API management tools like Apigee and AWS API Gateway
- Proficiency with databases such as Neo4j, Redis, PostgreSQL, and MongoDB
- Familiarity with monitoring and logging tools such as Dynatrace, Splunk, CloudWatch, and similar platforms like ELK, Prometheus, or Grafana
- Expertise in networking concepts, including prior experience managing CDN+WAF configurations in Akamai, Cloudflare, or AWS CloudFront, and experience with VPCs, load balancers, and SSH tunnels
- Experience with Okta, Azure AD, Ping Identity, and other OIDC/OAuth2 providers, and implementing and managing RBAC for least-privilege access
- Proficiency with HashiCorp Vault for managing secrets and implementing token rotation
- Experience with SOC 2 audits, vulnerability management, and SSL certificate management
- Strong skills in developing technical documentation such as architecture diagrams, runbooks, and technical documents, with experience in complex platform migrations and managing multiple workstreams
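The listing above leans heavily on SLOs, SLIs, and error budgets. As a minimal illustration of the arithmetic behind an error budget (all numbers here are examples, not figures from the posting), an SLO directly fixes the downtime allowance over a measurement window:

```python
def error_budget_minutes(slo, window_minutes):
    """Minutes of allowed downtime in a window for a given SLO (e.g. 0.999)."""
    return (1.0 - slo) * window_minutes

def budget_remaining(slo, window_minutes, downtime_minutes):
    """Fraction of the error budget still unspent (negative means it is blown)."""
    budget = error_budget_minutes(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over a 30-day window allows roughly 43.2 minutes of downtime.
MONTH = 30 * 24 * 60
print(round(error_budget_minutes(0.999, MONTH), 1))    # → 43.2
print(round(budget_remaining(0.999, MONTH, 10.0), 3))  # → 0.769
```

In an SRE workflow the remaining fraction typically gates release velocity: a nearly exhausted budget argues for pausing risky deploys.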

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Ascentt is building cutting-edge data analytics & AI/ML solutions for global automotive and manufacturing leaders. We turn enterprise data into real-time decisions using advanced machine learning and GenAI. Our team solves hard engineering problems at scale, with real-world industry impact. We’re hiring passionate builders to shape the future of industrial intelligence.

Cloud Engineer
Experience: 5 years
Location: Indore/Pune

Job Description (Summary of Responsibilities)
Seeking a Cloud Engineer to design, deploy, and manage cloud infrastructure while supporting development teams with scalable solutions. Primary experience on AWS is required; additional cloud experience on GCP/Azure is preferred.

Key Duties
- Architectural design: Lead the design and implementation of AWS architectures, ensuring alignment with business goals and compliance with security standards. Collaborate with cross-functional teams to provide architectural guidance.
- Security architecture: Take a security-first approach to design and implement robust security architectures for AWS solutions. Mitigate security risks and ensure the confidentiality, integrity, and availability of sensitive data.
- Collaboration: Work closely with cross-functional teams, contributing to the security, development, and optimization of cloud platforms. Collaborate on strategic initiatives, ensuring alignment with cloud strategy and best practices.
- Infrastructure as Code (IaC): Design, develop, and maintain scalable, resilient cloud-based infrastructure using an IaC approach.
- Terraform/CloudFormation expertise: Enhance and extend Terraform/CloudFormation configurations for efficient management of AWS resources.
- Scripting and automation: Use expertise in Git, PowerShell, Terraform, Jenkins, Python, and Bash scripting to automate processes and improve efficiency.
- DevOps environment: Work within a DevOps environment, leveraging knowledge of continuous integration, containers, and DAST/SAST tools.
- Security technologies: Apply broad knowledge of the security technology landscape, emphasizing identity and access management, application and data security, and containerized security models.
- Monitoring and alerting: Implement and optimize monitoring and alerting solutions for critical infrastructure.
- Platform architecture: Actively contribute to platform architecture, design discussions, and security initiatives.

Qualifications and Skills Required
- 5 years of multi-cloud experience with core services
- Kubernetes/Docker and networking knowledge and experience
- Proficiency in Terraform and scripting (Python/Bash)
- Experience with CI/CD tools and cloud migrations
- Experience with GitHub

Education
Bachelor’s degree in Computer Science, Information Technology, or a related field.

Certifications/Licenses
AWS Solutions Architect

Technical Skills
- Proven experience in security architecture and a minimum of 5 years designing, building, and deploying secure cloud workloads
- Expertise in IaC (Terraform/CloudFormation) and tooling such as Git, PowerShell, Jenkins, Python, and Bash
- Experience in a DevOps environment with knowledge of continuous integration, containers, and DAST/SAST tools
- Strong knowledge of security technologies, identity and access management, and containerized security models
- Experience with monitoring and alerting solutions for critical infrastructure

Good to have
- Experience with distributed systems, Linux, CDNs, HTTP, TCP/IP basics, database and SQL skills, REST APIs, microservices-based development, and automation experience with Kubernetes and Docker
- Experience with Databricks, Glue, Athena, EMR, Data Lake, and related solutions and services
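As a small aside on the IaC duties described above: Terraform also accepts a JSON configuration syntax (files named `*.tf.json`), so configurations can be generated programmatically. A hedged sketch, where the resource name, CIDR block, and tags are invented for illustration:

```python
import json

# Illustrative Terraform JSON-syntax configuration: a single tagged VPC.
# Terraform treats *.tf.json files as equivalent to HCL configuration.
config = {
    "resource": {
        "aws_vpc": {
            "example": {  # hypothetical resource name
                "cidr_block": "10.0.0.0/16",
                "tags": {"Team": "cloud-engineering", "ManagedBy": "terraform"},
            }
        }
    }
}

# Write the generated configuration where `terraform plan` would pick it up.
with open("main.tf.json", "w") as fh:
    json.dump(config, fh, indent=2)
```

Generating JSON like this is mostly useful when configuration is derived from another system of record; hand-written HCL remains the common case.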

Posted 6 days ago

Apply

6.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: DevOps Engineer (Python Scripting)
Experience: 6 to 12 years
Work Locations: Ramanujan IT City, Tharamani, Chennai & MindSpace Hi-Tech City, Madhapur, Hyderabad
Work Model: Hybrid
Time Zone: 3 PM to 12 AM IST (cab provided both ways)
Website: https://www.thryvedigital.com/
Parent Organization: https://www.highmarkhealth.org/hmk/index.shtml

Job Description:
- Developing appropriate DevOps channels throughout the organization
- Evaluating, implementing, and streamlining DevOps practices
- Establishing a continuous build environment to accelerate software development and deployment processes
- Engineering general and effective processes
- Helping operations and development teams solve their problems
- Supervising, examining, and handling technical operations
- Providing DevOps process and operations leadership, with the capacity to lead teams
- Previous experience working on a 24x7 cloud or SaaS operations team
- Experience with infrastructure management and monitoring
- Strong knowledge of GitLab SaaS, CI/CD using GitLab YAML, GCP, Terraform, Python, Unix, OpenShift, and Maven/Gradle-based build artifacts
- Strong knowledge of DevOps platform tooling (Chef, Puppet, and Docker)
- Working knowledge of automated service provisioning and middleware configuration
- Ability to work independently and as part of a team
- Strong analytical skills
- Exposure to CI/CD implementations for Mainframe, Java, and database applications

Interested candidates, please share your updated resume to prithiv.muralibabu@thryvedigital.com. Referrals are welcome.
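The CI/CD stack named in this listing (GitLab YAML pipelines building, testing, and deploying) reduces to ordered stages that stop at the first failure. A toy sketch of that control flow in Python, not tied to any real GitLab API, with stage names invented for illustration:

```python
def run_pipeline(stages):
    """Run (name, job) stages in order; stop at the first failing stage.

    Each job is a zero-argument callable returning True on success.
    Returns the list of stage names that completed successfully.
    """
    completed = []
    for name, job in stages:
        if not job():
            print(f"stage failed: {name}")
            break
        completed.append(name)
    return completed

result = run_pipeline([
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: False),  # simulated failing deploy
])
print(result)  # → ['build', 'test']
```

In a real `.gitlab-ci.yml`, the `stages:` keyword plays this ordering role and a failed job stops later stages by default.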

Posted 6 days ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At Broadridge, we've built a culture where the highest goal is to empower others to accomplish more. If you’re passionate about developing your career, while helping others along the way, come join the Broadridge team.

Key Responsibilities
- Analyze and provide feedback on system infrastructure, assisting with design and implementation
- Develop fully automated pipelines for building, deploying, and testing applications across multiple environments
- Automate infrastructure provisioning, deployment, and delivery in both cloud and on-premises environments
- Become familiar with the product codebase and architecture, and collaborate with hardware and software vendors for support globally
- Participate in cross-functional projects, working collaboratively with international teams to address issues, release schedules, and system administration matters
- Lead by example through professional conduct and actions

Required Experience and Skills
- Hands-on experience with system automation using Python, PowerShell, or automation frameworks such as Chef or Terraform to deliver infrastructure-as-code solutions
- Ability to containerize applications and manage artifact repositories
- Strong troubleshooting skills for deployment issues, with a focus on automation over manual intervention
- Experience with CI/CD workflows and tools (such as Jenkins or GitLab)
- Solid understanding of networking concepts and principles, including application routing
- Administration or engineering experience with Linux, Windows, or similar operating systems
- Demonstrated strong operations and service delivery skills, working effectively with distributed (onshore and offshore) teams
- Proficiency in scripting languages such as PowerShell, Groovy, or Python, or tools such as Ansible, to automate manual tasks
- Experience in dynamic Agile environments, including effective project planning and estimation to adapt to evolving requirements and ensure timely delivery
- Familiarity with cloud-based solutions, preferably AWS or GCP
- Familiarity with version control systems, such as Git
- Demonstrated strong written and presentation skills to effectively communicate technical concepts and project updates to team members, stakeholders, and senior management

Preferred Qualifications
- Relevant cloud certifications (AWS, GCP, Azure, etc.)
- Linux or Windows certifications
- Experience supporting financial systems in a global computing environment
- Proven record of success in a matrix/multicultural organization with demonstrated increasing responsibilities
- Experience tuning and troubleshooting for performance
- Experience supporting and troubleshooting trading applications, including high availability, disaster recovery, vulnerability management, incident management, patching, and change management

We are dedicated to fostering a collaborative, engaging, and inclusive environment and are committed to providing a workplace that empowers associates to be authentic and bring their best to work. We believe that associates do their best when they feel safe, understood, and valued, and we work diligently and collaboratively to ensure Broadridge is a company—and ultimately a community—that recognizes and celebrates everyone’s unique perspective.

Posted 6 days ago

Apply

4.0 years

0 Lacs

Delhi, India

On-site

About Us
Bain & Company is a global management consulting firm that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition, and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as BCN (Bain Capability Network), with its nodes across various geographies. BCN is the largest and an integral unit of Expert Client Delivery (ECD). ECD plays a critical role as it adds value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries, specific domains for corporate cases, client development, private equity diligence, or Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services, and Shared Services.

Who You Will Work With
Pyxis leverages a broad portfolio of 50+ alternative datasets to provide real-time market intelligence and customer insights through a unique business model that enables us to provide our clients with competitive intelligence unrivaled in the market today. We provide insights and data via custom one-time projects or ongoing subscriptions to data feeds and visualization tools. We also offer custom data and analytics projects to suit our clients’ needs. Pyxis can help teams answer core questions about market dynamics, products, customer behavior, and ad spending on Amazon, with a focus on providing our data and insights to clients in the way that best suits their needs. Refer to: www.pyxisbybain.com

What You’ll Do
- Setting up tools and required infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operations
- Reviewing, verifying, and validating the software code developed in the project
- Troubleshooting and fixing code bugs
- Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and to minimize waste
- Encouraging and building automated processes wherever possible
- Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
- Incident management and root cause analysis
- Selecting and deploying appropriate CI/CD tools
- Striving for continuous improvement and building a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline
- Mentoring and guiding team members
- Managing periodic reporting on progress to management

About You
- A Bachelor’s or Master’s degree in Computer Science or a related field
- 4+ years of software development experience, with 3+ years as a DevOps engineer
- High proficiency in cloud management (AWS heavily preferred), including networking, API gateways, infra deployment automation, and cloud ops
- Knowledge of DevOps/code/infra management tools: GitHub, SonarQube, Snyk, AWS X-Ray, Docker, Datadog, and containerization
- Infra automation using Terraform, environment creation and management, containerization using Docker
- Proficiency with Python
- Disaster recovery, implementation of high-availability apps/infra, business continuity planning

What Makes Us a Great Place to Work
We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity, and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years. We believe that diversity, inclusion, and collaboration are key to building extraordinary teams. We hire people with exceptional talents, abilities, and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor, and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ+ and parents.

Posted 6 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description | DevOps Engineer | Offshore/India
We are seeking a skilled DevOps Engineer with expertise in AWS services, Docker, and Groovy scripting to design, implement, and maintain scalable AWS DevOps pipelines using AWS services like ECS and EC2. The ideal candidate will have experience in automating CI/CD pipelines, managing containerized applications, and optimizing deployment processes. Strong collaboration and communication skills are required for effective teamwork across development, QA, and operations teams. Experience in implementing ECS disaster recovery strategies to ensure high availability is essential.

DevOps Engineer
Location: Offshore / Onshore
Experience: 3–6 years

Key Responsibilities:
- Containerization & orchestration: Develop and manage Docker containers and deploy applications using AWS ECS, EC2, or EKS
- CI/CD pipeline development: Build and automate CI/CD pipelines using tools like Jenkins, Groovy, and GitHub
- Disaster recovery planning: Implement and manage disaster recovery strategies for ECS workloads, ensuring business continuity and minimal downtime
- Scripting & automation: Write and maintain Groovy scripts for Jenkins pipelines and other automation tasks
- Collaboration & communication: Work closely with development, QA, and operations teams to ensure seamless deployment processes and resolve issues promptly
- Monitoring & optimization: Implement monitoring solutions using AWS CloudWatch and other tools to ensure system reliability and performance
- Cloud infrastructure management: Design, implement, and maintain scalable AWS infrastructure using services like EC2, ECS, VPC, and CloudWatch

Desired Skills & Qualifications:
- Technical skills: Proficiency in AWS services (ECS, EC2, EKS, S3, VPC, IAM, CloudWatch, Lambda, etc.); strong experience with Docker and container orchestration tools like Kubernetes; hands-on experience with CI/CD tools such as Jenkins
- Familiarity with Infrastructure as Code tools: Terraform, AWS CloudFormation
- Scripting expertise in Groovy, Python, Shell/Bash, or PowerShell
- Disaster recovery expertise: Experience implementing ECS disaster recovery strategies, including backup and restore, failover mechanisms, and cross-region replication; knowledge of AWS services like AWS Backup, Route 53, and CloudFormation for disaster recovery
- Soft skills: Excellent communication, coordination, and collaboration abilities; strong problem-solving skills and attention to detail; ability to work in a fast-paced, agile environment
- Educational background: Bachelor’s degree in Computer Science, Information Technology, or a related field
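The failover mechanism this listing asks for (Route 53-style health checks steering traffic to a healthy region) can be reduced to a simple preference-ordered selection. A hedged sketch in Python, with region names and health data invented for illustration; a real setup would delegate this to Route 53 health checks and routing policies:

```python
def pick_active_region(health, preference):
    """Return the first healthy region in preference order (primary first).

    `health` maps region name -> bool; returns None on total outage.
    """
    for region in preference:
        if health.get(region):
            return region
    return None  # nothing healthy: escalate rather than route traffic

prefs = ["us-east-1", "us-west-2"]  # primary first, then DR region
print(pick_active_region({"us-east-1": True, "us-west-2": True}, prefs))   # → us-east-1
print(pick_active_region({"us-east-1": False, "us-west-2": True}, prefs))  # → us-west-2
```

The same preference-ordered logic underlies active/passive DR: the secondary only receives traffic while the primary's health check fails.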

Posted 6 days ago

Apply

6.0 years

0 Lacs

India

On-site

We are hiring a DevOps Engineer to help us scale, secure, and streamline our infrastructure. This role demands deep operational understanding, hands-on automation, and a strong focus on reliability and performance in production environments.

Key Responsibilities
- Build and manage CI/CD pipelines for seamless deployment across environments
- Automate infrastructure provisioning and configuration using tools like Terraform or Ansible
- Monitor system performance and availability using modern observability stacks (Prometheus, Grafana, ELK, etc.)
- Manage containerized applications using Docker and orchestrate them with Kubernetes
- Ensure security best practices across infrastructure and deployments
- Support development teams in debugging, logging, and scaling applications
- Handle incident management, root cause analysis, and system recovery

Requirements
- 3–6 years of experience in DevOps, SRE, or infrastructure roles
- Proficiency with CI/CD tools (GitLab CI, Jenkins, GitHub Actions, etc.)
- Strong hands-on experience with cloud platforms (AWS, GCP, or Azure)
- Solid experience with Kubernetes, Docker, and container lifecycle management
- Infrastructure-as-Code (IaC) experience with Terraform, CloudFormation, or similar
- Good scripting skills (Bash, Python, or Go preferred)
- Knowledge of network fundamentals, system security, and Linux internals
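Monitoring stacks like the Prometheus setup this listing mentions typically fire an alert only when a threshold is breached for a sustained period (the `for:` clause in a Prometheus alerting rule). A toy sketch of that idea, with the metric samples invented for illustration:

```python
def should_alert(samples, threshold, sustained):
    """Fire only if the last `sustained` samples all exceed the threshold.

    Mirrors the "sustained breach" idea of a Prometheus `for:` clause,
    which suppresses alerts on brief spikes.
    """
    if len(samples) < sustained:
        return False  # not enough data to judge a sustained breach
    return all(s > threshold for s in samples[-sustained:])

cpu = [55.0, 62.0, 91.0, 94.0, 97.0]  # illustrative CPU% samples, oldest first
print(should_alert(cpu, 90.0, 3))  # → True
print(should_alert(cpu, 90.0, 4))  # → False (the 62.0 sample breaks the run)
```

Requiring a sustained breach is the standard way to trade a little detection latency for far fewer noisy pages.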

Posted 6 days ago

Apply

16.0 years

0 Lacs

India

On-site

Job Summary:
We are seeking an experienced Ansible Platform Architect with deep expertise in AWS infrastructure, automation, and cloud-native security practices. This role will drive the design, build, and rollout of a highly scalable and secure Ansible-based automation platform across a large multi-account AWS environment. You’ll also integrate AI/ML-driven automation for inventory and patch management and ensure the system is architected to support future cloud-agnostic deployments.

Project Objective
Building a scalable, secure, and automated Ansible platform for configuration and patch management across a large AWS environment. The environment spans 100+ AWS accounts with thousands of EC2 instances, plus Kubernetes, ECS, Fargate, and other services.

Core Technologies:
- Ansible (central tool for configuration and security automation)
- AWS (EC2, RDS, VPC, IAM, Security Groups, etc.)
- AI/ML for inventory management, packaging, and automation
- Terraform for infrastructure as code
- GitHub + GitHub Actions for CI/CD
- RBAC (Role-Based Access Control), Okta, and multi-token security for access controls
- Security tools for vulnerability management, with data ingested and remediated via Ansible

Security & Compliance:
- Remediating vulnerabilities within 30 days across all infrastructure layers
- Enforcing automated compliance and secure deployments

Cloud Strategy:
- Currently on AWS, with plans to support cloud-agnostic deployment (Azure, possibly GCP)
- Aim to build cloud-native, not cloud-provider-specific, solutions

Team Structure & Expectations:
- Team of ~15 engineers with varied maturity in automation
- Need an Ansible architect who can build, scale, and automate the platform using AI/ML to minimize manual operations
- Infrastructure spans Linux and Windows, Kubernetes, ECS, Fargate, EC2, etc.

Key Responsibilities:
- Architect, design, and implement a secure and scalable Ansible automation platform across 100+ AWS accounts
- Integrate agentic AI/ML capabilities to automate inventory management, patch packaging, and remediation workflows
- Collaborate with security teams to ingest vulnerability data and automate compliance using Ansible
- Work closely with DevOps and infrastructure teams to manage EC2, Kubernetes, ECS, RDS, IAM, VPC, etc.
- Ensure access control using RBAC, Okta, and other identity management systems
- Build CI/CD workflows using GitHub Actions and manage IaC with Terraform
- Prepare the platform for multi-cloud readiness, with potential support for Azure or GCP

Job Requirements
Must-Have:
- Deep expertise in Ansible platform architecture and scalability
- Hands-on experience with AWS infrastructure (EC2, IAM, VPC, RDS, Security Groups)
- Strong understanding of cloud security, RBAC, and identity/access management tools (e.g., Okta)
- Proficiency in Terraform and CI/CD tools (especially GitHub Actions)
- Exposure to AI/ML, preferably with agentic AI experience in automation
- Experience in enterprise-grade system design, fault tolerance, and compliance

Nice-to-Have:
- Familiarity with Azure or GCP cloud platforms
- Experience with vulnerability management tools and automated remediation
- Ability to design cloud-native, not provider-specific, systems

About TechBlocks
TechBlocks is a global digital product engineering company with 16+ years of experience helping Fortune 500 enterprises and high-growth brands accelerate innovation, modernize technology, and drive digital transformation. From cloud solutions and data engineering to experience design and platform modernization, we help businesses solve complex challenges and unlock new growth opportunities. At TechBlocks, we believe technology is only as powerful as the people behind it. We foster a culture of collaboration, creativity, and continuous learning, where big ideas turn into real impact.
Whether you're building seamless digital experiences, optimizing enterprise platforms, or tackling complex integrations, you'll be part of a dynamic, fast-moving team that values innovation and ownership.
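The 30-day vulnerability-remediation SLA in the listing above implies deadline tracking over ingested findings. A minimal sketch of that bookkeeping, with the record fields (`id`, `status`, `detected`) invented for illustration rather than taken from any particular scanner:

```python
from datetime import date, timedelta

SLA_DAYS = 30  # remediation window stated in the listing

def overdue(findings, today):
    """Return IDs of open findings whose 30-day remediation window has passed."""
    return [
        f["id"] for f in findings
        if f["status"] == "open"
        and today > f["detected"] + timedelta(days=SLA_DAYS)
    ]

findings = [
    {"id": "CVE-A", "status": "open",  "detected": date(2024, 1, 1)},
    {"id": "CVE-B", "status": "open",  "detected": date(2024, 1, 25)},
    {"id": "CVE-C", "status": "fixed", "detected": date(2023, 12, 1)},
]
print(overdue(findings, date(2024, 2, 5)))  # → ['CVE-A']
```

In the platform described, a report like this would feed an Ansible remediation playbook rather than a print statement.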

Posted 6 days ago

Apply

15.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First. Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail – one of the world’s most competitive markets, with a deluge of multi-dimensional data – dunnhumby today enables businesses all over the world, across industries, to be Customer First. dunnhumby employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.

We are looking for a visionary Senior Engineering Manager – DevOps & Platform Engineering to lead the modernization and evolution of our cloud-native platform infrastructure. In this high-impact role, you will define the strategic direction, build a high-performing team, and deliver platform capabilities that improve developer productivity, system resilience, and cost efficiency. You’ll work at the intersection of DevOps, Developer Experience (DevEx), and Platform-as-a-Product, driving consistent standards, intelligent automation, and scalable self-service experiences across GCP and Azure.

Key Responsibilities
- Strategy & roadmap leadership: Define and own the platform engineering strategy, roadmap, and technical direction aligned with business and product goals.
- Platform & DevOps engineering: Lead the evolution of CI/CD pipelines with GitOps-first automation using ArgoCD, GitLab CI, Helm, and Terraform.
- Leadership & people management: Manage and mentor a globally distributed team of engineers and tech leads.
- Metrics, governance & automation: Lead the creation of reusable templates, governance policies, and automation blueprints to scale platform adoption across squads.
- Stakeholder & cross-functional collaboration: Serve as a key point of contact for Product, Security, Architecture, Data, and Delivery leadership from an engineering perspective.

What You Bring
- 15+ years of experience in engineering, with 7+ years in platform engineering/DevOps/SRE leadership roles
- Proven success leading large-scale platform transformations in cloud-native environments (preferably GCP and Azure)
- Hands-on and strategic experience with Kubernetes, CI/CD, GitOps, Terraform, Crossplane, Docker, Infrastructure-as-Code, and multi-tenant platform design
- Deep expertise in platform observability, developer self-service, golden paths, and IDPs such as Backstage
- Advanced understanding of DevSecOps, compliance automation, and security-by-design principles
- Strong communication, collaboration, and leadership skills across cross-functional teams
- Demonstrated success in defining and driving engineering OKRs, metrics-based decision making, and cost accountability
- Strong ability to balance technical depth with cross-functional influence, managing senior stakeholders and C-level engagement
- A builder’s mindset with a focus on automation, resilience, scalability, and simplification
- Good to have: Exposure to LLMs/GenAI in engineering workflows and intelligent automation

What You Can Expect From Us
We won’t just meet your expectations. We’ll defy them. So you’ll enjoy the comprehensive rewards package you’d expect from a leading technology company. But also, a degree of personal flexibility you might not expect. Plus, thoughtful perks, like flexible working hours and your birthday off. You’ll also benefit from an investment in cutting-edge technology that reflects our global ambition. But with a nimble, small-business feel that gives you the freedom to play, experiment and learn. And we don’t just talk about diversity and inclusion.
We live it every day – with thriving networks including dh Gender Equality Network, dh Proud, dh Family, dh One, dh Enabled and dh Thrive as the living proof. We want everyone to have the opportunity to shine and perform at your best throughout our recruitment process. Please let us know how we can make this process work best for you. Our approach to Flexible Working At dunnhumby, we value and respect difference and are committed to building an inclusive culture by creating an environment where you can balance a successful career with your commitments and interests outside of work. We believe that you will do your best at work if you have a work / life balance. Some roles lend themselves to flexible options more than others, so if this is important to you please raise this with your recruiter, as we are open to discussing agile working opportunities during the hiring process. For further information about how we collect and use your personal information please see our Privacy Notice which can be found (here)

Posted 6 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role Overview
We are looking for a hands-on DevOps Engineer with a strong focus on CI/CD pipeline creation and optimization, expertise in Jenkins and GitLab CI, and experience integrating Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication protocols. The role also involves deploying self-hosted Generative AI tools using Docker and Google Kubernetes Engine (GKE) and supporting scalable cloud infrastructure on AWS and GCP.

Required Skills
4–6 years of hands-on DevOps and CI/CD experience
Strong proficiency in Jenkins pipelines (declarative and scripted) and GitLab CI/CD pipeline creation and maintenance
Experience building and integrating MCP and A2A protocols in enterprise environments
Proficiency with Docker, container orchestration, and deployment automation
Strong working experience with Google Kubernetes Engine (GKE) and Kubernetes resource management
Familiarity with AWS and GCP services for hosting scalable applications
Experience with monitoring/logging tools (e.g., Prometheus, Grafana, ELK, CloudWatch)

Good To Have
Experience deploying or managing Generative AI models or platforms (e.g., private GPT, RAG pipelines)
Exposure to IaC tools such as Terraform, Helm, or Ansible
Understanding of DevSecOps principles, SAST/DAST integration, and policy automation
Familiarity with service mesh (Istio/Linkerd) and API gateways

What We Offer
Work at the intersection of AI, DevOps, and cloud automation
Hands-on exposure to self-hosted GenAI platforms and enterprise-scale CI/CD ecosystems
A fast-paced, collaborative environment with opportunities for innovation and ownership
Flexibility in work setup and access to continuous learning resources
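For context on the pipeline work this role describes, a GitLab CI/CD pipeline that builds a container image, tests it, and deploys it to GKE might look like the following minimal sketch. The stage names, images, and script commands are illustrative assumptions, not anything specified in the posting.

```yaml
# Illustrative .gitlab-ci.yml — build, test, and deploy to GKE.
# $CI_REGISTRY_IMAGE and $CI_COMMIT_SHORT_SHA are GitLab predefined
# variables; $GKE_CLUSTER and $GKE_REGION are hypothetical CI variables.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

deploy-gke:
  stage: deploy
  image: google/cloud-sdk:slim
  script:
    - gcloud container clusters get-credentials "$GKE_CLUSTER" --region "$GKE_REGION"
    - kubectl set image deployment/app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
  when: manual
```

The manual gate on the deploy job is a common pattern for production environments; fully automated promotion would drop `when: manual` and add verification steps instead.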

Posted 6 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for 8+ years of experience building DevOps frameworks for CI/CD processes on platforms such as Azure DevOps.

Knowledge of containerization and virtualization.
Hands-on experience with Docker and Kubernetes; building and maintaining Docker container clusters managed by Kubernetes on Azure cloud, using Linux, Bash, and Git.
Utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy.
Strong knowledge of Linux system internals with a good understanding of network protocols; excellent with shell commands.
Experience using and maintaining configuration automation systems such as Terraform and Ansible.
Experience with source code management tools and version control systems (preferably Git).
Ability to read and write complex Bash and PowerShell scripts to automate system deployment, troubleshooting, and maintenance.
Experience deploying solutions, patches, updates, and fixes.
Experience developing and deploying automation scripts.
Understanding of on-premises and cloud infrastructure; working knowledge of Azure as a cloud service provider, including key Azure services and network components.
Monitoring and administering applications and services running on any infrastructure; knowledge of observability concepts.
Must understand incident management and have incident support experience.
Must understand the change management process and the implementation of change.
Must be a learner who believes in upskilling and takes initiative.

Good to have: Coding knowledge in C# and other object-oriented languages.

Additional Skills:
Hands-on PowerShell expertise and other well-known scripting languages used in infrastructure automation.
Understanding of DevOps concepts; experience working in a Scrum-based environment.
Active Directory Domain Services experience, PKI and certificate management.
Demonstrable IT infrastructure knowledge across a wide range of technology towers, e.g. Windows Server 2019/2022, Windows 10 OS, Hyper-V, Azure, storage, network, and experience working with incidents in a mission-critical environment.
Windows and Linux exposure and familiarity.
Sound working knowledge and experience of 2nd/3rd line, O/S, infrastructure, storage, and technology platform support.
Knowledge of Azure HCI is good to have.
Server hardware management/administration (Dell Blades, VRTX Chassis preferable) and support.
Hands-on experience working with service management tools like ServiceNow.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
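The Terraform-on-Azure experience this listing asks for typically centers on declaring resources like the cluster the CI/CD workloads run on. The fragment below is a generic sketch; the provider version, resource names, region, and node sizing are all illustrative assumptions.

```hcl
# Illustrative Terraform sketch: an AKS cluster on Azure.
# All names, sizes, and the region are hypothetical examples.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.100"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "ci" {
  name     = "rg-cicd-demo"
  location = "westeurope"
}

resource "azurerm_kubernetes_cluster" "ci" {
  name                = "aks-cicd-demo"
  location            = azurerm_resource_group.ci.location
  resource_group_name = azurerm_resource_group.ci.name
  dns_prefix          = "cicddemo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

In practice a configuration like this lives in version control and is applied through a pipeline (`terraform plan` reviewed, then `terraform apply`), which is what "maintaining configuration automation systems" usually amounts to day to day.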

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves. As part of the Product Engineering organization, you will be working on enhancing and building cutting-edge tools and driving enhancements of cloud operations. We're seeking a seasoned senior engineer with a fervor for crafting high-performance service solutions that enable better cloud-software-driven operations and continuous deployment, to build the cloud technologies of the future and achieve solution parity across platforms. This is a pivotal role that requires a creative self-starter backed by innovative problem-solving. This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.
Key Responsibilities:
Design, implement, and automate large-scale distributed systems.
Build tools and automation to help Five9 achieve better availability, scalability, latency, and efficiency.
Work with engineering teams to deliver high-quality software in a fast-paced environment.
Monitor production and development environments and build preventive measures to provide a fail-safe experience for our customers.
Document and communicate clearly to architect and implement solutions.
Work closely across teams, bridging the gap between product managers and architects to implement reliable, consistent solutions.
Work with delivery teams on software improvements to achieve higher availability and lower MTTD.
Debug and support production issues.

Requirements:
3+ years of professional software development life cycle / DevOps / production operations experience
2+ years of systems support, debugging, and networking experience
2+ years of public cloud experience (GCP, AWS, or OpenStack)
Excellent problem-solving and analytical skills
Development and infrastructure as code using one or more languages: Terraform, Python, or other scripting languages
Orchestration and container management know-how with Kubernetes, Helm, GKE, and Docker
Config management tools such as Ansible, Puppet, or Chef
Strong knowledge of Git and CI/CD tools, e.g. GitLab
Incident management process, SRE best practices, and continuous improvement
Ability to prioritize tasks and work independently and collaboratively within and across teams
Strong verbal and written communication skills
BS in Computer Science or equivalent experience

Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer. View our privacy policy, including our privacy notice to California residents here: https://www.five9.com/pt-pt/legal.
Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
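The availability tooling the role above describes often starts with small building blocks like retry logic. The following is a generic Python sketch of retry with exponential backoff and jitter, not Five9 code; every name and parameter here is an illustrative assumption.

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.05, max_delay=1.0):
    """Call fn(); on exception, retry with exponential backoff plus jitter.

    Re-raises the last exception once max_attempts is exhausted. The
    parameter defaults are illustrative, not values from any posting.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff: 0.05s, 0.1s, 0.2s, ... capped at max_delay,
            # plus jitter so concurrent clients do not retry in lockstep.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay * 0.1))

# Usage: a flaky dependency that succeeds on its third call.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry_with_backoff(flaky)  # succeeds after two retries
```

Capping the delay and adding jitter are the standard refinements over naive retries; without them, synchronized retry storms can make an outage worse.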

Posted 6 days ago

Apply