Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and the corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile, high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work-intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation (a minimal pipeline-monitoring sketch follows below).
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
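A minimal sketch of the kind of pipeline automation this role supports: triggering an Azure Data Factory pipeline run and polling its status with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, and pipeline names are placeholders, not details from the posting.

```python
# Minimal sketch: trigger an ADF pipeline run and poll until it finishes.
# Assumes azure-identity and azure-mgmt-datafactory are installed; all
# resource names below are hypothetical placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
RESOURCE_GROUP = "rg-dataops"          # hypothetical
FACTORY = "adf-enterprise"             # hypothetical
PIPELINE = "daily_ingestion"           # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline and capture the run id.
run = client.pipelines.create_run(RESOURCE_GROUP, FACTORY, PIPELINE)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE} finished with status: {status}")
```

In practice a check like this would feed an observability or alerting framework rather than print to stdout.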
Posted 1 day ago
8.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
TCS Hiring for Azure Cloud Architect (Platform)_PAN India
Experience: 8 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set:
- Proven experience as a Solution Architect with a focus on Microsoft Azure.
- Good knowledge of application development and migration.
- Knowledge of Java or .NET.
- Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps.
- Experience in cloud-native application development and containerization (Docker, Kubernetes).
- Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep).
- Strong knowledge of Azure Active Directory, identity management, and security best practices.
- Hands-on experience with CI/CD processes and DevOps practices.
- Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls).
- Excellent communication and stakeholder management skills.

Key Responsibilities:
- Design end-to-end cloud solutions leveraging Microsoft Azure services.
- Develop architecture and solution blueprints that align with business objectives.
- Lead cloud adoption and migration strategies.
- Collaborate with development, operations, and security teams to implement best practices.
- Ensure solutions meet performance, scalability, availability, and security requirements.
- Optimize cloud cost and performance.
- Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services.
- Implement CI/CD pipelines, automation, and infrastructure as code (IaC).
- Stay updated on emerging Azure technologies and provide recommendations.

Kind Regards,
Priyankha M
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Position: Fullstack Developer (AI + React)
Experience: 8+ years
Work Mode: Remote
Shift Timings: 8 am - 5 pm
Notice Period: Immediate
Experience with AI/ML: 3+ years

Tech Stack
React, Next.js, TypeScript, FastAPI, Python, PostgreSQL, MongoDB, GPT-4, LangChain, Terraform, AWS

Must-Haves
- Expert-level proficiency in React, TypeScript, and Next.js, including SSR and SSG.
- Strong backend experience using Python and FastAPI, with a focus on API design, database modeling (PostgreSQL, MongoDB), and secure authentication protocols such as JWT and OAuth2 (a minimal FastAPI sketch follows below).
- Hands-on experience with prompt engineering and deploying large language models (LLMs) such as GPT-4, LLaMA, or open-source equivalents.
- Familiarity with ML model serving frameworks (e.g., TorchServe, BentoML) and container orchestration tools (Docker, Kubernetes).
- In-depth knowledge of the AI/ML development lifecycle, from data preprocessing to model monitoring and retraining.
- Strong understanding of Agile/Scrum, including writing user stories, defining acceptance criteria, and conducting code reviews.
- Proven ability to translate financial domain requirements (accounting, bookkeeping, reporting) into scalable product features.
- Experience integrating with financial services APIs (e.g., accounting platforms, payment gateways, banking feeds) is a plus.
- Excellent written and verbal communication skills in English, with the ability to explain complex ideas to non-technical audiences.

Skills: next.js, api design, container orchestration tools, financial services api integration, agile, scrum, communication skills, postgresql, prompt engineering, terraform, mongodb, gpt-4, ai/ml, react, aws, ml, ml model serving frameworks, database modeling, typescript, python, fastapi, secure authentication techniques, langchain
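A minimal sketch, assuming FastAPI and PyJWT, of the JWT-protected API pattern the must-haves describe. The secret key, route, and payload shape are illustrative placeholders, not details from the posting.

```python
# Minimal sketch of a FastAPI endpoint guarded by a JWT bearer token.
# Assumes the fastapi and pyjwt packages; SECRET_KEY and the route are
# hypothetical placeholders.
import jwt
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"  # placeholder; load from a secret store in practice
app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

def current_user(token: str = Depends(oauth2_scheme)) -> str:
    # Reject requests whose token is missing, expired, or tampered with.
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid token")
    return payload["sub"]

@app.get("/reports/summary")
def report_summary(user: str = Depends(current_user)) -> dict:
    # A real handler would query PostgreSQL/MongoDB here.
    return {"user": user, "summary": "ok"}
```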
Posted 1 day ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 10+ years of professional experience in Java development.
- 3+ years of experience designing and developing solutions in AWS cloud environments.
- Strong expertise in Java 8+, Spring Boot, RESTful API design, and microservices architecture.
- Hands-on experience with key AWS services: Lambda, API Gateway, S3, RDS, DynamoDB, ECS, SNS/SQS, CloudWatch (an illustrative messaging sketch follows below).
- Solid understanding of infrastructure-as-code (IaC) tools like Terraform, AWS CloudFormation, or CDK.
- Experience with Agile/Scrum, version control (Git), and CI/CD pipelines.
- Strong communication and leadership skills, including leading distributed development teams.

Responsibilities:
- Lead end-to-end technical delivery of cloud-native applications built on Java and AWS.
- Design and architect secure, scalable, and resilient systems using microservices and serverless patterns.
- Guide the team in implementing solutions using Java (Spring Boot, REST APIs) and AWS services (e.g., Lambda, API Gateway, DynamoDB, S3, ECS, RDS, SNS/SQS).
- Participate in code reviews, ensure high code quality, and enforce clean architecture and design principles.
- Collaborate with DevOps engineers to define CI/CD pipelines using tools such as Jenkins, GitLab, or AWS CodePipeline.
- Mentor and coach developers on both technical skills and Agile best practices.
- Translate business and technical requirements into system designs and implementation plans.
- Ensure performance tuning, scalability, monitoring, and observability of deployed services.
- Stay current with new AWS offerings and Java development trends to drive innovation.
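An illustrative sketch of the SQS messaging pattern listed among the AWS services. The role itself is Java-centric; Python/boto3 is used here only for brevity, and the queue URL and message body are placeholders.

```python
# Minimal sketch, assuming boto3, of producing and consuming an SQS message.
# The queue URL and payload are hypothetical placeholders.
import json

import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/orders"  # placeholder

# Producer: enqueue an event.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"orderId": 42}))

# Consumer: long-poll, process, then delete so the message is not redelivered.
resp = sqs.receive_message(
    QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing", json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```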
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking an experienced DevOps Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining secure cloud infrastructure using cloud-based technologies, including Oracle and Microsoft platforms. You will build and support scalable, reliable application systems and automate deployments. Additionally, you will integrate various systems and technologies using REST APIs and automate the software development and deployment lifecycle. Leveraging automation and monitoring tools, along with AI-powered solutions, you will ensure the smooth operation of our cloud-based systems.

Key Areas of Responsibility
- Implement automation to control and orchestrate cloud workloads, managing the build and deployment cycles for each deployed solution via CI/CD.
- Utilize a wide variety of cloud-based services, including containers, App Services, APIs, and SaaS-oriented integration.
- Use GitHub and CI/CD tools (e.g., Jenkins, GitHub Actions, Maven/ANT).
- Create and maintain build and deployment configurations using Helm and YAML.
- Manage the software change control process, including quality control and SCM audits, enforcing adherence to all change control and code management processes.
- Continuously manage and maintain releases, with a clear understanding of the release management process.
- Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud-based solutions.
- Apply problem-solving, teamwork, and communication skills, reflecting the collaborative nature of the role.
- Perform builds and environment configurations.

Required Skills and Experience
- 5+ years of overall experience; expertise in automating the software development and deployment lifecycle using Jenkins, GitHub Actions, SAST, DAST, compliance tooling, and Oracle ERP DevOps tools.
- Proficient with Unix shell scripting, SQL*Plus, PL/SQL, and Oracle database objects.
- Understanding of branching models is important.
- Experience in creating cloud resources using automation tools.
- Strong hands-on experience with Terraform and Azure Infrastructure as Code (IaC).
- Hands-on experience in GitOps, Flux CD/Argo CD, Jenkins, and Groovy.
- Building and deploying Java and .NET applications, and Liquibase database deployments.
- Proficient with Azure cloud concepts: creating Azure Container Apps, Kubernetes, load balancers, Az CLI, kubectl, observability, APM, and application performance reviews (a short deployment-health sketch follows below).
- Azure AZ-104 or AZ-400 certification is a plus.
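A minimal sketch, assuming the official `kubernetes` Python client, of the kind of deployment-health check this role automates alongside kubectl and observability tooling. The namespace and deployment names are placeholders.

```python
# Minimal sketch: verify a Kubernetes deployment has all replicas ready.
# Assumes the `kubernetes` Python client; names are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment(name="payments-api", namespace="prod")
ready = dep.status.ready_replicas or 0
desired = dep.spec.replicas or 0
print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")
if ready < desired:
    raise SystemExit("Deployment not fully rolled out")
```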
Posted 1 day ago
1.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Site Reliability Engineering (SRE) Intern – Azure Focus
Location: Viman Nagar, Pune
Duration: 1 Year (internship with potential for extension or a full-time opportunity)

About the Role:
We are looking for enthusiastic and motivated SRE interns/freshers with a keen interest in cloud computing, DevOps practices, and the Azure platform. This internship offers hands-on experience in cloud infrastructure, automation, and system reliability engineering, giving you exposure to real-world environments and tools used in production systems.

Key Responsibilities:
• Support setup and management of Azure cloud resources (VMs, storage, networking)
• Assist in monitoring and troubleshooting infrastructure and application issues (a small health-check sketch follows below)
• Work with version control systems like Git and participate in CI/CD processes
• Contribute to automation scripts and infrastructure as code using tools like Terraform or Bicep
• Collaborate with mentors and team members to understand system reliability and operational practices
• Document procedures, issues, and solutions clearly

Ideal Candidate Profile:
✅ Technical Skills:
• Understanding of operating systems (Linux or Windows)
• Basic networking knowledge (DNS, HTTP, TCP/IP)
• Familiarity with cloud computing concepts (IaaS, PaaS, SaaS)
• Exposure to Azure services like Virtual Machines, Resource Groups, Azure Storage, etc.
• Basic command-line and scripting experience (Bash or PowerShell)
• Familiarity with Git and basic CI/CD concepts
✅ Preferred (Nice to Have):
• Exposure to Azure CLI, Azure DevOps pipelines, or ARM templates
• Basic knowledge of monitoring tools like Azure Monitor or Log Analytics
• Hands-on experience with infrastructure-as-code tools (Terraform, Bicep)
✅ Soft Skills:
• Strong problem-solving and analytical skills
• Effective written and verbal communication
• Willingness to learn, take initiative, and adapt in a fast-paced environment
• Team-oriented with a collaborative mindset

Educational Qualification:
• Recently completed a degree in Computer Science, Information Technology, or a related field
• Certifications like Microsoft Azure Fundamentals (AZ-900) are a plus

What You'll Gain:
• Real-world experience working with cloud infrastructure and DevOps tools
• Exposure to Site Reliability Engineering practices
• Mentorship from experienced professionals
• Opportunity to work on a capstone project
• A potential pathway to full-time employment based on performance
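A small sketch of the monitoring-and-scripting work described above: probe an HTTP endpoint, retry on failure, and log the outcome. The posting names Bash/PowerShell; Python is used here purely for illustration, and the URL is a placeholder.

```python
# Minimal health-check sketch: GET an endpoint with retries and logging.
# Assumes the requests package; the URL is a hypothetical placeholder.
import logging
import time

import requests

logging.basicConfig(level=logging.INFO)
URL = "https://example.internal/healthz"  # placeholder endpoint

def probe(url: str, retries: int = 3, delay: float = 5.0) -> bool:
    for attempt in range(1, retries + 1):
        try:
            r = requests.get(url, timeout=5)
            if r.status_code == 200:
                logging.info("attempt %d: healthy", attempt)
                return True
            logging.warning("attempt %d: HTTP %d", attempt, r.status_code)
        except requests.RequestException as exc:
            logging.warning("attempt %d: %s", attempt, exc)
        time.sleep(delay)
    return False

if not probe(URL):
    logging.error("service unhealthy after retries")
```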
Posted 1 day ago
0 Lacs
Pune, Maharashtra, India
On-site
Full-time

Company Description
GfK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands, and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”.

Job Description
It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate, and measure their media activities.

Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making.

Role Overview
We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions.

As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms.

Key Responsibilities

Technical Leadership & Architecture
- Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science.
- Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability.
- Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture.
- Work closely with Product Owners to translate business needs into technical solutions.

Hands-on Development & Technical Excellence
- Lead by example through high-quality coding, code reviews, and proof-of-concept development.
- Solve complex engineering problems and contribute to critical design decisions.
- Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR.
- Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment.

Line Management & Team Development
- Provide line management to engineers, ensuring their professional growth and development.
- Conduct performance reviews, set development goals, and mentor team members to enhance their skills.
- Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries.
- Support hiring, onboarding, and career development initiatives within the engineering team.

Collaboration & Cross-Team Coordination
- Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components.
- Provide mentorship and guidance to developers, helping them level up their skills and technical understanding.
- Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code.
- Engage with stakeholders across the business, translating technical concepts into business-relevant insights.

Governance, Security & Data Best Practices
- Champion data governance, lineage, and security across the platform.
- Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines.
- Ensure compliance with industry standards, internal policies, and regulatory requirements.

Qualifications

Requirements & Experience
- Strong software engineering background, with experience designing and building production-grade applications in Python, Scala, Java, or similar languages.
- Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR.
- Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines.
- Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions (a minimal Airflow sketch follows below).
- Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation).
- Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing.
- Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices.
- Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders.
- Proven line management experience, including mentoring, career development, and performance management of engineering teams.

Additional Information

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
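A minimal sketch, assuming Apache Airflow 2.x (2.4+ for the `schedule` argument), of the workflow orchestration named in the requirements: a daily DAG with two dependent tasks. The DAG id and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch: two dependent Python tasks on a daily schedule.
# Assumes Apache Airflow 2.4+; the DAG id and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from S3")  # placeholder task body

def transform():
    print("run Spark/Glue transformation")  # placeholder task body

with DAG(
    dag_id="recombine_daily",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after extract succeeds
```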
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 day ago
5.0 - 10.0 years
20 - 27 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
We're Hiring | Platform Engineer @ Xebia
Locations: Bangalore | Bhopal | Chennai | Gurgaon | Hyderabad | Jaipur | Pune
Immediate Joiners (0-15 Days Notice Period Only)
Valid Passport is Mandatory

Xebia is on the lookout for passionate Platform Engineers with a strong mix of Azure Infrastructure as Code (IaC) with Terraform and Data Engineering expertise to join our Cloud Data Platform team.

What You'll Do:
- Design & deploy scalable Azure infrastructure using Terraform
- Build & optimize ETL/ELT pipelines using Azure Data Factory, Databricks, Event Hubs
- Automate infra provisioning and enforce security/governance via IaC
- Support CI/CD workflows with Git, Azure DevOps
- Work with VNETs, Key Vaults, Storage Accounts, Monitoring Tools
- Use Python, SQL, Spark for data transformation & processing (a PySpark sketch follows below)

What We're Looking For:
- Hands-on experience in Azure IaC + Data Engineering
- Strong in scripting, automation, & monitoring
- Familiarity with real-time & batch processing
- Azure certifications (Data Engineer / DevOps) are a plus
- Must have a valid passport

Interested? Send your CV along with the following details to: vijay.s@xebia.com

Required Details:
- Full Name
- Total Experience
- Current CTC
- Expected CTC
- Current Location
- Preferred Location
- Notice Period / Last Working Day (if serving notice)
- Primary Skill Set
- LinkedIn Profile URL
- Do you have a valid passport? (Yes/No)

Please apply only if you haven't applied recently or aren't already in the process with any open Xebia roles.

Let’s build the future of cloud-native data platforms together!
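A minimal PySpark sketch of the Python/SQL/Spark transformation work listed above: read raw events, aggregate daily counts, and write the result. The storage paths and column names are placeholders.

```python
# Minimal PySpark sketch: daily aggregation of raw events.
# Assumes pyspark; paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_daily").getOrCreate()

# Placeholder ADLS Gen2 paths.
events = spark.read.parquet("abfss://raw@datalake.dfs.core.windows.net/events/")

daily = (
    events
    .withColumn("day", F.to_date("event_ts"))   # assumes an event_ts column
    .groupBy("day", "event_type")
    .count()
)

daily.write.mode("overwrite").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/events_daily/"
)
```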
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ businesses with WhatsApp engagement & marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact, as businesses drive 25-80% of revenues using the AiSensy platform
- Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors

Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀

What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime (a CloudWatch alarm sketch follows below). Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging.

What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
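A minimal sketch, assuming boto3, of the CloudWatch monitoring responsibility above: create an alarm when average EC2 CPU stays above 80%. The alarm name, instance id, and SNS topic ARN are placeholders.

```python
# Minimal sketch: a CloudWatch alarm on EC2 CPU utilization via boto3.
# All names and ARNs are hypothetical placeholders.
import boto3

cw = boto3.client("cloudwatch", region_name="ap-south-1")

cw.put_metric_alarm(
    AlarmName="api-high-cpu",  # hypothetical
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                # 5-minute windows
    EvaluationPeriods=3,       # 3 consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder
)
```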
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
A Snapshot of Your Day
As a Senior DevOps Engineer in our Industrial IoT team, you'll be at the intersection of cloud innovation and operational technology. Your day involves architecting and maintaining the AWS infrastructure that powers our IoT edge devices deployed across manufacturing facilities worldwide. You'll develop and optimize complex CI/CD pipelines that enable seamless deployment from cloud to edge, troubleshoot complex infrastructure issues, and collaborate with cross-functional teams to enhance our Industrial IoT capabilities. Your expertise will ensure our Linux-based edge devices receive secure updates, maintain reliable connections to the AWS cloud, and operate efficiently in industrial environments. We offer flexible work hours and the choice of working from office or home. Join us on this exciting journey and play an important role in defining the future of Industrial IoT at Siemens Energy.

What You Bring
- Engineering graduate with 5+ years of DevOps experience
- Python skills with object-oriented programming concepts
- Experience with AWS core services (IAM, Lambda, S3, API, Systems Manager, etc.)
- Strong infrastructure-as-code experience (CloudFormation, CDK, or Terraform required)
- Experience building complex CI/CD pipelines and Git version control proficiency
- Linux skills including system administration, hardening, and shell scripting
- Experience in technical support, debugging, and release management
- DevOps culture and agile/Scrum methodology knowledge
- Must provide own code samples via GitLab / GitHub
- Industrial IoT and manufacturing exposure a plus
- Fluency in English

How You’ll Make An Impact
- Develop and optimize complex GitLab CI/CD pipelines
- Design and maintain AWS infrastructure using CloudFormation/CDK (a minimal CDK sketch follows below)
- Collaborate with internal and external development teams
- Lead troubleshooting for complex infrastructure issues
- Ensure compliance with security standards and best practices
- Support internal developers and manage production releases
- Monitor and improve security and performance
- Organize production releases

About The Team
You will be part of a dedicated team focused on Industrial IoT for Siemens Energy internal operations. This team builds and manages the AWS cloud infrastructure powering our global OT services and edge devices. We operate in an agile environment, balancing technical innovation with the practical demands of industrial systems. Our work directly impacts manufacturing efficiency and operational reliability across Siemens Energy facilities worldwide.

Who is Siemens Energy?
At Siemens Energy, we are more than just an energy technology company. We meet the growing energy demand across 90+ countries while ensuring our climate is protected. With ~100,000 dedicated employees, we not only generate electricity for over 16% of the global community, but we’re also using our technology to help protect people and the environment. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity
Lucky for us, we are not all the same. Through diversity, we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities.
Siemens Energy celebrates character – no matter what ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits
- All employees are automatically covered under the medical insurance: a company-paid family floater cover for the employee, spouse, and two dependent children up to 25 years of age.
- Siemens Energy provides an option to opt for a Meal Card, per the terms and conditions prescribed in the company policy, as a tax-saving measure within CTC.
- Flexi Pay empowers employees with the choice to customize the amount in some of the salary components within a defined range, thereby optimizing tax benefits. Accordingly, each employee is empowered to decide on the best possible net income out of the same fixed individual base pay on a monthly basis.

https://jobs.siemens-energy.com/jobs
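A minimal sketch, assuming AWS CDK v2 for Python (aws-cdk-lib), of the infrastructure-as-code work referenced above: an encrypted, versioned S3 bucket such as one might use for edge-device artifacts. The stack and bucket names are placeholders.

```python
# Minimal AWS CDK v2 sketch: a hardened S3 bucket in a small stack.
# Assumes aws-cdk-lib and constructs; all names are hypothetical placeholders.
from aws_cdk import App, Stack, aws_s3 as s3
from constructs import Construct

class EdgeArtifactsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "EdgeArtifacts",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = App()
EdgeArtifactsStack(app, "edge-artifacts")  # hypothetical stack name
app.synth()
```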
Posted 1 day ago
5.0 - 8.0 years
15 - 27 Lacs
Hyderabad
Hybrid
Warm Greetings from SP Staffing!!
Role: Azure DevOps Engineer
Experience Required: 5 to 8 years
Work Location: Hyderabad
Required Skills:
- Azure DevOps
- Terraform
- Bash/PowerShell/Python
Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 day ago
5.0 - 10.0 years
15 - 22 Lacs
Pune, Chennai, Bengaluru
Hybrid
- 5-8 years of experience in backend development with a strong focus on Python.
- Develop and maintain serverless applications using AWS Lambda functions, DynamoDB, and other AWS services (a minimal Lambda sketch follows below).
- Hands-on experience with Terraform.
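A minimal sketch of the serverless pattern described above: an AWS Lambda handler that persists an item to DynamoDB via boto3. The table name and item shape are placeholders; in practice the table would be provisioned by Terraform and passed via configuration.

```python
# Minimal Lambda handler sketch: write an incoming order to DynamoDB.
# Assumes boto3 (bundled in the Lambda runtime); names are placeholders.
import json
import os

import boto3

TABLE = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "orders"))

def handler(event, context):
    # Assumes an API Gateway proxy event with a JSON body.
    body = json.loads(event.get("body", "{}"))
    TABLE.put_item(Item={"pk": body["order_id"], "status": "received"})
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```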
Posted 1 day ago
6.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
TCS Hiring for Azure Cloud Engineer_PAN India
Experience: 6 to 12 Years Only
Job Location: PAN India

Required Technical Skill Set: Azure Cloud Engineer with Azure VMs, Blob Storage, Azure SQL, Azure Functions, AKS, etc.

Desired Competencies (Technical/Behavioral Competency)
Must-Have
• Design and deploy scalable, highly available, and fault-tolerant systems on Azure. Proven experience with Microsoft Azure services (Compute, Storage, Networking, Security).
• Strong understanding of networking concepts (DNS, VPN, VNet, NSG, Load Balancers).
• Manage and monitor cloud infrastructure using Azure Monitor, Log Analytics, and other tools (a short VM-inventory sketch follows below).
• Implement and manage virtual networks, storage accounts, and Azure Active Directory.
• Hands-on experience with Infrastructure as Code (IaC) tools like ARM and Terraform. Experience with scripting languages (PowerShell, Bash, or Python).
• Ensure security best practices and compliance standards are followed.
• Troubleshoot and resolve issues related to cloud infrastructure and services.
• Experience in DevOps to support CI/CD pipelines and containerized applications (AKS, Docker).
• Optimize cloud costs and performance.
• Familiarity with Azure DevOps, GitHub Actions, or other CI/CD tools.
• Experience in identity and access management (IAM), RBAC, and Azure AD.

Good-to-Have
• Basic knowledge of Red Hat Linux and Windows operating systems.
• Proficiency with the Azure portal, the Azure CLI, and APIs.
• Experience in migration using Azure migration tools.
• Hands-on experience with DevOps tools like Jenkins and Git will be an added advantage.

Role descriptions / Expectations from the Role
1. Ability to understand and articulate the different functions within Azure and implement appropriate solutions, with HLD and LLD around them.
2. Ability to identify and gather requirements to define a solution to be built and operated on Azure; perform high-level and low-level design for the Azure platform.
3. Capability to provide Azure operations and deployment guidance and best practices throughout the lifecycle of a project.
4. Understanding of the significance of the different monitoring metrics and their threshold values, with the ability to take necessary corrective measures based on those thresholds.
5. Knowledge of automation to reduce the number of incidents or repetitive incidents is preferred.
6. The Azure Engineer will be responsible for provisioning the services as per the design.

Kind Regards,
Priyankha M
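A short sketch, assuming azure-identity and azure-mgmt-compute, of a routine inventory task for this role: listing every VM in a subscription. The subscription id is a placeholder.

```python
# Minimal sketch: enumerate all Azure VMs in a subscription.
# Assumes azure-identity and azure-mgmt-compute; the id is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for vm in client.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```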
Posted 1 day ago
8.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We’re reinventing the market research industry. Let’s reinvent it together.

At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it. Numerator provides unparalleled consumer insights at a massive scale. Our technology harnesses data through the application of gamified mobile apps and sophisticated web-crawling technology to deliver an unmatched view of the consumer shopping and purchase experience.

Numerator is looking for a passionate Senior Engineer to join our Receipt Processing team. As part of this team, you will be responsible for our receipt transcription system, which has processed over a billion receipts and adds millions every week. This is a high-growth, impactful role that will give you plenty of opportunity to drive decisions for projects from inception through production. If you are seeking an environment where you get to do meaningful work with other great engineers, then we want to hear from you!

What You'll Bring to Numerator

What You’ll Get To Do
- Help create the design, architecture, and execution of everything from backend APIs to data processing and databases.
- Make decisions about code design, architecture, and refactoring to balance technical debt against delivering functionality.
- Work with stakeholders to identify project risks and recommend mitigating solutions.
- Collaborate with our cross-functional team to build powerful and easy-to-use products.
- Contribute to architectural designs and decisions to improve the availability of the system.
- Maintain the system in general, including on-call bug fixing for mission-critical issues.

Example Projects
- Build out and expand the framework for the rules-engine transcription of our receipts to leverage the inherent structure and spacing of the tabular data in a receipt.
- Build out a data QA process to approve the output of both our machine learning algorithms and our hundreds of data associates attributing products.
- Refactor our backend to optimize for scale as the number of receipts we need to process continues to grow.

Our Tech Stack
- Web: HTML, JavaScript, CSS, Angular
- Backend: Python, Django, Aurora MySQL, Redis
- Distributed Computing: Celery, Airflow, Azkaban, RabbitMQ (a short Celery sketch follows below)
- Data Warehouse: Snowflake
- Infrastructure: AWS EC2, Kubernetes, Docker, Helm, Terraform

Requirements
- 8+ years of experience in a backend role.
- Programming experience in Python or another object-oriented language.
- An eagerness to learn new things and improve upon existing skills, abilities, and practices.
- Familiarity with web technology, such as HTTP, JSON, HTML, and JavaScript UIs.
- Experience with databases, SQL or NoSQL.
- Knowledge of working in an Agile software development environment.
- Experience with version control systems (Git, Subversion, etc.).
- A real passion for clean code and finding elegant solutions to problems.
- Knowledge and abilities in Python and cloud-based technologies.
- Motivation to participate in ongoing learning and growth through pair programming, test-driven development, code reviews, and the application of new technologies and best practices.
- You look ahead to identify opportunities and foster a culture of innovation.
- B.E./B.Tech in Computer Science or a related field, or equivalent work experience.
- Knowledge of Kubernetes and Docker development.

Nice to Haves
- Previous experience leading an engineering team.
- Experience in UI frameworks (React, Angular).
- Experience with REST services and API design.
- Knowledge of TCP/IP sockets.
- Programming experience on Unix-based infrastructure.
- Knowledge of cloud-based systems (EC2, Rackspace, etc.).
- Expertise with big data, analytics, and personalization.
- Start-up or CPG industry experience.
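A short sketch, assuming Celery with the RabbitMQ broker that appears in the stack above, of how a receipt-processing job might be queued and retried. The broker URL and task body are placeholders.

```python
# Minimal Celery sketch: an async receipt-processing task with retries.
# Assumes celery and a RabbitMQ broker; the URL and body are placeholders.
from celery import Celery

app = Celery("receipts", broker="amqp://guest@localhost//")  # placeholder broker

@app.task(bind=True, max_retries=3)
def transcribe_receipt(self, receipt_id: int) -> str:
    try:
        # A real task would fetch the image and run the rules engine / ML model.
        return f"receipt {receipt_id} transcribed"
    except Exception as exc:
        # Back off and retry up to max_retries before failing for good.
        raise self.retry(exc=exc, countdown=60)

# Producer side: enqueue asynchronously, e.g.
# transcribe_receipt.delay(12345)
```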
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description
CS Optima offers cloud-native solutions and partners with customer teams to build and operate applications efficiently in the cloud. We focus on areas like new cloud-based solution development, application modernization, and building scalable platforms with AWS serverless, AI/ML, Gen AI, etc.

Role Description
This is a full-time, on-site role located in Chennai for a Senior Python Programmer with AWS cloud experience. The Senior Programmer will be responsible for back-end web development, software development, programming, and object-oriented programming (OOP).

Qualifications
- Python-based API development experience
- 4-8 years of total experience
- Programming and object-oriented programming (OOP) skills
- 1-2 years of experience with AWS services and related tooling such as Terraform, Lambda, AppSync, and DynamoDB
- Excellent problem-solving abilities
- Strong communication and collaboration skills
- Experience working on enterprise-level application development
- Bachelor's degree in Computer Science or a related field
- Agile development experience
Posted 1 day ago
8.0 - 15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
TCS Hiring for Azure Cloud Architect (Platform)_PAN India
Experience: 8 to 15 Years Only
Job Location: PAN India

Required Technical Skill Set:
- Proven experience as a Solution Architect with a focus on Microsoft Azure.
- Good knowledge of application development and migration.
- Knowledge of Java or .NET.
- Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps.
- Experience in cloud-native application development and containerization (Docker, Kubernetes).
- Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep).
- Strong knowledge of Azure Active Directory, identity management, and security best practices.
- Hands-on experience with CI/CD processes and DevOps practices.
- Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls).
- Excellent communication and stakeholder management skills.

Key Responsibilities:
- Design end-to-end cloud solutions leveraging Microsoft Azure services.
- Develop architecture and solution blueprints that align with business objectives.
- Lead cloud adoption and migration strategies.
- Collaborate with development, operations, and security teams to implement best practices.
- Ensure solutions meet performance, scalability, availability, and security requirements.
- Optimize cloud cost and performance.
- Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services.
- Implement CI/CD pipelines, automation, and infrastructure as code (IaC).
- Stay updated on emerging Azure technologies and provide recommendations.

Kind Regards,
Priyankha M
Posted 1 day ago
3.0 - 4.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Title - DevOps Engineer
Location - Surat (On-site)
Experience - 3-4 years

Job Summary:
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we’d like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.

Roles & Responsibilities:
- Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
- Hands-on experience with Azure cloud services, including: Virtual Machines (VMs); Blob Storage; Virtual Network (VNet); Load Balancer & Application Gateway; Azure Resource Manager (ARM); Azure Key Vault; Azure Functions; Azure Kubernetes Service (AKS); Azure Monitor, Log Analytics, and Application Insights; Azure Container Registry (ACR) and Azure Container Instances (ACI); Azure Active Directory (AAD) and RBAC.
- Skilled in automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
- Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
- Develop automation and infrastructure-as-code (IaC) using Terraform, ARM templates, or Bicep for managing and provisioning cloud resources.
- Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
- Proficient in setting up monitoring, logging, and alerting systems using Azure-native tools and integrating with third-party observability stacks.
- Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
- Configure and maintain CI/CD pipelines and integrate with quality and security tools for automated testing, compliance, and secure deployments.
- Deep knowledge of writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
- Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
- Proficient in using Maven and Artifactory for build management and writing pom.xml scripts for Java-based applications.
- Skilled in GitHub repository management, including setting up project-specific access, enforcing code-quality standards, and managing pull requests.
- Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
- Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.

Qualifications & Requirements:
- Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
- Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
- Proficiency in Python or Ruby for scripting and automation.
- Working knowledge of SQL and database management tools.
- Strong analytical and problem-solving skills with a collaborative and proactive mindset.
- Familiarity with Agile methodologies and the ability to work in cross-functional teams.
Posted 1 day ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
We're seeking an experienced Infrastructure Engineer to join our platform team, handling massive-scale data processing and analytics infrastructure that supports 5B+ events per day and 5M+ DAU. We’re looking for someone who can help us scale gracefully while optimizing for performance, cost, and resiliency.

Key Responsibilities
- Design, implement, and manage our AWS infrastructure, with a strong emphasis on automation, resiliency, and cost-efficiency.
- Develop and oversee scalable data pipelines (for event processing, transformation, and delivery).
- Implement and manage stream processing frameworks such as Kinesis, Kafka, or MSK (a Kinesis producer sketch follows below).
- Handle orchestration and ETL workloads, employing services like AWS Glue, Athena, Databricks, Redshift, or Apache Airflow.
- Implement robust network, storage, and backup strategies for growing workloads.
- Monitor, debug, and resolve production issues related to data and infrastructure in real time.
- Implement IAM controls, logging, alerts, and security best practices across all components.
- Provide deployment automation (Docker, Terraform, CloudFormation) and collaborate with application engineers to enable smooth delivery.
- Build SOPs for support and set up a functioning 24x7 support system (including hiring the right engineers) to ensure system uptime and availability.

Required Technical Skills
- 5+ years of experience with AWS services (VPC, EC2, S3, Security Groups, RDS, Kinesis, MSK, Redshift, Glue).
- Experience designing and managing large-scale data pipelines with high-throughput workloads.
- Ability to handle workloads of 5 billion events/day and 1M+ concurrent users gracefully.
- Familiarity with scripting (Python, Terraform) and automation practices (Infrastructure as Code).
- Familiarity with network fundamentals, Linux, scaling strategies, and backup routines.
- Collaborative team player, able to work with engineers, data analysts, and stakeholders.

Preferred Tools & Technologies
- AWS: EC2, S3, VPC, Security Groups, RDS, Redshift, DocumentDB, MSK, Glue, Athena, CloudWatch
- Infrastructure as Code: Terraform, CloudFormation
- Scripted automation: Python, Bash
- Container orchestration: Docker, ECS or EKS
- Workflow orchestration: Apache Airflow, Dagster
- Streaming frameworks: Apache Kafka, Kinesis, Flink
- Other: Linux, Git, security best practices (IAM, Security Groups, ACM)

Education
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Relevant professional certifications in cloud platforms or data technologies

Why Join Us?
- Opportunity to work in a fast-growing audio and content platform.
- Exposure to multi-language marketing and global user-base strategies.
- A collaborative work environment with a data-driven and innovative approach.
- Competitive salary and growth opportunities in marketing and growth strategy.

Success Metrics
✅ Scalability: Ability to handle 1+ billion events/day with low latency and high resiliency.
✅ Cost-efficiency: Reduction in AWS operational costs by optimizing services, storage, and data transfer.
✅ Uptime/SLI: Achieve 99.9999% platform and pipeline uptime with automated fallback mechanisms.
✅ Data delivery latency: Reduce event delivery latency to under 5 minutes for real-time processing.
✅ Security and compliance: Implement controls to pass PCI-DSS or SOC 2 audits with zero major findings.
✅ Developer productivity: Improve team delivery speed through self-service IaC modules and automated routines.
About KUKU
Founded in 2018, KUKU is India’s leading storytelling platform, offering a vast digital library of audio stories, short courses, and microdramas. KUKU aims to be India’s largest cultural exporter of stories, culture, and history to the world, with a firm belief in “Create In India, Create For The World”. We deliver immersive entertainment and education through our OTT platforms: Kuku FM, Guru, Kuku TV, and more. With a mission to provide high-quality, personalized stories across genres and in multiple formats and languages, KUKU continues to push boundaries and redefine India’s entertainment industry.
🌐 Website: www.kukufm.com
📱 Android App: Google Play
📱 iOS App: App Store
🔗 LinkedIn: KUKU
📢 Ready to make an impact? Apply now

Skills: aws services, bash, networking, kafka, data pipeline, docker, kinesis, data pipelines, etl, terraform, automation, aws, security, ec2, cloudformation, cloud, scripting, linux, infrastructure, amazon redshift, python, vpc, network fundamentals, workflow orchestration, stream processing frameworks, container orchestration, dagster, airflow, s3
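A minimal sketch, assuming boto3, of the stream ingestion referenced in the responsibilities: publishing an event to a Kinesis data stream. The stream name and event shape are placeholders.

```python
# Minimal Kinesis producer sketch via boto3; names are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"user_id": "u-123", "action": "play", "ts": 1718000000}  # placeholder
kinesis.put_record(
    StreamName="playback-events",           # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],          # keeps one user's events on one shard, in order
)
```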
Posted 1 day ago
2.0 years
0 Lacs
Gautam Buddha Nagar, Uttar Pradesh, India
On-site
We are seeking a dynamic and experienced Technical Trainer to join our engineering department. The ideal candidate will be responsible for designing and delivering technical training sessions to B.Tech students across various domains, ensuring they are industry-ready and equipped with practical, job-oriented skills.

Role & Responsibility
To train students in new-age technology (Computer Science Engineering) to bridge the industry-academia gap, leading to an increase in the employability of the students.

Knowledge
- Proven experience in devising technical training programs for UG/PG engineering students in higher education institutions
- Staying abreast of the latest software per industry standards, with knowledge of modern training techniques and tools for delivering technical subjects
- Preparing training material (presentations, worksheets, etc.)
- Executing training sessions, webinars, and workshops for students
- Determining the overall effectiveness of programs and making improvements

Technical Skills (Subject Areas for Delivering Training with a Practical Approach)
1. Core Programming Skills. Languages: C, Python, Java, C++, JavaScript
2. Web Development. Frontend: HTML, CSS, JavaScript, React.js/Next.js. Backend: Node.js, Express, Django, or Spring Boot. Full-Stack: MERN stack (MongoDB, Express, React, Node.js)
3. Data Science & Machine Learning. Languages: Python (NumPy, pandas, scikit-learn, TensorFlow/PyTorch). Tools: Jupyter Notebook, Google Colab, MLflow
4. AI & Generative AI. LLMs (Large Language Models): understanding how GPT, BERT, and Llama models work. Prompt engineering. Fine-tuning & RAG (Retrieval-Augmented Generation). Hugging Face Transformers, LangChain, OpenAI APIs
5. Cloud Computing & DevOps. Cloud platforms: AWS, Microsoft Azure, Google Cloud Platform (GCP). DevOps tools: Docker, Kubernetes, GitHub Actions, Jenkins, Terraform. CI/CD pipelines: automated testing and deployment
6. Cybersecurity. Basics: OWASP Top 10, network security, encryption, firewalls. Tools: Wireshark, Metasploit, Burp Suite
7. Mobile App Development. Native: Kotlin (Android), Swift (iOS). Cross-platform: Flutter, React Native
8. Blockchain & Web3. Technologies: Ethereum, Solidity, smart contracts. Frameworks: Hardhat, Truffle
9. Database & Big Data. Databases: SQL (MySQL, PostgreSQL), NoSQL (MongoDB, Redis). Big data tools: Apache Hadoop, Spark, Kafka

Qualification & Years of Experience (as per norms): B.Tech/MCA/M.Tech (IT/CSE) from top-tier institutes and reputed universities. Industry experience is desirable. The candidate must have a minimum of 2 years of training experience in the same domain.
Posted 1 day ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Warm Greetings from SP Staffing Services Pvt Ltd!!!!
Experience: 5-8 years
Work Location: Hyderabad
Interested candidates, kindly share your updated resume with ramya.r@spstaffing.in or contact 8667784354 (WhatsApp: 9597467601) to proceed further.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary:
As part of the Cloud Network team at Thomson Reuters, you will work on delivering world-class infrastructure services to our customers using the latest technologies. We are looking for a Senior Network Cloud Engineer who can help us design and implement secure, scalable, highly available network architectures in AWS, Azure, OCI & GCP. You will be working in agile teams and will get the opportunity to learn new technologies and tools.

About the Role:
In this role as a Senior Network Cloud Engineer, you will:
- Work closely with architecture and business teams to understand their requirements and translate them into robust, reliable, and highly available network designs.
- Collaborate with the security team to ensure compliance with security policies and best practices.
- Design, provision, and configure networks in all cloud providers.
- Implement automation solutions to reduce manual intervention and increase efficiency.
- Participate in on-call support activities and perform post-implementation reviews to identify any issues or room for improvement.
- Stay up to date with the latest trends and advancements in cloud computing and related technologies.
- Maintain documentation of system designs, configurations, and procedures. Contribute to knowledge-base articles and technical guides.
- Actively participate in code reviews, sprint ceremonies, and other Agile/Scrum activities.

About You:
You're a fit for the role of Senior Network Cloud Engineer if your background includes:
- Bachelor's degree in computer science, information technology, or a related field; a master's degree is preferred but not required.
- At least 5 years of experience in designing, implementing, and managing large-scale network architectures in public clouds (AWS, Azure, Google).
- Strong understanding of network protocols such as TCP/IP, DNS, HTTP, SSL, etc.
- Experience with configuration management tools such as Terraform, Ansible, Chef, Puppet, etc.
- Excellent scripting skills using Python, PowerShell, Bash, etc.
- Proficiency in at least one object-oriented programming language like Java, C#, or Python.
- Familiarity with automated testing frameworks such as JUnit, NUnit, pytest, etc.
- Practical experience writing unit tests and integration tests (a small pytest sketch follows below).
- Understanding of continuous integration and continuous deployment pipelines.
- Knowledge of version control systems such as Git.
- Ability to communicate effectively, both verbally and in writing.
- Team-player mentality with the ability to collaborate across multiple disciplines.

What’s in it For You?
- Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
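The Senior Network Cloud Engineer listing above pairs multi-cloud network design with infrastructure-as-code tooling such as Terraform. As a minimal, illustrative sketch only (the provider version, region, CIDR ranges, and resource names are assumptions, not details from the posting), a small AWS network might be declared like this:

```hcl
# Illustrative sketch: region, CIDRs, and names are hypothetical.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # hypothetical region
}

# One VPC with a single public subnet.
resource "aws_vpc" "example" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "example-vpc" }
}

resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "ap-south-1a"

  tags = { Name = "example-subnet-a" }
}
```

An equivalent design in Azure, OCI, or GCP would swap the provider and resource types (for example, azurerm_virtual_network on Azure) while keeping the same declarative workflow.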
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer II
Full-time
McDonald's Office Location: Hyderabad
Global Grade: G3

Job Description: This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. It’s our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technologist to help craft the transformation of infrastructure and operations products and services across the entire McDonald's environment.

Responsibilities & Accountabilities:
Participate in the management, design, and solutioning of platform deployment and operational processes.
Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code (an illustrative Terraform sketch follows this listing).
Proactively identify opportunities for continuous improvement.
Research, analyze, design, develop, and support high-quality automation workflows, inside and outside the cloud platform, that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent, world-class service.
Effectively communicate project health, risks, and issues to program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management.
Implement and support monitoring best practices. Respond to platform and operational incidents, and effectively troubleshoot and resolve issues. Work well in an agile environment.

Qualifications:
Bachelor’s degree in computer science or a related field, or relevant experience.
5+ years of Information Technology experience at a large technology company, preferably on a platform team.
4+ years of hands-on experience building Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
3+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
3+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
3+ years of application development using agile methodology.
Experience with observability tools such as Datadog and New Relic, and with the open-source observability (o11y) ecosystem (Prometheus, Grafana, Jaeger).
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
Advanced knowledge of the AWS platform, preferably with 3+ years of AWS/Kubernetes or other container-based technology experience.
Experience with code quality, SAST, and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is a plus.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby.
Self-starter, able to devise solutions to problems and see them through while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Any GCP certification and any Agile certification (preferably Scaled Agile) are a plus.
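The "reusable pipeline templates" responsibility above maps naturally onto Terraform modules, which the listing calls out under Infrastructure-as-Code. As a hedged sketch, the block below consumes the public terraform-aws-modules/vpc/aws registry module; the module is real, but the name and network ranges are illustrative assumptions:

```hcl
# Illustrative sketch: inputs are hypothetical; the registry module is real.
module "service_network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pinning a version keeps reuse reproducible across teams

  name = "service-network"
  cidr = "10.1.0.0/16"

  azs            = ["ap-south-1a", "ap-south-1b"]
  public_subnets = ["10.1.1.0/24", "10.1.2.0/24"]
}
```

Each team supplies only its own inputs against a pinned template version, and the CI/CD pipeline runs terraform plan and terraform apply on the rendered configuration, keeping deployments consistent across products.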
Posted 1 day ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer III
Full-time
McDonald's Office Location: Hyderabad
Global Grade: G4

Job Description: This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. It’s our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services across the entire McDonald's environment.

Responsibilities & Accountabilities:
Participate in the management, design, and solutioning of platform deployment and operational processes.
Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
Proactively identify opportunities for continuous improvement.
Research, analyze, design, develop, and support high-quality automation workflows, inside and outside the cloud platform, that are appropriate for business and technology strategies.
Develop and maintain infrastructure and tools that support the software development and deployment process.
Automate the software development and deployment process.
Monitor and troubleshoot the software delivery process.
Work with software developers and operations engineers to improve the software delivery process.
Stay up to date on the latest DevOps practices and technologies.
Drive proofs of concept and conduct technical feasibility studies for business requirements.
Strive to provide internal and external customers with excellent, world-class service.
Effectively communicate project health, risks, and issues to program partners, sponsors, and management teams.
Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management.
Work well in an agile environment. Implement and support monitoring best practices. Respond to platform and operational incidents, and effectively troubleshoot and resolve issues. Provide technical advice and support the growth of junior team members.

Qualifications:
Bachelor’s degree in computer science or a related field, or relevant experience.
7+ years of Information Technology experience at a large technology company, preferably on a platform team.
6+ years of hands-on experience building Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
5+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
3+ years of application development using agile methodology.
Experience with observability tools such as Datadog and New Relic, and with the open-source observability (o11y) ecosystem (Prometheus, Grafana, Jaeger).
Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
Advanced knowledge of the AWS platform, preferably with 3+ years of AWS/Kubernetes or other container-based technology experience.
Experience with code quality, SAST, and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is a plus.
Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby.
Self-starter, able to devise solutions to problems and see them through while coordinating with other teams.
Knowledge of foundational cloud security principles.
Excellent problem-solving and analytical skills.
Strong communication and partnership skills.
Any GCP certification and any Agile certification (preferably Scaled Agile) are a plus.
Terraform, an infrastructure-as-code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in DevOps and cloud computing. In India, demand for professionals skilled in Terraform is rising, and many companies are actively hiring for infrastructure automation and cloud management roles built around the tool.
Demand is concentrated in India's major tech hubs, where companies with a strong cloud and DevOps presence actively hire Terraform professionals.
The salary range for Terraform professionals in India varies with experience. Entry-level positions can expect around INR 5-8 lakhs per annum, while professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
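Because Terraform's core workflow is easier to show than to describe, here is a minimal, self-contained sketch; it uses only the hashicorp/local provider, so it runs without any cloud account (the file name and content are illustrative):

```hcl
# Minimal sketch: creates one local file, enough to exercise init/plan/apply.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 2.0"
    }
  }
}

resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "Hello, Terraform!"
}
```

Running terraform init downloads the provider, terraform plan previews the one resource that would be created, and terraform apply actually creates hello.txt; plan by itself makes no changes.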
A commonly asked (medium-difficulty) interview question covers exactly this distinction between the terraform plan and apply commands: plan computes and previews the changes a configuration would make, while apply executes those changes against real infrastructure. As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!