Home
Jobs

40 Cloudops Jobs

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 3.0 years

10 - 15 Lacs

Pune, Bengaluru

Work from Office

Naukri logo

Must have a minimum of 1 year of experience in SRE (CloudOps) and Google Cloud Platform (GCP), plus monitoring, APM, and alerting tools such as Prometheus, Grafana, ELK, New Relic, Pingdom, or PagerDuty. Hands-on experience with Kubernetes for orchestration and container management is required.

Required candidate profile: Mandatory experience working in B2C product companies. Must have experience with CI/CD tools (e.g. Jenkins, GitLab CI/CD, CircleCI, Travis CI).
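As a flavour of the alerting work this role involves, here is a minimal, self-contained sketch of a Prometheus-style error-rate alert rule expressed in plain Python. All names, the window size, and the threshold are hypothetical, not from the posting; real deployments would express this as a PromQL alerting rule instead.

```python
from collections import deque

class ErrorRateAlert:
    """Fire a page when the error rate over the last `window` requests
    exceeds `threshold` (a toy stand-in for an alerting rule)."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # 1 = error, 0 = success
        self.threshold = threshold

    def record(self, is_error: bool) -> None:
        self.samples.append(1 if is_error else 0)

    def should_page(self) -> bool:
        if not self.samples:
            return False  # no traffic yet, nothing to alert on
        rate = sum(self.samples) / len(self.samples)
        return rate > self.threshold
```

In practice the sliding window would live in the monitoring system, not the application; the point is only that paging decisions are made on a rate over a window, not on single failures.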

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Naukri logo

Primary Responsibilities:
- Promote and evangelize Infrastructure-as-Code (IaC) design thinking every day
- Work with configuration management tools such as Ansible or Chef, and with automation tools such as Jenkins or TeamCity
- Build a comprehensive understanding of, operate, and monitor our hybrid cloud system components, including our API gateway, high-performance cache, high-performance messaging, data services, etc.
- Collaborate with Platform & DevOps Agile teams, supporting deliverables

Qualifications:
- 5-8 years of relevant experience
- Expertise in setting up IIS and managing Windows/Web services
- Expertise in installing releases
- Experience in certificate management
- Experience with portable provisioning technologies and infrastructure as code: Terraform, Packer, Ansible or SaltStack, and Hyper-V

Skills Required:
- Production support
- Setting up IIS and Windows/Web services
- Deploying Web/Windows services
- Executing SQL scripts and updating schemas
- Installing releases

Posted 1 week ago

Apply

2.0 - 7.0 years

3 - 7 Lacs

Ahmedabad

Work from Office

Naukri logo

To help us build functional systems that improve customer experience, we are now looking for an experienced DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we'd love to speak with you.

Responsibilities:
- Work with development teams to ideate software solutions
- Build and set up new development tools and infrastructure
- Work on ways to automate and improve development and release processes
- Ensure that systems are safe and secure against cybersecurity threats
- Deploy updates and fixes
- Perform root cause analysis for production errors
- Develop scripts to automate infrastructure provisioning
- Work with software developers and software engineers to ensure that development follows established processes and works as intended

Technologies we use:
- GitOps: GitHub, GitLab, BitBucket
- CI/CD: Jenkins, CircleCI, Travis CI, TeamCity, Azure DevOps
- Containerization: Docker, Swarm, Kubernetes
- Provisioning: Terraform
- CloudOps: Azure, AWS, GCP
- Observability: Prometheus, Grafana, GrayLog, ELK

Qualifications:
- Graduate/Postgraduate in the technology sector
- Proven experience as a DevOps Engineer or in a similar role
- Effective communication and teamwork skills
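"Deploy updates and fixes" in a role like this typically means automating post-deploy verification. A minimal Python sketch of that pattern, polling a health check with exponential backoff (the function name, retry counts, and delays are all illustrative assumptions, not from the posting):

```python
import time
from typing import Callable

def wait_until_healthy(check: Callable[[], bool],
                       retries: int = 5,
                       base_delay: float = 1.0) -> bool:
    """Poll a health check with exponential backoff after deploying an update.

    `check` is any zero-argument callable that returns True once the service
    is up; in practice it might hit an HTTP /health endpoint.
    Returns False if the service never became healthy, so the caller
    can trigger a rollback.
    """
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False
```

The same backoff skeleton is what CI/CD pipelines wrap around smoke tests before promoting a release.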

Posted 1 week ago

Apply

10.0 - 16.0 years

25 - 35 Lacs

Pune

Work from Office

Naukri logo

As a Cloud Security Engineer, you'll help us with:
- Day-to-day operation of the security infrastructure supporting the Abacus Insights platform and information systems in both AWS and Azure
- Enabling engineering teams through security reviews and audits to ensure security is at the heart of all features or solutions being built into our platform
- Triaging and investigating security alerts
- Resolving escalated access management requests and building least-privilege roles
- Building access packages for employees and automating provisioning
- Improving and tuning Splunk dashboards, alerts, and reports
- Evaluating cloud security postures of accounts, subscriptions, and infrastructure in AWS and Azure
- Runtime security tool monitoring, application/component integration, and alert tuning
- Vulnerability management artifact curation and remediation (OS, code/library/dependency)
- SAST result triage, including remediation through DevOps practices or alongside developers and engineers

Candidate must have:
- A concrete understanding of application security, cloud security, network security, and host/OS security
- Hands-on experience securing enterprise workloads in AWS or Azure, ideally for a multi-tenant SaaS platform
- Familiarity with modern authentication protocols (SAML2, OAuth, OIDC, mTLS)
- Familiarity with basic programming concepts and the ability to demonstrate capabilities in at least one language (ideally Python)
- Unix systems administration experience

Bonus points:
- Current cloud security certification(s)
- Current cloud architecture or DevOps certification(s)
- Experience securing serverless and containerized workloads
- Experience deploying and supporting assets with Infrastructure as Code (IaC) methodologies (Terraform, CloudFormation, Azure Resource Manager templates)
- Programming/developer background or experience driving security to developers and integrating security tools into CI/CD pipelines
- Familiarity with securing PHI/PII or PCI data and systems
- Experience operating in a controlled environment (HITRUST, FedRAMP, PCI)
- Jira administration/service project workflow administration
- Red-team experience
- CCDC experience
- CTF experience
- Incident response or SOC experience
- Knowledge of DevOps methodologies and Agile practices
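"Building least-privilege roles" concretely means emitting policy documents that grant only the actions a workload needs. A minimal Python sketch of generating one AWS IAM policy statement; the function name and the example ARN are hypothetical, and a real policy would usually also carry Condition blocks (source IP, MFA, etc.):

```python
import json

def least_privilege_policy(actions: list[str], resource_arn: str) -> str:
    """Build a minimal AWS IAM policy document granting only the listed
    actions on a single resource."""
    policy = {
        "Version": "2012-10-17",  # standard IAM policy language version
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),   # deterministic ordering for review diffs
            "Resource": resource_arn,
        }],
    }
    return json.dumps(policy, indent=2)
```

Generating policies from code rather than hand-editing them is what makes the "automating provisioning" bullet above auditable: the generator, not the JSON, is what gets reviewed.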

Posted 1 week ago

Apply

2.0 - 5.0 years

1 - 4 Lacs

Hyderabad

Work from Office

Naukri logo

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Let's do this. Let's change the world. We are looking for a CloudOps Engineer (SRE 1) to work on the performance optimization, standardization, and automation of Amgen's critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will drive operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.

Roles & Responsibilities

System Reliability, Performance Optimization & Cost Reduction
- Ensure the reliability, scalability, and performance of Amgen's infrastructure, platforms, and applications.
- Proactively identify and resolve performance bottlenecks and implement long-term fixes.
- Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability.

Automation & Infrastructure as Code (IaC)
- Drive the adoption of automation and IaC across the organization to streamline operations, minimize manual interventions, and enhance scalability.
- Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization.

Standardization of Processes & Tools
- Establish standardized operational processes, tools, and frameworks across Amgen's technology stack to ensure consistency, maintainability, and best-in-class reliability practices.
- Champion the use of industry standards to optimize performance and increase operational efficiency.

Monitoring, Incident Management & Continuous Improvement
- Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response.
- Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences.
- Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring.

Collaboration & Cross-Functional Leadership
- Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle.
- Act as an SME for SRE principles and advocate for best practices on assigned projects.

Capacity Planning & Disaster Recovery
- Execute capacity planning processes to support future growth, performance, and cost management.
- Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures.

Must-Have Skills:
- Experience with AWS cloud services
- Experience in CI/CD, IaC, observability, and GitOps (added advantage)
- Exposure to containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability is an added advantage
- Ability to learn new technologies quickly
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills

Good-to-Have Skills:
- Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments
- Familiarity with distributed systems, databases, and large-scale system architectures
- Databricks knowledge/exposure (candidates would need to upskill if hired)

Soft Skills:
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- 1-4 years of experience in IT infrastructure, with at least 1 year in Site Reliability Engineering or related fields

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
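SRE roles like this one usually quantify "reliability without compromising cost" through an error budget: the allowed failure count implied by an SLO. A toy Python sketch of that arithmetic (the function and the 99.9% example are illustrative assumptions, not Amgen's actual targets):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a given SLO.

    With a 99.9% availability SLO, the budget is 0.1% of requests:
    returns 1.0 when no errors occurred, 0.0 (or negative) when the
    budget is exhausted and releases would typically be frozen.
    """
    budget = (1.0 - slo) * total_requests  # allowed failures this period
    if budget == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / budget

# e.g. a 99.9% SLO over 1,000,000 requests allows ~1,000 failures
```

Tracking this number per service is what turns "minimize downtime" from a slogan into a release-gating metric.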

Posted 1 week ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Let's do this. Let's change the world. We are looking for a CloudOps Engineer (SRE 1) to work on the performance optimization, standardization, and automation of Amgen's critical infrastructure and systems. This role is crucial to ensuring the reliability, scalability, and cost-effectiveness of our production systems. The ideal candidate will drive operational excellence through automation, incident response, and proactive performance tuning, while also reducing infrastructure costs. You will work closely with cross-functional teams to establish best practices for service availability, efficiency, and cost control.

Roles & Responsibilities

System Reliability, Performance Optimization & Cost Reduction
- Ensure the reliability, scalability, and performance of Amgen's infrastructure, platforms, and applications.
- Proactively identify and resolve performance bottlenecks and implement long-term fixes.
- Continuously evaluate system design and usage to identify opportunities for cost optimization, ensuring infrastructure efficiency without compromising reliability.

Automation & Infrastructure as Code (IaC)
- Drive the adoption of automation and IaC across the organization to streamline operations, minimize manual interventions, and enhance scalability.
- Implement tools and frameworks (such as Terraform, Ansible, or Kubernetes) that increase efficiency and reduce infrastructure costs through optimized resource utilization.

Standardization of Processes & Tools
- Establish standardized operational processes, tools, and frameworks across Amgen's technology stack to ensure consistency, maintainability, and best-in-class reliability practices.
- Champion the use of industry standards to optimize performance and increase operational efficiency.

Monitoring, Incident Management & Continuous Improvement
- Implement and maintain comprehensive monitoring, alerting, and logging systems to detect issues early and ensure rapid incident response.
- Lead the incident management process to minimize downtime, conduct root cause analysis, and implement preventive measures to avoid future occurrences.
- Foster a culture of continuous improvement by leveraging data from incidents and performance monitoring.

Collaboration & Cross-Functional Leadership
- Partner with software engineering and IT teams to integrate reliability, performance optimization, and cost-saving strategies throughout the development lifecycle.
- Act as an SME for SRE principles and advocate for best practices on assigned projects.

Capacity Planning & Disaster Recovery
- Execute capacity planning processes to support future growth, performance, and cost management.
- Maintain disaster recovery strategies to ensure system reliability and minimize downtime in the event of failures.

Must-Have Skills:
- Experience with AWS cloud services
- Experience in CI/CD, IaC, observability, and GitOps (added advantage)
- Exposure to containerization (Docker) and orchestration tools (Kubernetes) to optimize resource usage and improve scalability is an added advantage
- Ability to learn new technologies quickly
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills

Good-to-Have Skills:
- Knowledge of cloud-native technologies and strategies for cost optimization in multi-cloud environments
- Familiarity with distributed systems, databases, and large-scale system architectures
- Databricks knowledge/exposure (candidates would need to upskill if hired)

Soft Skills:
- Ability to foster a collaborative and innovative work environment
- Strong problem-solving abilities and attention to detail
- High degree of initiative and self-motivation

Basic Qualifications:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field
- 2-4 years of experience in IT infrastructure, with at least 1 year in Site Reliability Engineering or related fields

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Naukri logo

Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.

Posted 1 week ago

Apply

4.0 - 8.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

Responsibilities Lead the strategic planning, development, and launch of Azure integration capabilities within our flagship product, ensuring a seamless and efficient integration process. Conduct market research and engage with customers to understand their needs and challenges related to Azure cloud services. Collaborate with engineering, design, and sales teams to define product requirements, roadmaps, and go-to-market strategies for Azure Launch. Deliver actionable product feature requirements and stories with wireframes (hands-on work is required here) Drive feature delivery to customers and conduct feedback-loop-based improvements. Stay abreast of Azure services and cloud industry trends to ensure our product remains competitive and innovative. Required Experience and Skills Bachelor's degree in Computer Science, Engineering, Business, or a related field. A Master's degree or an MBA would be a plus. 4+ years of experience in a product management role, ideally in a cloud-based product environment. Strong understanding of Azure services, cloud computing technologies, and SaaS platforms. Experience building features/solutions in Cloud Management/Cloud Operations in FinOps, SecOps, Cloud IT Ops. Experience with Azure Migration, Azure CloudOps configuration, Managed Services is a plus Passion for automation and simplification of complex IT processes Proven track record of managing all aspects of a successful product throughout its lifecycle. Excellent communication, leadership, and interpersonal skills. Ability to work in a fast-paced, dynamic environment and handle multiple tasks simultaneously. Strong problem-solving skills and the ability to thrive in a fast-paced, dynamic startup environment. Excellent written and verbal communication skills, with experience in creating customer-facing technical documentation such as user guides and release notes. Comfortable with ambiguity and able to prioritize tasks in a rapidly changing environment.
Demonstrated success in defining and launching products would be a plus. Success Factors You bring curiosity and passion for continuous learning. You enjoy ideating, simplifying and solving problems that create value for customers. You have an entrepreneurial mindset. You bring a growth mindset and value a collaborative approach to growing others and yourself. You have an ownership mindset for areas you take on, and you are a self-starter who can work independently when needed.

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Naukri logo

Key Responsibilities Lead the automation testing effort of our cloud management platform. Create and maintain automation test cases and test suites. Work closely with the development team to ensure that the automation tests are integrated into the development process. Collaborate with other QA team members to identify and resolve defects. Implement automation testing best practices and continuously improve the automation testing framework. Develop and maintain automation test scripts using programming languages such as Python. Conduct performance testing using tools such as JMeter, Gatling, or Locust. Monitor and report on automation testing and performance testing progress and results. Ensure that the automation testing and performance testing strategy aligns with overall product quality goals and objectives. Manage and mentor a team of automation QA engineers. Requirements Bachelor's degree in Computer Science or a related field. Minimum of 8+ years of experience in automation testing and performance testing. Experience in leading and managing automation testing teams. Strong experience with automation testing frameworks including Robot Framework. Strong experience with programming languages, including Python. Strong understanding of software development lifecycle and agile methodologies. Experience with testing cloud-based applications. Good understanding of Cloud services & ecosystem, specifically AWS. Experience with performance testing tools such as JMeter, Gatling, or Locust. Excellent analytical and problem-solving skills. Excellent written and verbal communication skills. Ability to work independently and in a team environment. Passionate about automation testing and performance testing.

Posted 1 week ago

Apply

9.0 - 12.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Naukri logo

Job Description: GCP Cloud Architect Opportunity: We are seeking a highly skilled and experienced GCP Cloud Architect to join our dynamic technology team. You will play a crucial role in designing, implementing, and managing our Google Cloud Platform (GCP) infrastructure, with a primary focus on building a robust and scalable Data Lake in BigQuery. You will be instrumental in ensuring the reliability, security, and performance of our cloud environment, supporting critical healthcare data initiatives. This role requires strong technical expertise in GCP, excellent problem-solving abilities, and a passion for leveraging cloud technologies to drive impactful solutions within the healthcare domain. Responsibilities: Cloud Architecture & Design: Design and architect scalable, secure, and cost-effective GCP solutions, with a strong emphasis on BigQuery for our Data Lake. Define and implement best practices for GCP infrastructure management, security, networking, and data governance. Develop and maintain comprehensive architectural diagrams, documentation, and standards. Collaborate with data engineers, data scientists, and application development teams to understand their requirements and translate them into robust cloud solutions. Evaluate and recommend new GCP services and technologies to optimize our cloud environment. Understand and implement the fundamentals of GCP, including resource hierarchy, projects, organizations, and billing. GCP Infrastructure Management: Manage and maintain our existing GCP infrastructure, ensuring high availability, performance, and security. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or Cloud Deployment Manager. Monitor and troubleshoot infrastructure issues, proactively identifying and resolving potential problems. Implement and manage backup and disaster recovery strategies for our GCP environment. Optimize cloud costs and resource utilization, including BigQuery slot management. 
Collaboration & Communication: Work closely with cross-functional teams, including data engineering, data science, application development, security, and compliance. Communicate technical concepts and solutions effectively to both technical and non-technical stakeholders. Provide guidance and mentorship to junior team members. Participate in on-call rotation as needed. Develop and maintain thorough and reliable documentation of all cloud infrastructure processes, configurations, and security protocols. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Minimum of 5-8 years of experience in designing, implementing, and managing cloud infrastructure, with a strong focus on Google Cloud Platform (GCP). Proven experience in architecting and implementing Data Lakes on GCP, specifically using BigQuery. Hands-on experience with ETL/ELT processes and tools, with strong proficiency in Google Cloud Composer (Apache Airflow). Solid understanding of GCP services such as Compute Engine, Cloud Storage, Networking (VPC, Firewall Rules, Cloud DNS), IAM, Cloud Monitoring, and Cloud Logging. Experience with infrastructure-as-code (IaC) tools like Terraform or Cloud Deployment Manager. Strong understanding of security best practices for cloud environments, including identity and access management, data encryption, and network security. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication, collaboration, and interpersonal skills. Bonus Points: Experience with Apigee for API management. Experience with containerization technologies like Docker and orchestration platforms like Cloud Run. Experience with Vertex AI for machine learning workflows on GCP. Familiarity with GCP Healthcare products and solutions (e.g., Cloud Healthcare API). Knowledge of healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR). GCP Professional Architect certification. Experience with scripting languages (e.g., Python, Bash). 
Experience with Looker.

Posted 2 weeks ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Naukri logo

Dear Candidate, We are hiring a Cloud Operations Engineer to manage and optimize cloud-based environments. Ideal for engineers passionate about automation, monitoring, and cloud-native technologies. Key Responsibilities: Maintain cloud infrastructure (AWS, Azure, GCP) Automate deployments and system monitoring Ensure availability, performance, and cost optimization Troubleshoot incidents and resolve system issues Required Skills & Qualifications: Hands-on experience with cloud platforms and DevOps tools Proficiency in scripting (Python, Bash) and IaC (Terraform, CloudFormation) Familiarity with logging/monitoring tools (CloudWatch, Datadog, etc.) Bonus: Experience with Kubernetes or serverless architectures Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies

Posted 2 weeks ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Naukri logo

*Design, deploy, and manage cloud infrastructure solutions. *Monitor cloud-based systems to ensure performance, reliability, and security. *Automate processes to streamline operations and reduce manual tasks. Required Candidate profile Experience in Linux System Administration Exposure to AWS/Azure cloud platforms Knowledge of scripting languages like Python, Bash, or similar Understanding of CI/CD pipelines and DevOps practices

Posted 2 weeks ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Naukri logo

*Design, deploy, and manage cloud infrastructure solutions. *Monitor cloud-based systems to ensure performance, reliability, and security. *Automate processes to streamline operations and reduce manual tasks. Required Candidate profile Experience in Linux System Administration Exposure to AWS/Azure cloud platforms Knowledge of scripting languages like Python, Bash, or similar Understanding of CI/CD pipelines and DevOps practices

Posted 2 weeks ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Naukri logo

*Design, deploy, and manage cloud infrastructure solutions. *Monitor cloud-based systems to ensure performance, reliability, and security. *Automate processes to streamline operations and reduce manual tasks. Required Candidate profile Experience in Linux System Administration Exposure to AWS/Azure cloud platforms Knowledge of scripting languages like Python, Bash, or similar Understanding of CI/CD pipelines and DevOps practices

Posted 2 weeks ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Hyderabad

Work from Office

Naukri logo

*Design, deploy, and manage cloud infrastructure solutions. *Monitor cloud-based systems to ensure performance, reliability, and security. *Automate processes to streamline operations and reduce manual tasks. Required Candidate profile Experience in Linux System Administration Exposure to AWS/Azure cloud platforms Knowledge of scripting languages like Python, Bash, or similar Understanding of CI/CD pipelines and DevOps practices

Posted 2 weeks ago

Apply

12.0 - 15.0 years

55 - 60 Lacs

Ahmedabad, Chennai, Bengaluru

Work from Office

Naukri logo

Dear Candidate, We're hiring a Cloud Systems Integrator to connect disparate systems and ensure seamless cloud-native integrations. Key Responsibilities: Integrate SaaS, legacy, and cloud systems. Build APIs, webhooks, and message queues. Ensure data consistency across platforms. Required Skills & Qualifications: Experience with REST, GraphQL, and messaging (Kafka/SQS). Proficiency in integration platforms (MuleSoft, Boomi, etc.). Cloud-first development experience. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi Delivery Manager Integra Technologies

Posted 3 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

The responsibilities of a Principal Solution Engineer at our Solution Engineering (SE) Hub are broad and the challenges are demanding. You will be expected to constantly deliver more - more innovation, more value for our customers, and more revenue. This position offers you an opportunity to work with prospects and customers across a variety of market segments and industries. In addition to understanding the key requirements of these businesses, you will deliver demonstrations to show the functional and technical capabilities of Oracle's Cloud solutions that meet the customer's needs. If you have a great attitude and technical acumen, passion for business and technology, willingness to thrive in an environment that is challenging yet rewarding, and thought leadership and technical specialisation across multiple domains, you should consider this role at our Solution Engineering Hub. Career Level - IC4 What You'll Do As a Principal Solution Engineer at the SE Hub, you will: Work alongside other solution engineers and sales representatives to support business targets by providing technical and business consulting expertise for on-premises and cloud solutions. Be responsible for discovery calls, solution architecture designs, and delivery of solution demonstrations, proofs of concept & high-quality technical presentations to customers and prospects. Design, validate and present Oracle technical solutions with advanced product concepts, features and benefits, future direction, and third-party complementary products integration. Participate & present in customer workshops, events, forums. Develop and manage customer references through high-quality technical engagement and professional relationships. Collaborate with varied internal and external stakeholders, including teams from engineering, product management, partners, ISVs to design, configure, implement (PoC/MVP) high-quality solutions for the highest levels of customer satisfaction. 
Support the senior leadership team with ideas and thoughts to drive innovation and strategic initiatives. Lead and drive some of these initiatives to grow our business. Independently handle complex situations with minimal guidance. Be a role model, a mentor and a guide for younger team members. Required Skills/Experience What We Look For Graduate in Information Technology, Information Systems or equivalent About 10 years of overall experience in a relevant functional or technical area Demonstrated interest and competence or working knowledge in at least two or more of the following areas: Cloud Computing - Cloud Infrastructure, Cloud Native, DevOps, CloudOps, Multi-Cloud, Hybrid-Cloud environments Databases & Data Management Data Science, Analytics & Business Intelligence Artificial Intelligence, Machine Learning, Neural Networks Application Integration Skills, PaaS4SaaS Open Source Technologies Strong personal drive and can-do attitude Excellent interpersonal skills, verbal and written English communication and active listening skills Ability to learn new products & technologies and adapt to new projects & environments quickly Ability to lead and drive projects and initiatives Good time management and organisation Residing in India, open to travel occasionally

Posted 3 weeks ago

Apply

1.0 - 3.0 years

1 - 5 Lacs

Coimbatore

Work from Office

Naukri logo

*Design, deploy, and manage cloud infrastructure solutions. *Monitor cloud-based systems to ensure performance, reliability, and security. *Automate processes to streamline operations and reduce manual tasks. Required Candidate profile Experience in Linux System Administration Exposure to AWS/Azure cloud platforms Knowledge of scripting languages like Python, Bash, or similar Understanding of CI/CD pipelines and DevOps practices

Posted 3 weeks ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Pune

Work from Office

Naukri logo

About Serrala : Serrala is a fast-growing global B2B Fintech software company headquartered in Hamburg, Germany. Serrala optimizes the universe of payments for organizations that seek efficient cash visibility and secure financial processes. Serrala supports over 2,500 companies worldwide with advanced technology and personalized consulting to optimize all processes that manage incoming and outgoing payments: from order-to-cash, procure-to-pay, and treasury to data and document management. Serrala is represented on three continents with 12 regional offices in Europe, North America, and Asia. With over 750 employees/Serralians dedicated to serving companies of all industry sectors, from medium-sized companies to global players, covering 20% of Fortune 500 and 50% of DAX 30 (German stock index) companies, Serrala continues to expand. To support this growth, a new Center of Excellence was created in Pune in 2019. The Pune CoE boasts cross-functional teams that build SaaS products from scratch, onboard customers and offer post-implementation support. We are a rapidly growing organization, and this is your opportunity to join us in our growth journey. The CloudOps team provides cloud infrastructure services to ensure the availability of the Serrala products on a global scale. The CloudOps DevOps Engineer is part of the CloudOps team, focusing on ensuring resilient and cost-effective infrastructure for over 150 customers in the Azure cloud. Daily-Business : You will be responsible for IT infrastructure operation, including OS, database, DR, cloud billing, reservations, and cloud consumption optimization for Azure, back-end systems and external sources. In addition, your responsibilities will include : - Support the developer engineering teams with infrastructure provisioning. - Solve complex cloud infrastructure and developer operations challenges. 
- Enhance, monitor and manage networking components to provide strong and reliable networking, firewalls, intrusion detection and intrusion prevention tools. - Monitor and administer backups and data resiliency to support both the disaster recovery plan and the business continuity plan. - Ensure workloads are protected. - Provide problem management of infrastructure-related incidents, escalated from our customer experience team. - Develop and maintain documentation and enhance the knowledge management of client and internal systems and processes. - Support the security engineering team with cyber defense tools to detect and remediate vulnerabilities, and ensure that the infrastructure is in line with the standards. You come with : - Strong knowledge of Terraform or other types of IaC tools. - Strong familiarity with cloud infrastructure concepts (preferably Azure). - Knowledge of networking concepts and public cloud. - Proven practical experience with Kubernetes administration and related PaaS offerings (AKS, EKS), as well as serverless computing. - Knowledge of SFTP. - Experience with scalable networking technologies and web standards (e.g., REST APIs, web security mechanisms, etc.). - Experience with PowerShell, Bash and YAML scripting and a strong passion for automation. - Experience with deployment and orchestration technologies (e.g., Ansible, Chef, Jenkins). - Experience in system administration tasks in Windows, Linux or Unix, with familiarity with standard IT security practices (e.g., encryption, certificates, key management, patch management, etc.). - Experience designing, building, and deploying scalable cloud-based solution architectures is a plus. - Understanding of open source server software (e.g., NGINX, RabbitMQ, Redis, and Elasticsearch) is a plus. - Proficiency using monitoring tools such as Icinga, Kibana, Grafana, Opsgenie. - Familiarity with cyber security best practices is a plus. 
- Any knowledge of frameworks such as Cyber Essentials, CIS Controls or ISO 27001 is desirable. - Strong organizational skills and detail-oriented. - Effective communication skills in order to engage with diverse stakeholders and present information concisely. - Fluent in English.

Posted 3 weeks ago

Apply

9 - 12 years

12 - 16 Lacs

Hyderabad

Work from Office

Naukri logo

Overview We are seeking an Associate Manager Data IntegrationOps to support and assist in managing data integration and operations (IntegrationOps) programs within our growing data organization. In this role, you will help maintain and optimize data integration workflows, ensure data reliability, and support operational excellence. This position requires a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support. Support the management of Data IntegrationOps programs by assisting in aligning with business objectives, data governance standards, and enterprise data strategies. Monitor and enhance data integration platforms by implementing real-time monitoring, automated alerting, and self-healing capabilities to help improve uptime and system performance under the guidance of senior team members. Assist in developing and enforcing data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization. Support the standardization and automation of data integration workflows, including report generation and dashboard refreshes. Collaborate with cross-functional teams to help optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. Provide assistance in Data & Analytics technology transformations by supporting full sustainment capabilities, including data platform management and proactive issue identification with automated solutions. Contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Support continuous improvement initiatives to help enhance the reliability, scalability, and efficiency of data integration processes. Engage with business and IT teams to help identify operational challenges and provide solutions that align with the organization's data strategy. 
Develop technical expertise in ETL/ELT processes, cloud-based data platforms, and API-driven data integration, working closely with senior team members. Assist with monitoring, incident management, and troubleshooting in a data operations environment to ensure smooth daily operations. Support the implementation of sustainable solutions for operational challenges by helping analyze root causes and recommending improvements. Foster strong communication and collaboration skills, contributing to effective engagement with cross-functional teams and stakeholders. Demonstrate a passion for continuous learning and adapting to emerging technologies in data integration and operations. Responsibilities Support and maintain data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. Assist in developing API-driven data integration solutions using REST APIs and Kafka to ensure seamless data movement across platforms. Contribute to the deployment and management of cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, working closely with the team. Help automate data pipelines and participate in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins. Monitor system reliability using observability tools such as Splunk, Grafana, Prometheus, and other custom monitoring solutions, reporting issues as needed. Assist in end-to-end data integration operations by testing and monitoring processes to maintain service quality and support global products and projects. Support the day-to-day operations of data products, ensuring SLAs are met and assisting in collaboration with SMEs to fulfill business demands. Support incident management processes, helping to resolve service outages and ensuring the timely resolution of critical issues. 
Assist in developing and maintaining operational processes to enhance system efficiency and resilience through automation. Collaborate with cross-functional teams like Data Engineering, Analytics, AI/ML, CloudOps, and DataOps to improve data reliability and contribute to data-driven decision-making. Work closely with teams to troubleshoot and resolve issues related to cloud infrastructure and data services, escalating to senior team members as necessary. Support building and maintaining relationships with internal stakeholders to align data integration operations with business objectives. Engage directly with customers, actively listening to their concerns, addressing challenges, and helping set clear expectations. Promote a customer-centric approach by contributing to efforts that enhance the customer experience and empower the team to advocate for customer needs. Assist in incorporating customer feedback and business priorities into operational processes to ensure continuous improvement. Contribute to the work intake and Agile processes for data platform teams, ensuring operational excellence through collaboration and continuous feedback. Support the execution of Agile frameworks, helping drive a culture of adaptability, efficiency, and learning within the team. Help align the team with a shared vision, ensuring a collaborative approach while contributing to a culture of accountability. Mentor junior technical team members, supporting their growth and ensuring adherence to best practices in data integration. Contribute to resource planning by helping assess team capacity and ensuring alignment with business objectives. Remove productivity barriers in an agile environment, assisting the team to shift priorities as needed without compromising quality. Support continuous improvement in data integration processes by helping evaluate and suggest optimizations to enhance system performance. 
Leverage technical expertise in cloud and computing technologies to support business goals and drive operational success. Stay informed on emerging trends and technologies, helping bring innovative ideas to the team and supporting ongoing improvements in data operations. Qualifications 9+ years of technology work experience in a large-scale, global organization; CPG (Consumer Packaged Goods) industry experience preferred. 4+ years of experience in Data Integration, Data Operations, and Analytics, supporting and maintaining enterprise data platforms. 4+ years of experience working in cross-functional IT organizations, collaborating with teams such as Data Engineering, CloudOps, DevOps, and Analytics. 1+ years of leadership/management experience supporting technical teams and contributing to operational efficiency initiatives. 4+ years of hands-on experience in monitoring and supporting SAP BW processes for data extraction, transformation, and loading (ETL). Managing Process Chains and Batch Jobs to ensure smooth data load operations and identifying failures for quick resolution. Debugging and troubleshooting data load failures and performance bottlenecks in SAP BW systems. Validating data consistency and integrity between source systems and BW targets. Strong understanding of SAP BW architecture, InfoProviders, DSOs, Cubes, and MultiProviders. Knowledge of SAP BW process chains and event-based triggers to manage and optimize data loads. Exposure to SAP BW on HANA and knowledge of SAP's modern data platforms. Basic knowledge of integrating SAP BW with other ETL/ELT tools like Informatica IICS, PowerCenter, DDH, and Azure Data Factory. Knowledge of ETL/ELT tools such as Informatica IICS, PowerCenter, Teradata, and Azure Data Factory. Hands-on knowledge of cloud-based data integration platforms such as Azure Data Services, AWS Redshift, Snowflake, and Google BigQuery. Familiarity with API-driven data integration (e.g., REST APIs, Kafka), and supporting cloud-based data pipelines. 
Basic proficiency in Infrastructure-as-Code (IaC) tools such as Terraform, GitOps, Kubernetes, and Jenkins for automating infrastructure management. Understanding of Site Reliability Engineering (SRE) principles, with a focus on proactive monitoring and process improvements. Strong communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders. Ability to effectively advocate for customer needs and collaborate with teams to ensure alignment between business and technical solutions. Interpersonal skills to help build relationships with stakeholders across both business and IT teams. Customer Obsession: Enthusiastic about ensuring high-quality customer experiences and continuously addressing customer needs. Ownership Mindset: Willingness to take responsibility for issues and drive timely resolutions while maintaining service quality. Ability to support and improve operational efficiency in large-scale, mission-critical systems. Some experience leading or supporting technical teams in a cloud-based environment, ideally within Microsoft Azure. Able to deliver operational services in fast-paced, transformation-driven environments. Proven capability in balancing business and IT priorities, executing solutions that drive mutually beneficial outcomes. Basic experience with Agile methodologies, and an ability to collaborate effectively across virtual teams and different functions. Understanding of master data management (MDM), data standards, and familiarity with data governance and analytics concepts. Openness to learning new technologies, tools, and methodologies to stay current in the rapidly evolving data space. Passion for continuous improvement and keeping up with trends in data integration and cloud technologies.

Posted 1 month ago

Apply

5 - 7 years

7 - 9 Lacs

Hyderabad

Work from Office

Naukri logo

Overview We are seeking an Associate Manager Data IntegrationOps to support and assist in managing data integration and operations (IntegrationOps) programs within our growing data organization. In this role, you will help maintain and optimize data integration workflows, ensure data reliability, and support operational excellence. This position requires a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support. Support the management of Data IntegrationOps programs by assisting in aligning with business objectives, data governance standards, and enterprise data strategies. Monitor and enhance data integration platforms by implementing real-time monitoring, automated alerting, and self-healing capabilities to help improve uptime and system performance under the guidance of senior team members. Assist in developing and enforcing data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization. Support the standardization and automation of data integration workflows, including report generation and dashboard refreshes. Collaborate with cross-functional teams to help optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. Provide assistance in Data & Analytics technology transformations by supporting full sustainment capabilities, including data platform management and proactive issue identification with automated solutions. Contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Support continuous improvement initiatives to help enhance the reliability, scalability, and efficiency of data integration processes. Engage with business and IT teams to help identify operational challenges and provide solutions that align with the organization's data strategy. 
Develop technical expertise in ETL/ELT processes, cloud-based data platforms, and API-driven data integration, working closely with senior team members.
Assist with monitoring, incident management, and troubleshooting in a data operations environment to ensure smooth daily operations.
Support the implementation of sustainable solutions for operational challenges by helping analyze root causes and recommending improvements.
Foster strong communication and collaboration skills, contributing to effective engagement with cross-functional teams and stakeholders.
Demonstrate a passion for continuous learning and adapting to emerging technologies in data integration and operations.
Responsibilities
Support and maintain data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members.
Assist in developing API-driven data integration solutions using REST APIs and Kafka to ensure seamless data movement across platforms.
Contribute to the deployment and management of cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, working closely with the team.
Help automate data pipelines and participate in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins.
Monitor system reliability using observability tools such as Splunk, Grafana, Prometheus, and other custom monitoring solutions, reporting issues as needed.
Assist in end-to-end data integration operations by testing and monitoring processes to maintain service quality and support global products and projects.
Support the day-to-day operations of data products, ensuring SLAs are met and assisting in collaboration with SMEs to fulfill business demands.
Support incident management processes, helping to resolve service outages and ensuring the timely resolution of critical issues.
Assist in developing and maintaining operational processes to enhance system efficiency and resilience through automation.
Collaborate with cross-functional teams like Data Engineering, Analytics, AI/ML, CloudOps, and DataOps to improve data reliability and contribute to data-driven decision-making.
Work closely with teams to troubleshoot and resolve issues related to cloud infrastructure and data services, escalating to senior team members as necessary.
Support building and maintaining relationships with internal stakeholders to align data integration operations with business objectives.
Engage directly with customers, actively listening to their concerns, addressing challenges, and helping set clear expectations.
Promote a customer-centric approach by contributing to efforts that enhance the customer experience and empower the team to advocate for customer needs.
Assist in incorporating customer feedback and business priorities into operational processes to ensure continuous improvement.
Contribute to the work intake and Agile processes for data platform teams, ensuring operational excellence through collaboration and continuous feedback.
Support the execution of Agile frameworks, helping drive a culture of adaptability, efficiency, and learning within the team.
Help align the team with a shared vision, ensuring a collaborative approach while contributing to a culture of accountability.
Mentor junior technical team members, supporting their growth and ensuring adherence to best practices in data integration.
Contribute to resource planning by helping assess team capacity and ensuring alignment with business objectives.
Remove productivity barriers in an agile environment, assisting the team to shift priorities as needed without compromising quality.
Support continuous improvement in data integration processes by helping evaluate and suggest optimizations to enhance system performance.
Leverage technical expertise in cloud and computing technologies to support business goals and drive operational success.
Stay informed on emerging trends and technologies, helping bring innovative ideas to the team and supporting ongoing improvements in data operations.
Qualifications
5+ years of technology work experience in a large-scale, global organization; CPG (Consumer Packaged Goods) industry preferred.
4+ years of experience in Data Integration, Data Operations, and Analytics, supporting and maintaining enterprise data platforms.
4+ years of experience working in cross-functional IT organizations, collaborating with teams such as Data Engineering, CloudOps, DevOps, and Analytics.
3+ years of hands-on experience in MQ and WebLogic administration.
1+ years of leadership/management experience supporting technical teams and contributing to operational efficiency initiatives.
Knowledge of ETL/ELT tools such as Informatica IICS, PowerCenter, SAP BW, Teradata, and Azure Data Factory.
Hands-on knowledge of cloud-based data integration platforms such as Azure Data Services, AWS Redshift, Snowflake, and Google BigQuery.
Familiarity with API-driven data integration (e.g., REST APIs, Kafka) and with supporting cloud-based data pipelines.
Basic proficiency in Infrastructure-as-Code (IaC) tools such as Terraform, GitOps, Kubernetes, and Jenkins for automating infrastructure management.
Understanding of Site Reliability Engineering (SRE) principles, with a focus on proactive monitoring and process improvements.
Strong communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders.
Ability to effectively advocate for customer needs and collaborate with teams to ensure alignment between business and technical solutions.
Interpersonal skills to help build relationships with stakeholders across both business and IT teams.
Customer Obsession: Enthusiastic about ensuring high-quality customer experiences and continuously addressing customer needs.
Ownership Mindset: Willingness to take responsibility for issues and drive timely resolutions while maintaining service quality.
Ability to support and improve operational efficiency in large-scale, mission-critical systems.
Some experience leading or supporting technical teams in a cloud-based environment, ideally within Microsoft Azure.
Able to deliver operational services in fast-paced, transformation-driven environments.
Proven capability in balancing business and IT priorities, executing solutions that drive mutually beneficial outcomes.
Basic experience with Agile methodologies, and an ability to collaborate effectively across virtual teams and different functions.
Understanding of master data management (MDM), data standards, and familiarity with data governance and analytics concepts.
Openness to learning new technologies, tools, and methodologies to stay current in the rapidly evolving data space.
Passion for continuous improvement and keeping up with trends in data integration and cloud technologies.
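As an illustration of the SRE principles this listing references (proactive monitoring, ensuring SLAs are met), the arithmetic behind an availability SLA can be sketched in a few lines of Python. The 99.9% target, 30-day window, and function names below are hypothetical examples for illustration, not figures from this posting:

```python
# Illustrative SRE error-budget arithmetic for an availability SLO.
# All numbers here are hypothetical examples.

def error_budget_minutes(slo: float, window_days: int) -> float:
    """Total allowed downtime (in minutes) for an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over a 30-day window allows 43.2 minutes of downtime.
print(error_budget_minutes(0.999, 30))
# After 10 minutes of downtime, roughly 77% of the budget remains.
print(budget_remaining(0.999, 30, 10.0))
```

Checks like this are what typically feed the automated alerting and dashboards mentioned in the responsibilities above.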

Posted 1 month ago

Apply

4 - 9 years

20 - 35 Lacs

Pune

Work from Office


Greetings for the day!
We are scouting for a DevSecOps Specialist to be associated with a global IT services organization.
Location: Pune
Shift: 2 PM to 11 PM
Minimum experience: 3 years
Role & responsibilities:
This role will focus on designing and building well-architected solutions for deploying, securing, and maintaining new and existing applications, services, and infrastructure for managed appliances and cloud platforms. It will coordinate closely with other functional teams to ensure smooth, efficient, and predictable delivery of engineering output. Candidates should be familiar with current cloud and containerization technologies, system architectures, CI/CD pipeline toolchains, and automation tooling. The candidate will support engineering teams across the organization in their efforts to produce resilient cloud and containerized solutions.
Perform engineering activities such as developing infrastructure-as-code, automation tooling, or other software-based solutions.
Perform operational maintenance activities including platform upgrades, server patching, monitoring, configuration, troubleshooting, and documentation.
Contribute to strategy discussions and decisions on cloud and container infrastructure-as-a-service design and the best approach for implementing those solutions.
Participate in 24/7 support as part of an on-call rotation team that responds to after-hours infrastructure alerts.
Participate in the maintenance and enforcement of critical cloud policies, including Security and Service Level.
Assist or mentor less experienced team members.
Preferred candidate profile:
Degree in Computer Science, Management Information Systems, or a similar IT-related discipline, or equivalent work experience.
Advanced knowledge of DevOps, CloudOps, or containerization concepts and tooling, with growing knowledge in the other domains.
Demonstrated systems, cloud, or container administration and troubleshooting experience.
Advanced knowledge of CI/CD pipelines and software engineering tooling, including Jenkins or similar build-server tools, Git, Jira, Kubernetes, and Docker.
Interested candidates, kindly share your updated resume at james.lobo@mappyresources.com.
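The infrastructure-as-code and policy-enforcement duties this role describes often reduce to small automated checks. As a hypothetical sketch (the manifest, function name, and policy below are illustrative, not from this posting), here is a Python check that flags container images in a parsed Kubernetes Deployment manifest that are unpinned or use the mutable ":latest" tag:

```python
# Hypothetical DevSecOps policy check: flag container images in a
# Deployment-like manifest dict that have no tag or use ":latest".

def unpinned_images(manifest: dict) -> list:
    """Return images from a Deployment-shaped dict that lack a pinned tag."""
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    flagged = []
    for container in containers:
        image = container.get("image", "")
        # No tag at all, or the mutable ":latest" tag, counts as unpinned.
        if ":" not in image or image.endswith(":latest"):
            flagged.append(image)
    return flagged

# Example manifest fragment (hypothetical image names).
deployment = {
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "image": "registry.example.com/app:1.4.2"},
        {"name": "sidecar", "image": "busybox:latest"},
        {"name": "init", "image": "alpine"},
    ]}}}
}
print(unpinned_images(deployment))  # flags the latter two images
```

A check like this would typically run as a CI/CD pipeline gate, failing the build before an unpinned image reaches a cluster.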

Posted 1 month ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies