0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role
At Orange Business, Digital Technology's purpose is to be a trusted technology business partner, delivering outstanding digital experiences that amaze our customers, partners and employees. Keeping the customer at the center of this digital transformation, we are simplifying, transforming and standardizing processes and ways of working, migrating service desks to ServiceNow, and pushing automation across workflows.
The Head of NetDevOps (NetDevOps Lead) is responsible for driving the adoption and implementation of DevOps practices within the networking domain of an organization. They lead the design and automation of network infrastructure using modern tools and methodologies such as Infrastructure as Code (IaC), continuous integration/continuous deployment (CI/CD), and configuration management. The NetDevOps Lead oversees the creation of scalable, repeatable workflows for provisioning, monitoring, and managing network devices and services, and works closely with network engineers, software developers, and operations teams to ensure that network changes are tested, version-controlled, and deployed safely and efficiently across LAN, WAN, SD-WAN, Internet and Voice products.
In addition to technical execution, the NetDevOps Lead plays a strategic role in transforming legacy network operations into agile, software-driven environments. They define best practices, enforce compliance, and ensure that networking solutions align with business and security requirements. The role involves mentoring teams, introducing automation and observability tools, and fostering a culture of collaboration between networking, cloud, and IT teams. By bridging the gap between traditional network operations and agile development, the NetDevOps Lead helps accelerate innovation, reduce downtime, and improve service delivery across the enterprise or service provider environment.
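The core NetDevOps idea above (device configuration generated from version-controlled data rather than typed by hand) can be sketched in miniature. This is only an illustration under assumed names; real pipelines typically use Jinja2 templates plus a network source-of-truth, and the interface fields below are hypothetical, not any specific vendor's syntax.

```python
from string import Template

# Hypothetical interface template. The point is that the template and the
# structured values both live in version control and go through review,
# so a rendered change is testable and repeatable.
IFACE_TEMPLATE = Template(
    "interface $name\n"
    " description $description\n"
    " ip address $ip $mask\n"
    " no shutdown"
)

def render_interface(params: dict) -> str:
    """Render one interface stanza from structured, reviewable data."""
    return IFACE_TEMPLATE.substitute(params)

config = render_interface({
    "name": "GigabitEthernet0/1",
    "description": "uplink-to-core",
    "ip": "10.0.0.1",
    "mask": "255.255.255.0",
})
```

A CI pipeline would render configs like this for every device, diff them against the running state, and only push after automated validation.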
Develop a trust relationship with internal stakeholders and customers based on common objectives and plans; build the partnership from the current support model and drive business based on future-proof technology and an effective cost structure. Drive standardization and effective solution implementation; review current capabilities and define the upskilling (e.g. product certification, DevOps, automation, security) and development plan; attract talent from the market and people who can drive the transformation and standardization, as well as the relationship with business and customers. Inspire, organize and develop the team to become self-organized and operate smoothly in the SAFe setup. Work in partnership with the selected System Integrator and product vendors to build a strong, reliable ecosystem for internal stakeholders and customers, collaborating on roadmap delivery, transforming the teams and addressing the difficulties that will appear along the way. As part of Digital Technology, working collaboratively with stakeholders, this role plays a strategic part in OB's digital transformation journey. It will enable OB to simplify, modernize and automate change implementation and provide digital services to B2B customers, further supporting revenue growth and IT services development, and co-driving the Marketing and GDO digital roadmap.
About you
Strategy & Leadership
Define and implement the NetDevOps strategy in alignment with the organization's Technology and Architecture roadmap and business objectives. Establish a roadmap for automation, tooling, processes, and team development to modernize network operations for all large customers in Orange Business. Build, collaborate with and lead a cross-functional team of network automation engineers, NetDevOps specialists, and infrastructure developers, both within the Digital and Technology team and across Operations, Marketing and our partners.
Drive organizational change by championing NetDevOps culture, practices, and mindset across networking and operations teams.
Architecture & Delivery
In close collaboration with Architecture, NewCo, CurrentCo, Operations, Marketing and the Data and AI team, design and deliver a scalable, reliable, and programmable network platform using Infrastructure as Code (IaC), API-based configurations, and orchestration frameworks. Leveraging the framework from the Platform Engineering team, build and maintain robust CI/CD pipelines for network infrastructure, ensuring safe and repeatable changes to production environments. Lead the implementation of change-driven automation in network operations across large customers. Oversee integration of NetDevOps practices with ServiceNow, CloudOps, DevOps, and SecOps.
Tooling & Platforms
Select and implement leading-edge tooling.
Governance, Security & Compliance
Establish best practices for code quality, testing, and change control in network automation. Collaborate with Security, Compliance, and Risk teams to ensure all network changes follow enterprise policies and regulatory frameworks. Define metrics, observability standards, and SLAs for network automation performance.
Stakeholder Management
Actively participate in and drive discussions with other applications, application leads and cross-functional teams for required integrations with ServiceNow. Partner with Infrastructure, Cloud, Security, Application Development, and Service Management teams to enable business agility and platform resilience. Present NetDevOps progress, KPIs, and roadmap to executive leadership and stakeholders. Gain trust from GDO, the wider Digital Technology teams, Marketing and SI partners.
Change Management
Change is always accompanied by challenges across the organization. Develop and execute change management plans to ensure smooth transitions and adoption of new service management practices across the organization.
Challenge the status quo and convince teams of productivity improvements through changed processes, workflows and integrations.
Quality & Risk Management
Oversee quality assurance and risk management processes to ensure that the migration is delivered on time, within scope, and to the highest standards. Ensure end-to-end quality testing is in place; support UAT and production deployment.
Continuous Improvement
After the initial rollouts, lead initiatives to reflect and drive improvements iteratively. Lessons learnt should be documented and shared with wider stakeholders.
Posted 1 day ago
7.0 - 9.0 years
5 - 5 Lacs
Thiruvananthapuram
Work from Office
1. Production monitoring and troubleshooting in on-prem ETL and AWS environments
2. Working experience using ETL DataStage along with DB2
3. Ability to use tools such as Dynatrace, AppDynamics, Postman, AWS CI/CD
4. Software code development experience in ETL batch processing and AWS cloud
5. Software code management, repository updates and reuse
6. Implementation and/or configuration, management, and maintenance of software
7. Implementation and configuration of SaaS and public, private and hybrid cloud-based PaaS solutions
8. Integration of SaaS and PaaS solutions with Data Warehouse Application Systems, including SaaS and PaaS upgrade management
9. Configuration, maintenance and support for the entire DWA Application Systems landscape, including but not limited to supporting DWA Application Systems components and tasks required to deliver business processes and functionality (e.g., logical layers of databases, data marts, logical and physical data warehouses, middleware, interfaces, shell scripts, massive data transfers and uploads, web development, mobile app development, web services and APIs)
10. DWA Application Systems support for day-to-day changes and business continuity and for addressing key business, regulatory, legal or fiscal requirements
11. Support for all third-party specialized DWA Application Systems
12. DWA Application Systems configuration and collaboration with the infrastructure service supplier required to provide application access to external/third parties
13. Integration with internal and external systems (e.g., direct application interfaces, logical middleware configuration and application program interface (API) use and development)
14. Collaboration with third-party suppliers such as the infrastructure service supplier and enterprise public cloud providers
15. Documentation and end-user training for new functionality
16. All activities required to support business process application functionality and to deliver the required application and business functions to end users in an integrated service delivery model across the DWA Application Development lifecycle (e.g., plan, deliver, run); maintain data quality and run batch schedules; operations and maintenance
17. Deploy code to all environments (Prod, UAT, Performance, SIT, etc.)
18. Address all open tickets within the SLA
CDK (TypeScript), CFT (YAML)
Nice to have: GitHub, scripting (Bash/sh), security-minded/best practices known, Python, Databricks & Snowflake
Required Skills: Databricks, DataStage, CloudOps, production support
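A common building block for the production monitoring and batch-support duties listed above is a retry wrapper around a flaky batch step. The sketch below is a generic illustration (the step, attempt counts and delays are assumptions, not this team's actual tooling); in practice `step` would kick off a DataStage job or an AWS batch task via its API.

```python
import time

def run_with_retries(step, max_attempts=3, base_delay=0.0):
    """Run one batch step, retrying with exponential backoff.

    `step` is any zero-argument callable; transient failures are retried,
    the final failure is re-raised so the scheduler can alert on it.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * (2 ** (attempt - 1)))

# Simulated flaky step: fails twice, then succeeds.
calls = {"n": 0}

def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result = run_with_retries(flaky_step, max_attempts=5)
```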
Posted 3 days ago
12.0 - 18.0 years
1 - 3 Lacs
Hyderabad, Coimbatore
Work from Office
Experience needed: 12+ years | Type: Full-Time | Mode: WFO | Shift: General Shift IST | Location: Hyderabad or Coimbatore | Notice Period: Immediate joiner to 30 days
Job Summary: We are looking for a seasoned Cloud Architect with over 12 years of IT experience, including at least 6 years in cloud technologies. The role involves designing and implementing cloud architectures on Azure or AWS, applying DevSecOps/Platform Engineering/CloudOps principles, and collaborating with cross-functional teams. The ideal candidate will have strong expertise in IaaS, PaaS, datacentre migration, and cloud governance.
Responsibilities: Manage the adoption lifecycle of DevSecOps/Platform Engineering/CloudOps principles within the cloud environment, including containerization and build/release management. Collaborate with cross-functional teams to design and implement cloud-based solutions. Develop technical proposals for cloud solutions, accurately estimating effort and cost. Implement observability tools and manage cloud service consumption plans. Design, develop, and implement cloud architectures on Azure/AWS, following best practices and standards. Integrate cloud solutions with existing infrastructure and tools. Implement and manage CloudOps practices for optimized cloud resource utilization. Work closely with cross-functional teams to deliver projects on time and within budget. Provide technical expertise and guidance to other engineers. Participate in knowledge sharing and continuous improvement initiatives. Maintain the engineering knowledge base and templates.
Requirements: Minimum of 12+ years of IT experience, with at least 6 years focused on cloud technologies. Proven experience in cloud architecture design and implementation on Azure or AWS platforms. In-depth knowledge of IaaS and PaaS services offered by Azure or AWS. Expertise in datacentre migration strategies and best practices.
Strong understanding of DevSecOps principles, including containerization technologies (e.g., Docker, Kubernetes) and build/release management tools. Experience with CloudOps methodologies for efficient cloud resource management. Proficient in Terraform automation or other cloud-native infrastructure provisioning tools. Hands-on experience with Azure AD, MFA, and SSO for user access and identity management. Familiarity with the Microsoft/AWS Well-Architected Framework for cloud architecture design. Experience in end-to-end cloud proposal management in T&M model or fixed bid engagement with cost analysis. Good understanding of cloud governance models.
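Cloud governance work of the kind described above often starts with tagging compliance: every resource must carry a known set of tags before cost allocation and policy enforcement can work. The sketch below is a minimal, assumed policy (the required tag names and resource records are illustrative); in a real estate the tags would come from a cloud inventory API.

```python
# Assumed governance policy: the tags every resource must carry.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_tags: dict) -> set:
    """Return the governance tags a resource is missing."""
    return REQUIRED_TAGS - set(resource_tags)

# Illustrative inventory; in practice this comes from the cloud provider's API.
resources = {
    "vm-web-01": {"owner": "platform", "environment": "prod"},
    "vm-db-01": {"owner": "data", "cost-center": "1234", "environment": "prod"},
}
report = {name: sorted(missing_tags(tags)) for name, tags in resources.items()}
```

A compliance-as-code pipeline would run a check like this on every deployment and block or flag resources whose report entry is non-empty.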
Posted 3 days ago
2.0 - 4.0 years
5 - 11 Lacs
Bengaluru
Work from Office
Principal Responsibilities:
Cloud Infrastructure Management: Oversee the deployment, configuration, and maintenance of cloud infrastructure, including virtual machines, storage, and networking components.
Monitoring and Troubleshooting: Implement monitoring solutions to ensure the availability and performance of cloud services. Troubleshoot and resolve issues related to cloud infrastructure.
Security and Compliance: Ensure the security and compliance of cloud environments by implementing best practices and adhering to industry standards.
Automation and Optimization: Develop and maintain automation scripts to streamline cloud operations and improve efficiency.
Collaboration and Support: Work closely with cross-functional teams, including development, security, and operations, to support cloud initiatives and projects.
Documentation and Reporting: Maintain comprehensive documentation of cloud infrastructure, processes, and procedures. Generate regular reports on cloud operations and performance.
Qualifications:
Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience: Minimum of 3-5 years of experience in cloud operations, with a strong understanding of cloud platforms such as AWS, Azure, or Google Cloud.
Technical Skills: Proficiency in cloud infrastructure management, automation tools (e.g., Terraform, Ansible), and monitoring solutions (e.g., CloudWatch, Prometheus).
Security Knowledge: Familiarity with cloud security best practices and compliance standards.
Problem-Solving: Strong analytical and problem-solving skills with the ability to troubleshoot complex issues.
Communication: Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
Cloud Services: Strong understanding and experience in developing and troubleshooting various technologies, including cloud and systems automation, enterprise network communications/protocols, Windows and Linux authentication and authorization, virtualization technologies (AWS, Azure, VMware), enterprise networking, encryption standards and technologies, and related elements.
IT Service Management: Experience with web service integrations between ServiceNow and other cloud or on-premises systems. Understanding of ITIL practices, including Change and Incident Management.
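A typical ServiceNow web service integration, as mentioned above, creates incidents through the ServiceNow Table API (POST to /api/now/table/incident). The sketch below only builds the request URL and JSON body, without performing the HTTP call; the instance name and field mapping are assumptions for illustration, while `short_description` and `urgency` are standard incident fields.

```python
import json

def build_incident_request(instance: str, short_description: str, urgency: int = 3):
    """Build the URL and JSON body for creating an incident via the
    ServiceNow Table API. The actual POST (with authentication) would be
    done by an HTTP client; only request construction is shown here."""
    url = f"https://{instance}.service-now.com/api/now/table/incident"
    body = json.dumps({
        "short_description": short_description,
        "urgency": urgency,
    })
    return url, body

# Hypothetical usage: turn a monitoring alert into an incident payload.
url, body = build_incident_request("example", "CloudWatch alarm: high CPU", urgency=2)
```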
Posted 4 days ago
3.0 - 9.0 years
13 - 18 Lacs
Hyderabad
Work from Office
At Skillsoft, we are all about making work matter. We believe every team member has the potential to be AMAZING. We are bold, sharp, driven and, most of all, real. Join us in our quest to democratize learning and help individuals unleash their edge.
OVERVIEW: Skillsoft is looking for a Sr. Infrastructure Engineer who specializes in AWS technologies. Join our operations team within Deployment and Reliability Engineering, where you will work closely with peers spanning every imaginable technology discipline.
OPPORTUNITY HIGHLIGHTS: Be part of a team where your contribution can extend far beyond your job role. Work on the latest AWS services. Support systems in the public cloud.
SKILLS & QUALIFICATIONS:
Minimum of 5 years' experience in IT/Cloud Ops or a related field
AWS EKS or Kubernetes experience
Experience upgrading administrative containers
Understanding of Infrastructure as Code (IaC)
Experience with Terraform or other IaC software tools
Experience with Crossplane and Helm
Experience with GitHub
Experience with AWS services: EC2, RDS, EKS, S3, EFS, EBS, CloudWatch
Familiarity with AWS Reserved Instance types and Savings Plans
Familiarity with AWS Backup
Perform daily checks and maintenance activities
Production of usage, trending and inventory reports relating to on-prem and AWS cloud environments
Documentation of procedures
Install and check for security patches with Prisma or other security tools
Post-secondary education in a related field or an equivalent combination of training and experience
Knowledge of Windows Server 2022, 2025 and 2019 operating systems
Knowledge and experience of RHEL 9 & Amazon Linux 2023
Knowledge of enterprise systems performance monitoring and administration tools
Excellent communication, problem-solving and analytical skills
Professional manner with strong interpersonal skills; a team player and a relationship builder
Self-directed work habits, applied with creativity, resourcefulness, and a sense of personal responsibility
Ability to communicate information and ideas clearly and concisely, orally and in writing
Ability to think strategically and plan for innovation/change; flexibility/adaptability
Excellent listening skills
AWS certification is desired.
SUCCESS QUALITIES: Personally Accountable for Team Success. We unleash our edge together. Confident Achievers. We are bold. Intellectually Curious. We are sharp. Adaptable, Agile & Resilient. We are driven. Customer First. We are real.
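The usage, trending and inventory reports mentioned in the qualifications above usually boil down to summarizing fleet records by some dimension. A minimal sketch, assuming plain dict records (in practice these would come from an EC2 `describe-instances` call):

```python
from collections import Counter

def instance_inventory(instances):
    """Count running instances by type -- the core of a simple
    usage/inventory report. Stopped instances are excluded."""
    return Counter(i["type"] for i in instances if i["state"] == "running")

# Illustrative fleet records; fields mirror what a describe call returns.
fleet = [
    {"id": "i-1", "type": "m5.large", "state": "running"},
    {"id": "i-2", "type": "m5.large", "state": "running"},
    {"id": "i-3", "type": "t3.micro", "state": "stopped"},
]
summary = instance_inventory(fleet)
```

A trending report is the same summary taken daily and diffed over time; a Reserved Instance or Savings Plan review compares this summary against committed capacity.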
Posted 6 days ago
7.0 - 9.0 years
27 - 37 Lacs
Pune
Hybrid
Responsibilities may include the following, and other duties may be assigned: Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks. Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation. Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints. Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling. Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines. Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments. Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations. Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers. Enforce DevSecOps practices for infrastructure as code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies. Coordinate cloud software releases, patching schedules, and vulnerability remediations using Systems Manager Patch Manager. Automate AWS housekeeping and operational tasks such as: cleanup of unused EBS volumes, snapshots, and old AMIs; rotation of secrets and credentials using Secrets Manager; log retention enforcement using S3 Lifecycle policies and CloudWatch Log groups. Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages. Collaborate with cross-functional teams, including Data Scientists, Data Engineers, and other stakeholders, to gather and implement the infrastructure and data requirements.
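The snapshot-cleanup housekeeping described above reduces to selecting resources older than a retention window. The sketch below keeps only the selection logic, using plain dicts and an assumed 30-day retention policy; in practice the list would come from boto3's `describe_snapshots` and the deletions would go through `delete_snapshot` after appropriate safeguards.

```python
from datetime import datetime, timedelta, timezone

def snapshots_to_delete(snapshots, retention_days=30, now=None):
    """Return IDs of snapshots older than the retention window.

    `now` is injectable so the policy can be tested deterministically."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["id"] for s in snapshots if s["start_time"] < cutoff]

# Deterministic illustration: one snapshot well past retention, one recent.
ref = datetime(2024, 6, 1, tzinfo=timezone.utc)
snaps = [
    {"id": "snap-old", "start_time": datetime(2024, 4, 1, tzinfo=timezone.utc)},
    {"id": "snap-new", "start_time": datetime(2024, 5, 25, tzinfo=timezone.utc)},
]
doomed = snapshots_to_delete(snaps, retention_days=30, now=ref)
```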
Required Knowledge and Experience: 8+ years of experience in DataOps/CloudOps/DevOps roles, with a strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration. Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring and backups. Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token tracking, and post-processing. Deep hands-on experience with AWS services, including Core (EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC); Data services (Athena, Glue, MSK, Redshift); Security (KMS, IAM, Config, CloudTrail, Secrets Manager); Operational (Auto Scaling, Systems Manager, CloudFormation/Terraform); and Machine Learning/AI (Bedrock, SageMaker, OpenSearch Serverless). Working knowledge of Databricks, including cluster and workspace management, job orchestration, and integration with AWS storage and identity (IAM passthrough). Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline. Strong understanding of cloud networking, including VPC peering, Transit Gateway, security groups, and PrivateLink setup. Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services. Strong understanding of data modeling, data warehousing concepts, and AI/ML lifecycle management. Knowledge of cost optimization strategies across compute, storage, and network layers. Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC 2, HIPAA, GDPR). Bonus: exposure to LangChain, prompt engineering frameworks, Retrieval-Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.).
Preferred Qualifications: AWS Certified Solutions Architect, DevOps Engineer or SysOps Administrator certifications. Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS. Experience with infrastructure cost management tools like AWS Cost Explorer or FinOps dashboards. Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities. Prior experience in supporting high-availability production environments with disaster recovery and failover architectures. Understanding of Zero Trust architecture and security best practices in cloud-native environments. Experience with automated cloud resource cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel. Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance. Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.
If interested, please share the below details at ashwini.ukekar@medtronic.com
Name:
Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Notice Period:
Current Company:
Current Designation:
Current Location:
Regards,
Ashwini Ukekar
Sourcing Specialist
Posted 1 week ago
4.0 - 6.0 years
10 - 13 Lacs
Bengaluru
Hybrid
entomo is an Equal Opportunity Employer. The company promotes and supports a diverse workforce at all levels across the company, and ensures that its associates, potential hires, third-party support staff and suppliers are not discriminated against, directly or indirectly, as a result of their colour, creed, caste, race, nationality, ethnicity or national origin, marital status, pregnancy, age, disability, religion or similar philosophical belief, sexual orientation, gender or gender reassignment, etc.
Summary: We are seeking a skilled Site Reliability Engineer (SRE) to join our team. In this role, you will be responsible for bridging the gap between development and operations by applying software engineering principles to infrastructure and operations tasks. Your primary focus will be ensuring the reliability, availability, performance, and scalability of our production systems while minimizing manual operational work through automation and enhancing system resilience.
Position Overview: The Site Reliability Engineer will work closely with development and operations teams to design, implement, and maintain highly reliable systems. You will be instrumental in establishing best practices for observability, incident response, and infrastructure management. Your expertise will help reduce operational overhead, improve system performance, and ensure seamless deployments through CI/CD pipelines.
Qualifications
Required Skills and Experience:
Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
3+ years of experience in SRE, DevOps or similar roles
Strong proficiency with Kubernetes (K8s) and Docker containerization
Experience with the ELK stack (Elasticsearch, Logstash, Kibana) for logging and monitoring
Good to have: understanding of Java programming and troubleshooting Java applications; working knowledge of SQL and MongoDB databases; familiarity with Angular for frontend monitoring and diagnostic tooling
Strong understanding of system architecture, cloud infrastructure, and networking
Experience with Infrastructure as Code (IaC) tools (e.g., Terraform, Ansible)
Demonstrated experience with monitoring and observability platforms
Excellent problem-solving skills and ability to troubleshoot complex systems
Outstanding verbal and written communication skills
Preferred Skills:
Must have: experience with the AWS public cloud platform; knowledge and experience in Azure and GCP is good to have
Knowledge of CI/CD tools (Jenkins, GitLab CI, GitHub Actions)
Familiarity with service mesh technologies (e.g., Istio)
Experience with scripting languages (Python, Bash)
Understanding of distributed systems and microservices architecture
Experience implementing SLOs, SLIs, and SLAs
Knowledge of security best practices
Certification in relevant technologies (CKA, AWS, etc.)
Roles and Responsibilities:
System Reliability and Performance: Design, implement, and maintain highly available and scalable infrastructure. Define and track Service Level Objectives (SLOs), Service Level Indicators (SLIs), and error budgets. Conduct capacity planning and performance optimization for critical systems. Implement strategies to improve system resilience and fault tolerance. Perform regular system health checks and proactive maintenance.
Monitoring and Observability: Deploy and maintain comprehensive monitoring solutions using the ELK stack and other tools. Create and refine dashboards for system metrics, logs, and application performance. Set up effective alerting systems with appropriate thresholds to minimize alert fatigue. Implement distributed tracing to understand system behavior and identify bottlenecks. Ensure proper logging and telemetry across all services.
Incident Management and Response: Lead incident response efforts, including troubleshooting, mitigation, and resolution. Conduct thorough post-incident reviews to identify root causes and preventive measures. Document incidents, resolutions, and knowledge for future reference. Develop and maintain runbooks for common operational procedures. Participate in on-call rotation to provide 24/7 coverage for critical systems.
Automation and Toil Reduction: Identify and eliminate toil through systematic automation. Develop automated solutions for recurring operational tasks. Implement Infrastructure as Code (IaC) practices for consistent environment provisioning. Create self-service tools for developers to reduce operational dependencies. Automate testing and deployment processes for improved efficiency.
CI/CD Pipeline Management: Design and maintain reliable CI/CD pipelines for continuous deployment. Implement automated testing within deployment workflows. Ensure smooth and reliable deployment processes with minimal disruption. Develop strategies for canary deployments and feature flagging. Create rollback mechanisms for quick recovery from failed deployments.
Infrastructure Management: Manage Kubernetes clusters and containerized applications. Oversee configuration management and version control for infrastructure. Implement security best practices and compliance requirements. Optimize resource utilization and cost efficiency. Ensure proper backup and disaster recovery procedures.
Collaboration and Knowledge Sharing: Work closely with development teams to improve application reliability. Provide guidance on architectural decisions from a reliability perspective. Conduct regular knowledge sharing sessions and documentation updates. Train team members on SRE practices and tools. Contribute to the development of SRE culture across the organization.
Working Environment: Collaborative team environment focused on continuous improvement. Opportunity to work with cutting-edge technologies and solve complex problems. Balance of project work and operational responsibilities. Culture that values automation, innovation, and reliability. Emphasis on learning and professional development.
Success Metrics: Improvement in system availability and reliability metrics. Reduction in mean time to detect (MTTD) and mean time to resolve (MTTR) incidents. Decreased frequency of production incidents and outages. Increased automation coverage and reduced manual operational work. Successful implementation of SLOs and monitoring systems. Positive feedback from development teams on collaboration and support.
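The error-budget tracking mentioned in the responsibilities above is simple arithmetic worth making concrete: an availability SLO implies a fixed allowance of downtime per window, and the budget "spent" is measured downtime against that allowance. A minimal sketch (window length and figures are illustrative):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime per window for a given availability SLO.
    E.g. a 99.9% SLO over 30 days allows (1 - 0.999) * 43200 = 43.2 minutes."""
    return (1 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is breached."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

budget = error_budget_minutes(0.999)            # 43.2 minutes
remaining = budget_remaining(0.999, downtime_minutes=21.6)  # half the budget left
```

Teams typically gate risky deployments on `remaining`: plenty of budget left means ship; budget nearly spent means slow down and invest in reliability.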
Posted 1 week ago
1.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As an Associate Manager - Data IntegrationOps, you will play a crucial role in supporting and managing data integration and operations programs within our data organization. Your responsibilities will involve maintaining and optimizing data integration workflows, ensuring data reliability, and supporting operational excellence. To succeed in this position, you will need a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support. Your primary duties will include assisting in the management of Data IntegrationOps programs, aligning them with business objectives, data governance standards, and enterprise data strategies. You will also be involved in monitoring and enhancing data integration platforms through real-time monitoring, automated alerting, and self-healing capabilities to improve uptime and system performance. Additionally, you will help develop and enforce data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization. Collaboration with cross-functional teams will be essential to optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. You will also contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Continuous improvement initiatives will be part of your responsibilities to enhance the reliability, scalability, and efficiency of data integration processes. Furthermore, you will be involved in supporting data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. 
Developing API-driven data integration solutions using REST APIs and Kafka, deploying and managing cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, and participating in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins will also be part of your role. Your qualifications should include at least 9 years of technology work experience in a large-scale, global organization, preferably in the CPG (Consumer Packaged Goods) industry. You should also have 4+ years of experience in Data Integration, Data Operations, and Analytics, as well as experience working in cross-functional IT organizations. Leadership/management experience supporting technical teams and hands-on experience in monitoring and supporting SAP BW processes are also required qualifications for this role. In summary, as an Associate Manager - Data IntegrationOps, you will be responsible for supporting and managing data integration and operations programs, collaborating with cross-functional teams, and ensuring the efficiency and reliability of data integration processes. Your expertise in enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support will be key to your success in this role.
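The automated alerting and data-reliability work described in this role often starts with a simple volume check: flag any pipeline run whose row count deviates too far from the expected volume. A minimal sketch, with an assumed 10% tolerance (real thresholds would be tuned per dataset):

```python
def volume_alert(expected: int, actual: int, tolerance: float = 0.1) -> bool:
    """Return True when a run's row count deviates from the expected
    volume by more than `tolerance` -- a basic operational health check
    that can feed an automated alert."""
    if expected == 0:
        return actual != 0
    return abs(actual - expected) / expected > tolerance

flagged = volume_alert(10000, 8500)  # 15% short of expectation
ok = volume_alert(10000, 9600)       # within the 10% tolerance
```

A monitoring job would run this per pipeline per day, raising an alert (or a ServiceNow incident) whenever it returns True.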
Posted 1 week ago
5.0 - 10.0 years
8 - 17 Lacs
Pune
Work from Office
Greetings from Searce!
Position: Sr. Cloud Reliability Engineer (SRE) | GCP Cloud
Location: Mumbai/Pune
Experience: 5+ years
Overview of the role: As a Site Reliability Engineer (SRE) in the Cloud Managed Services team at Searce, you play a pivotal role in ensuring the reliability, scalability, and performance of our cloud-based infrastructure. You'll be at the forefront of managing and optimizing cloud services to deliver high-quality and resilient solutions.
Key Responsibilities: Lead and manage the Cloud Reliability teams to provide strong Managed Services support to end customers. Isolate, troubleshoot and resolve issues reported by CMS clients in their cloud environments. Drive communication with the customer, providing details about the issue, current steps, next plan of action, and ETA. Gather clients' requirements related to the use of specific cloud services and provide assistance in setting them up and resolving issues. Create SOPs and knowledge articles for use by the L1 teams to resolve common issues. Identify recurring issues, perform root cause analysis and propose/implement preventive actions. Follow the change management procedure to identify, record and implement changes. Plan and deploy OS and security patches in Windows/Linux environments and upgrade k8s clusters. Identify recurring manual activities and contribute to automation. Provide technical guidance and educate team members on development and operations. Monitor metrics and develop ways to improve. System troubleshooting and problem-solving across platform and application domains. Ability to use a wide variety of open-source technologies and cloud services. Build, maintain, and monitor configuration standards. Ensure critical system security through best-in-class cloud security solutions.
Qualifications: 5+ years of experience in a similar role. Bachelor's degree or the equivalent combination of education and experience.
Strong organizational and communication skills. Strong ability to multitask. Comfort working with multiple groups within the business. Why Searce? Joining Searce's Cloud Managed Services team means being part of a dynamic and collaborative environment where innovation and reliability are at the core. As an SRE, you'll have the opportunity to work with cutting-edge technologies and contribute to the success of cloud-based solutions for our clients.
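The monitoring and reliability responsibilities above often center on tracking availability against an error budget. A minimal sketch, assuming a hypothetical 99.9% SLO and made-up incident data (not Searce's actual tooling):

```python
# Sketch: compute monthly availability and remaining error budget.
# The 99.9% SLO and the incident durations are hypothetical examples.

def availability(total_minutes: int, downtime_minutes: float) -> float:
    """Fraction of the period the service was up."""
    return (total_minutes - downtime_minutes) / total_minutes

def remaining_error_budget(total_minutes: int, downtime_minutes: float,
                           slo: float = 0.999) -> float:
    """Minutes of allowed downtime left before the SLO is breached."""
    budget = total_minutes * (1 - slo)
    return budget - downtime_minutes

MINUTES_IN_30_DAYS = 30 * 24 * 60   # 43200
incidents = [12.0, 7.5]             # hypothetical downtime per incident, in minutes
down = sum(incidents)

print(round(availability(MINUTES_IN_30_DAYS, down), 5))            # 0.99955
print(round(remaining_error_budget(MINUTES_IN_30_DAYS, down), 1))  # 23.7
```

The same arithmetic generalizes to any SLO window; the point is that "monitor metrics and develop ways to improve them" usually starts with a number like the budget remaining above.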
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
You are a detail-oriented and proactive Associate Manager - BIOps Program Management responsible for supporting and optimizing Business Intelligence Operations (BIOps) programs. Your role involves leveraging your expertise in BI governance, data analytics, cloud-based BI platforms, automation, and operational processes to implement scalable BIOps strategies, enhance BI platform performance, and ensure the availability, reliability, and efficiency of enterprise analytics solutions. Your responsibilities include managing and maintaining BIOps programs to align with business objectives, data governance standards, and enterprise data strategies. You will contribute to implementing real-time monitoring, automated alerting, and self-healing capabilities to improve BI platform uptime and performance. Furthermore, you will support the development and enforcement of BI governance models, operational frameworks, and execution roadmaps for seamless BI delivery. Collaborating closely with cross-functional teams such as Data Engineering, Analytics, AI/ML, CloudOps, and DataOps, you will execute Data & Analytics platform strategies to foster a data-first culture. You will provide operational support for PepsiCo's Data & Analytics program and platform management to ensure consistency with global data initiatives. Additionally, you will assist in enabling proactive issue identification, self-healing capabilities, and continuous platform sustainment across the PepsiCo Data Estate. Your role also involves ensuring high availability and optimal performance of BI tools like Power BI, Tableau, SAP BO, and MicroStrategy. You will contribute to real-time observability, monitoring, and incident management processes to maintain system efficiency and minimize downtime. Working closely with various teams, you will optimize data models, enhance report performance, and support data-driven decision-making. 
To excel in this role, you should possess 7+ years of technology work experience in a large-scale global organization, preferably in the CPG industry. Additionally, you should have 7+ years of experience in the Data & Analytics field, exposure to BI operations and tools, and 4+ years of experience in a leadership or team coordination role. Your ability to empathize with customers, prioritize their needs, and advocate for timely resolutions will be crucial. Furthermore, your passion for delivering excellent customer experiences, fostering a customer-first culture, and willingness to learn new skills and technologies will drive your success in this dynamic environment. Your strong interpersonal skills, ability to analyze complex issues, build cross-functional relationships, and achieve results in fast-paced environments will be essential. Your familiarity with cloud infrastructure, BI platforms, and modern site reliability practices will enable you to support operational requirements effectively. By leveraging your expertise and collaborating with stakeholders, you will contribute to the operational excellence of BI solutions and enhance system performance. Overall, your role as an Associate Manager - BIOps Program Management will involve supporting and optimizing BIOps programs, enhancing BI platform performance, and ensuring the availability, reliability, and efficiency of enterprise analytics solutions. Your proactive approach, technical expertise, and collaboration with cross-functional teams will be instrumental in driving operational excellence and fostering a data-first culture within PepsiCo.
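The real-time monitoring and automated alerting described above can be as simple as checking refresh telemetry against freshness and failure thresholds. A minimal sketch with a hypothetical refresh log and thresholds (dashboard names and limits are illustrative, not PepsiCo's actual platform):

```python
# Sketch: flag BI dashboards whose refreshes are failing or stale.
# Dashboard names, timestamps, and thresholds are illustrative.
from datetime import datetime, timedelta

refresh_log = {
    "sales_daily":  {"last_success": datetime(2024, 1, 10, 6, 0), "failures": 0},
    "supply_chain": {"last_success": datetime(2024, 1, 9, 6, 0),  "failures": 3},
}

def alerts(log, now, max_age=timedelta(hours=24), max_failures=2):
    """Return dashboards that are stale or failing repeatedly."""
    flagged = []
    for name, stats in log.items():
        stale = now - stats["last_success"] > max_age
        failing = stats["failures"] >= max_failures
        if stale or failing:
            flagged.append(name)
    return flagged

now = datetime(2024, 1, 10, 12, 0)
print(alerts(refresh_log, now))  # ['supply_chain']
```

In practice the same check would feed an alerting channel rather than a print, and "self-healing" would mean triggering a re-refresh for flagged dashboards before paging anyone.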
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
You are a detail-oriented and proactive Associate Manager - BIOps Program Management who will be responsible for supporting and optimizing Business Intelligence Operations (BIOps) programs. Your role will involve implementing scalable strategies, improving BI platform performance, and ensuring the availability, reliability, and efficiency of enterprise analytics solutions. You will assist in managing and maintaining BIOps programs to ensure alignment with business objectives, data governance standards, and enterprise data strategies. Additionally, you will contribute to the implementation of real-time monitoring, automated alerting, and self-healing capabilities to enhance BI platform uptime and performance. Your responsibilities will include supporting the development and enforcement of BI governance models, operational frameworks, and execution roadmaps for seamless BI delivery. You will also assist in standardizing and automating BI pipeline workflows, report generation, and dashboard refresh processes to improve operational efficiency. Collaboration with cross-functional teams, including Data Engineering, Analytics, AI/ML, CloudOps, and DataOps, will be crucial to executing Data & Analytics platform strategies and fostering a data-first culture. You will provide operational support for PepsiCo's Data & Analytics program and platform management to ensure consistency with global data initiatives. Your role will also involve ensuring high availability and optimal performance of BI tools such as Power BI, Tableau, SAP BO, and MicroStrategy. You will contribute to real-time observability, monitoring, and incident management processes to maintain system efficiency and minimize downtime. Working closely with various teams, you will support data-driven decision-making efforts and coordinate with IT, business leaders, and compliance teams to ensure BIOps processes align with regulatory and security requirements. 
Furthermore, you will provide periodic updates on operational performance, risk assessments, and BIOps maturity progress to relevant stakeholders. You will support end-to-end BI operations, maintain service-level agreements (SLAs), engage with subject matter experts (SMEs), and contribute to developing and maintaining operational policies, structured processes, and automation to enhance operational efficiency. Your qualifications should include 7+ years of technology work experience in a large-scale global organization, 7+ years of experience in the Data & Analytics field, exposure to BI operations and tools, and experience working within a cross-functional IT organization. Additionally, you should have 4+ years of experience in a leadership or team coordination role, and the ability to empathize with customers, prioritize customer needs, and advocate for timely resolutions, among other skills and qualities mentioned in the job description.
Posted 1 week ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate, We're hiring a Cloud Network Engineer to design and manage secure, performant cloud networks. Key Responsibilities: Design VPCs, subnets, and routing policies. Configure load balancers, firewalls, and VPNs. Optimize traffic flow and network security. Required Skills & Qualifications: Experience with cloud networking in AWS/Azure/GCP. Understanding of TCP/IP, DNS, and VPNs. Familiarity with tools like Palo Alto, Cisco, or Fortinet. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you. Srinivasa Reddy Kandi, Delivery Manager, Integra Technologies
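Designing VPCs and subnets, as this role calls for, starts with carving an address range into non-overlapping blocks. A minimal sketch using Python's standard ipaddress module with a hypothetical VPC CIDR:

```python
# Sketch: carve a hypothetical VPC CIDR into equal-sized subnets.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")   # hypothetical VPC range
subnets = list(vpc.subnets(new_prefix=24))  # all possible /24 subnets

print(len(subnets))               # 256
print(str(subnets[0]))            # 10.0.0.0/24
print(subnets[0].num_addresses)   # 256
```

Real cloud providers reserve a few addresses per subnet (AWS, for example, reserves the first four and the last), so usable host counts are slightly lower than `num_addresses`.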
Posted 1 week ago
10.0 - 20.0 years
35 - 50 Lacs
Pune
Hybrid
Principal Cloud Cost Optimization Engineer. Experience: 10-20 years. Salary: Competitive. Preferred Notice Period: Within 60 days. Opportunity Type: Hybrid (Pune). Placement Type: Full-time. (*Note: This is a requirement for one of Uplers' clients.) Must-have skills: FinOps OR Cloud Cost Optimization OR Cloud Financial Management OR Cloud FinOps OR Cloud Financial Operations OR FinOps Engineer, AND GCP OR AWS OR OCI, AND cloud cost reporting OR cloud spend OR cost dashboards OR cost analysis. Perforce Software (one of Uplers' clients) is looking for: About Perforce Software: Perforce is a community of collaborative experts, problem solvers, and possibility seekers who believe work should be both challenging and fun. We are proud to inspire creativity, foster belonging, support collaboration, and encourage wellness. At Perforce, you'll work with and learn from some of the best and brightest in business. Before you know it, you'll be in the middle of a rewarding career at a company headed in one direction: upward. With a global footprint spanning more than 80 countries and including over 75% of the Fortune 100, Perforce Software, Inc. is trusted by the world's leading brands to deliver solutions for the toughest challenges. The best-run DevOps teams in the world choose Perforce. Job Description / Position Summary: The Sr Director of Cloud Operations at Perforce is searching for a Principal FinOps Engineer I to build a next-generation FinOps platform that supports Perforce's SaaS environments, with a focus on cost efficiency, visibility, and financial accountability across multi-cloud production and CI/CD pipelines. In this role, you'll implement automated tools and practices that drive cloud cost optimization, reporting, and governance. You'll collaborate with Cloud Engineering, DevOps, and Finance teams to integrate FinOps best practices into our cloud operations, enabling forecasting, budget tracking, and spend transparency while ensuring the reliability and scalability of our services.
Responsibilities: Lead and evolve the organization's FinOps strategy and practices across multi-cloud environments (AWS, Azure, GCP, IBM Cloud, OCI) and various SaaS platforms (e.g., MongoDB, Datadog). Work with cloud and product vendors, Engineering, Finance, IT, and Accounts Payable teams to track cloud spending and align it with revenue. Analyze cloud cost and usage data to identify optimization opportunities, reduce waste, and ensure alignment with budget and business goals. Oversee the management and tracking of Savings Plans and Reserved Instances (RIs), including their coverage, utilization, and recommendations for adjustments. Maintain and automate the monthly cloud cost reporting dashboards and hold periodic reviews with stakeholders to drive visibility and transparency. The candidate should have a thorough understanding of the FOCUS standard, including its data schema, cost and usage data normalization, and integration across multi-cloud platforms. As a strategic FinOps SME, the candidate will contribute to optimizing existing in-house cost reporting tools, aligning them with industry best practices and the organization's evolving enterprise needs. They must be capable of mapping provider-specific cost and usage reports (AWS, Azure, GCP, OCI, IBM Cloud) to the FOCUS schema to enable reporting and cost allocation. Work closely with Cloud Architects and DevOps engineers to align with FinOps goals. Leverage FinOps tools (e.g., AWS Cost Explorer, Azure Cost Management) to automate reporting and governance. Own forecasting and budgeting activities for cloud expenditure and ensure adjustments in a timely manner. Promote best practices in cloud governance, tagging strategy, and chargeback. Mentor cross-functional teams on FinOps principles.
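Mapping provider-specific cost and usage reports to the FOCUS schema, as the responsibilities above describe, is essentially column normalization. A minimal sketch assuming a handful of AWS Cost and Usage Report fields and a small illustrative subset of FOCUS column names (the full FOCUS specification defines many more columns and semantics than shown here):

```python
# Sketch: normalize an AWS CUR line item to a FOCUS-style record.
# Field names on both sides are an illustrative subset, not the full schemas.

AWS_TO_FOCUS = {
    "lineItem/UnblendedCost":  "BilledCost",
    "product/ProductName":     "ServiceName",
    "lineItem/UsageAccountId": "SubAccountId",
    "lineItem/UsageStartDate": "ChargePeriodStart",
}

def to_focus(cur_row: dict) -> dict:
    """Rename known CUR columns to FOCUS names; tag the provider."""
    record = {focus: cur_row[aws]
              for aws, focus in AWS_TO_FOCUS.items() if aws in cur_row}
    record["ProviderName"] = "AWS"
    return record

row = {
    "lineItem/UnblendedCost": "0.42",
    "product/ProductName": "Amazon EC2",
    "lineItem/UsageAccountId": "123456789012",
    "lineItem/UsageStartDate": "2024-01-01T00:00:00Z",
}
print(to_focus(row)["ServiceName"])  # Amazon EC2
```

A production mapping would also normalize types (costs to decimals, timestamps to a common format) and maintain one such table per provider, which is exactly what makes cross-cloud cost allocation tractable.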
Bonus: FinOps Practitioner Certification (Certified Cloud Financial Management Professional (CCFMP), AWS Certified FinOps Professional, Google Cloud Certified - Professional Cloud Financial Manager, Microsoft Certified: Azure FinOps Engineer, etc.). Requirements: Bachelor's or Master's degree in Computer Science, IT Engineering, or a related field. 10+ years of experience in Cloud Engineering/Operations, with a minimum of 5 years in a FinOps role. Deep knowledge of cloud provider cost structures, pricing models, and billing mechanisms (AWS, Azure, GCP, OCI, IBM Cloud). Proven expertise in Savings Plans, Reserved Instances, and other cloud cost optimization opportunities. Strong experience with FinOps tools such as AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis, GCP Billing reports, etc. Familiarity with DevOps and Cloud Operations frameworks and how FinOps integrates into CI/CD and Infrastructure as Code (IaC) processes. Exceptional analytical skills with the ability to interpret large datasets and generate actionable insights. Excellent communication, collaboration, and persuasion skills, capable of working with both technical and finance stakeholders. Hands-on experience building automated cost reports, dashboards, and budget tracking mechanisms. Ability to work independently and collaborate effectively with cross-functional teams in a fast-paced environment. How to apply for this opportunity? Easy 3-step process: Click on Apply and register or log in to our portal. Upload your updated resume and complete the screening form. Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
10.0 - 20.0 years
35 - 50 Lacs
Pune
Hybrid
Principal FinOps Engineer. Experience: 10-20 years. Salary: Competitive. Preferred Notice Period: Within 60 days. Opportunity Type: Hybrid (Pune). Placement Type: Full-time. (*Note: This is a requirement for one of Uplers' clients.) Must-have skills: FinOps OR Cloud Cost Optimization OR Cloud Financial Management OR Cloud FinOps OR Cloud Financial Operations OR FinOps Engineer, AND GCP OR AWS OR OCI, AND cloud cost reporting OR cloud spend OR cost dashboards OR cost analysis. Perforce Software (one of Uplers' clients) is looking for: About Perforce Software: Perforce is a community of collaborative experts, problem solvers, and possibility seekers who believe work should be both challenging and fun. We are proud to inspire creativity, foster belonging, support collaboration, and encourage wellness. At Perforce, you'll work with and learn from some of the best and brightest in business. Before you know it, you'll be in the middle of a rewarding career at a company headed in one direction: upward. With a global footprint spanning more than 80 countries and including over 75% of the Fortune 100, Perforce Software, Inc. is trusted by the world's leading brands to deliver solutions for the toughest challenges. The best-run DevOps teams in the world choose Perforce. Job Description / Position Summary: The Sr Director of Cloud Operations at Perforce is searching for a Principal FinOps Engineer I to build a next-generation FinOps platform that supports Perforce's SaaS environments, with a focus on cost efficiency, visibility, and financial accountability across multi-cloud production and CI/CD pipelines. In this role, you'll implement automated tools and practices that drive cloud cost optimization, reporting, and governance. You'll collaborate with Cloud Engineering, DevOps, and Finance teams to integrate FinOps best practices into our cloud operations, enabling forecasting, budget tracking, and spend transparency while ensuring the reliability and scalability of our services.
Responsibilities: Lead and evolve the organization's FinOps strategy and practices across multi-cloud environments (AWS, Azure, GCP, IBM Cloud, OCI) and various SaaS platforms (e.g., MongoDB, Datadog). Work with cloud and product vendors, Engineering, Finance, IT, and Accounts Payable teams to track cloud spending and align it with revenue. Analyze cloud cost and usage data to identify optimization opportunities, reduce waste, and ensure alignment with budget and business goals. Oversee the management and tracking of Savings Plans and Reserved Instances (RIs), including their coverage, utilization, and recommendations for adjustments. Maintain and automate the monthly cloud cost reporting dashboards and hold periodic reviews with stakeholders to drive visibility and transparency. The candidate should have a thorough understanding of the FOCUS standard, including its data schema, cost and usage data normalization, and integration across multi-cloud platforms. As a strategic FinOps SME, the candidate will contribute to optimizing existing in-house cost reporting tools, aligning them with industry best practices and the organization's evolving enterprise needs. They must be capable of mapping provider-specific cost and usage reports (AWS, Azure, GCP, OCI, IBM Cloud) to the FOCUS schema to enable reporting and cost allocation. Work closely with Cloud Architects and DevOps engineers to align with FinOps goals. Leverage FinOps tools (e.g., AWS Cost Explorer, Azure Cost Management) to automate reporting and governance. Own forecasting and budgeting activities for cloud expenditure and ensure adjustments in a timely manner. Promote best practices in cloud governance, tagging strategy, and chargeback. Mentor cross-functional teams on FinOps principles.
Bonus: FinOps Practitioner Certification (Certified Cloud Financial Management Professional (CCFMP), AWS Certified FinOps Professional, Google Cloud Certified - Professional Cloud Financial Manager, Microsoft Certified: Azure FinOps Engineer, etc.). Requirements: Bachelor's or Master's degree in Computer Science, IT Engineering, or a related field. 10+ years of experience in Cloud Engineering/Operations, with a minimum of 5 years in a FinOps role. Deep knowledge of cloud provider cost structures, pricing models, and billing mechanisms (AWS, Azure, GCP, OCI, IBM Cloud). Proven expertise in Savings Plans, Reserved Instances, and other cloud cost optimization opportunities. Strong experience with FinOps tools such as AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis, GCP Billing reports, etc. Familiarity with DevOps and Cloud Operations frameworks and how FinOps integrates into CI/CD and Infrastructure as Code (IaC) processes. Exceptional analytical skills with the ability to interpret large datasets and generate actionable insights. Excellent communication, collaboration, and persuasion skills, capable of working with both technical and finance stakeholders. Hands-on experience building automated cost reports, dashboards, and budget tracking mechanisms. Ability to work independently and collaborate effectively with cross-functional teams in a fast-paced environment. How to apply for this opportunity? Easy 3-step process: Click on Apply and register or log in to our portal. Upload your updated resume and complete the screening form. Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talents find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 week ago
9.0 - 14.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Associate Manager, D&AI Data IntegrationOps (ECG, TIBCO, KAFKA - Sustain). Overview: We are seeking an Associate Manager, Data IntegrationOps, to support and assist in managing data integration and operations (IntegrationOps) programs within our growing data organization. In this role, you will help maintain and optimize data integration workflows, ensure data reliability, and support operational excellence. This position requires a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support. Support the management of Data IntegrationOps programs by assisting in aligning with business objectives, data governance standards, and enterprise data strategies. Monitor and enhance data integration platforms by implementing real-time monitoring, automated alerting, and self-healing capabilities to help improve uptime and system performance under the guidance of senior team members. Assist in developing and enforcing data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization. Support the standardization and automation of data integration workflows, including report generation and dashboard refreshes. Collaborate with cross-functional teams to help optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. Provide assistance in Data & Analytics technology transformations by supporting full sustainment capabilities, including data platform management and proactive issue identification with automated solutions. Contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Support continuous improvement initiatives to help enhance the reliability, scalability, and efficiency of data integration processes.
Engage with business and IT teams to help identify operational challenges and provide solutions that align with the organization's data strategy. Develop technical expertise in ETL/ELT processes, cloud-based data platforms, and API-driven data integration, working closely with senior team members. Assist with monitoring, incident management, and troubleshooting in a data operations environment to ensure smooth daily operations. Support the implementation of sustainable solutions for operational challenges by helping analyze root causes and recommending improvements. Foster strong communication and collaboration skills, contributing to effective engagement with cross-functional teams and stakeholders. Demonstrate a passion for continuous learning and adapting to emerging technologies in data integration and operations. Responsibilities: Support and maintain data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. Assist in developing API-driven data integration solutions using REST APIs and Kafka to ensure seamless data movement across platforms. Contribute to the deployment and management of cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, working closely with the team. Help automate data pipelines and participate in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins. Monitor system reliability using observability tools such as Splunk, Grafana, Prometheus, and other custom monitoring solutions, reporting issues as needed. Assist in end-to-end data integration operations by testing and monitoring processes to maintain service quality and support global products and projects. Support the day-to-day operations of data products, ensuring SLAs are met and assisting in collaboration with SMEs to fulfill business demands.
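API-driven data movement like the REST/Kafka integration described above usually needs retry handling to keep pipelines within their SLAs when endpoints hiccup. A minimal sketch of exponential backoff around a flaky fetch (the fetcher, its failure pattern, and the payload are simulated, not a real endpoint):

```python
# Sketch: retry a flaky data pull with exponential backoff.
# The simulated fetcher fails twice before succeeding; delays are illustrative.
import time

def with_retries(fetch, attempts=4, base_delay=0.01):
    """Call fetch(), doubling the wait after each transient failure."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:               # simulate two transient failures
        raise ConnectionError("transient")
    return [{"id": 1, "value": 42}]  # hypothetical payload

print(with_retries(flaky_fetch))  # [{'id': 1, 'value': 42}]
```

Production pipelines typically add jitter to the delay and a cap on total wait time, and record each retry in the observability stack so recurring transient failures become visible.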
Support incident management processes, helping to resolve service outages and ensuring the timely resolution of critical issues. Assist in developing and maintaining operational processes to enhance system efficiency and resilience through automation. Collaborate with cross-functional teams like Data Engineering, Analytics, AI/ML, CloudOps, and DataOps to improve data reliability and contribute to data-driven decision-making. Work closely with teams to troubleshoot and resolve issues related to cloud infrastructure and data services, escalating to senior team members as necessary. Support building and maintaining relationships with internal stakeholders to align data integration operations with business objectives. Engage directly with customers, actively listening to their concerns, addressing challenges, and helping set clear expectations. Promote a customer-centric approach by contributing to efforts that enhance the customer experience and empower the team to advocate for customer needs. Assist in incorporating customer feedback and business priorities into operational processes to ensure continuous improvement. Contribute to the work intake and Agile processes for data platform teams, ensuring operational excellence through collaboration and continuous feedback. Support the execution of Agile frameworks, helping drive a culture of adaptability, efficiency, and learning within the team. Help align the team with a shared vision, ensuring a collaborative approach while contributing to a culture of accountability. Mentor junior technical team members, supporting their growth and ensuring adherence to best practices in data integration. Contribute to resource planning by helping assess team capacity and ensuring alignment with business objectives. Remove productivity barriers in an agile environment, assisting the team to shift priorities as needed without compromising quality. 
Support continuous improvement in data integration processes by helping evaluate and suggest optimizations to enhance system performance. Leverage technical expertise in cloud and computing technologies to support business goals and drive operational success. Stay informed on emerging trends and technologies, helping bring innovative ideas to the team and supporting ongoing improvements in data operations. Qualifications: 9+ years of technology work experience in a large-scale, global organization; CPG (Consumer Packaged Goods) industry preferred. 4+ years of experience in Data Integration, Data Operations, and Analytics, supporting and maintaining enterprise data platforms. 4+ years of experience working in cross-functional IT organizations, collaborating with teams such as Data Engineering, CloudOps, DevOps, and Analytics. 1+ years of leadership/management experience supporting technical teams and contributing to operational efficiency initiatives. 4+ years of hands-on experience in monitoring and supporting SAP BW processes for data extraction, transformation, and loading (ETL). Managing Process Chains and Batch Jobs to ensure smooth data load operations and identifying failures for quick resolution. Debugging and troubleshooting data load failures and performance bottlenecks in SAP BW systems. Validating data consistency and integrity between source systems and BW targets. Strong understanding of SAP BW architecture, InfoProviders, DSOs, Cubes, and MultiProviders. Knowledge of SAP BW process chains and event-based triggers to manage and optimize data loads. Exposure to SAP BW on HANA and knowledge of SAP's modern data platforms. Basic knowledge of integrating SAP BW with other ETL/ELT tools like Informatica IICS, PowerCenter, DDH, and Azure Data Factory. Knowledge of ETL/ELT tools such as Informatica IICS, PowerCenter, Teradata, and Azure Data Factory.
- Hands-on knowledge of cloud-based data integration platforms such as Azure Data Services, AWS Redshift, Snowflake, and Google BigQuery.
- Familiarity with API-driven data integration (e.g., REST APIs, Kafka) and supporting cloud-based data pipelines.
- Basic proficiency in Infrastructure-as-Code (IaC) tools such as Terraform, GitOps, Kubernetes, and Jenkins for automating infrastructure management.
- Understanding of Site Reliability Engineering (SRE) principles, with a focus on proactive monitoring and process improvements.
- Strong communication skills, with the ability to explain technical concepts clearly to both technical and non-technical stakeholders.
- Ability to effectively advocate for customer needs and collaborate with teams to ensure alignment between business and technical solutions.
- Interpersonal skills to help build relationships with stakeholders across both business and IT teams.
- Customer Obsession: Enthusiastic about ensuring high-quality customer experiences and continuously addressing customer needs.
- Ownership Mindset: Willingness to take responsibility for issues and drive timely resolutions while maintaining service quality.
- Ability to support and improve operational efficiency in large-scale, mission-critical systems.
- Some experience leading or supporting technical teams in a cloud-based environment, ideally within Microsoft Azure.
- Able to deliver operational services in fast-paced, transformation-driven environments.
- Proven capability in balancing business and IT priorities, executing solutions that drive mutually beneficial outcomes.
- Basic experience with Agile methodologies, and an ability to collaborate effectively across virtual teams and different functions.
- Understanding of master data management (MDM), data standards, and familiarity with data governance and analytics concepts.
- Openness to learning new technologies, tools, and methodologies to stay current in the rapidly evolving data space.
Passion for continuous improvement and keeping up with trends in data integration and cloud technologies.
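The batch-load monitoring duties above (watching process chains and load jobs, flagging failures for quick resolution) can be sketched generically. This is a minimal, hypothetical triage helper; the job names and fields are invented for illustration, and it does not use any SAP API:

```python
from dataclasses import dataclass

@dataclass
class JobRun:
    name: str
    status: str          # "success" | "failed" | "running"
    runtime_min: float   # elapsed runtime in minutes

def triage(runs, runtime_limit_min=60.0):
    """Flag failed loads and long-running jobs for operator follow-up."""
    failed = [r.name for r in runs if r.status == "failed"]
    slow = [r.name for r in runs
            if r.status == "running" and r.runtime_min > runtime_limit_min]
    return {"failed": failed, "long_running": slow}

# Hypothetical process-chain runs from a nightly load window.
runs = [
    JobRun("pc_sales_daily", "success", 22.0),
    JobRun("pc_finance_delta", "failed", 5.0),
    JobRun("pc_inventory_full", "running", 95.0),
]
print(triage(runs))
# {'failed': ['pc_finance_delta'], 'long_running': ['pc_inventory_full']}
```

In practice the run records would come from the scheduler's monitoring interface rather than a hard-coded list; the point is separating "failed" from "still running but over budget", since the two need different follow-up.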
Posted 2 weeks ago
5.0 - 10.0 years
8 - 17 Lacs
Pune, Mumbai (All Areas)
Work from Office
Greetings from Searce! Position - Sr. Cloud Reliability Engineer (SRE) | GCP Cloud Location - Mumbai/Pune Experience - 5+ years

Overview of the role: As a Site Reliability Engineer (SRE) in the Cloud Managed Services team at Searce, you play a pivotal role in ensuring the reliability, scalability, and performance of our cloud-based infrastructure. You'll be at the forefront of managing and optimizing cloud services to deliver high-quality and resilient solutions.

Responsibilities:
- Lead and manage the Cloud Reliability teams to provide strong Managed Services support to end customers.
- Isolate, troubleshoot, and resolve issues reported by CMS clients in their cloud environments.
- Drive communication with the customer, providing details about the issue, current steps, next plan of action, and ETA.
- Gather clients' requirements related to the use of specific cloud services, and provide assistance in setting them up and resolving issues.
- Create SOPs and knowledge articles for use by the L1 teams to resolve common issues.
- Identify recurring issues, perform root cause analysis, and propose/implement preventive actions.
- Follow the change management procedure to identify, record, and implement changes.
- Plan and deploy OS and security patches in Windows/Linux environments and upgrade k8s clusters.
- Identify recurring manual activities and contribute to automation.
- Provide technical guidance and educate team members on development and operations.
- Monitor metrics and develop ways to improve them.
- Troubleshoot and solve problems across platform and application domains.
- Use a wide variety of open-source technologies and cloud services.
- Build, maintain, and monitor configuration standards.
- Ensure critical system security using best-in-class cloud security solutions.

Qualifications:
- 5+ years of experience in a similar role
- Bachelor's degree or the equivalent combination of education and experience
- Strong organizational and communication skills
- Strong ability to multitask
- Comfortable working with multiple groups within the business

Why Searce? Joining Searce's Cloud Managed Services team means being part of a dynamic and collaborative environment where innovation and reliability are at the core. As an SRE, you'll have the opportunity to work with cutting-edge technologies and contribute to the success of cloud-based solutions for our clients.
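The "monitor metrics and develop ways to improve" responsibility above is often framed in SRE terms as an error budget: the fraction of failures an availability SLO permits. A minimal sketch, assuming a request-based SLO (the numbers are illustrative, not from this posting):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO.

    slo: availability target, e.g. 0.999 for 99.9%.
    Returns 1.0 when no budget is spent; 0.0 or less when exhausted.
    """
    allowed_failures = (1.0 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
# 250 observed failures leave roughly 75% of the budget.
print(error_budget_remaining(0.999, 1_000_000, 250))  # ~0.75
```

Teams typically gate risky work (e.g., large patch rollouts or k8s upgrades) on how much budget remains, which turns "reliability vs. velocity" into a measurable trade-off.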
Posted 2 weeks ago
5.0 - 10.0 years
10 - 14 Lacs
Pune
Work from Office
As a Site Reliability Engineer, you will work in an agile, collaborative environment to build, deploy, configure, and maintain systems for the IBM client business. In this role, you will lead the problem resolution process for our clients, from analysis and troubleshooting to deploying the latest software updates and fixes.

Your primary responsibilities include:
- 24x7 Observability: Be part of a worldwide team that monitors the health of production systems and services around the clock, ensuring continuous reliability and an optimal customer experience.
- Cross-Functional Troubleshooting: Collaborate with engineering teams to provide initial assessments and possible workarounds for production issues. Troubleshoot and resolve production issues effectively.
- Deployment and Configuration: Leverage Continuous Delivery (CI/CD) tools to deploy services and configuration changes at enterprise scale.
- Security and Compliance Implementation: Implement security measures that meet or exceed industry standards for regulations such as GDPR, SOC2, ISO 27001, PCI, HIPAA, and FBA.
- Maintenance and Support: Apply Couchbase security patches and upgrades, support Cassandra and MongoDB in the pager-duty rotation, and collaborate with Couchbase product support for issue resolution.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Bachelor's degree in Computer Science, IT, or equivalent.
- 5+ years of experience with a database such as Netezza, Db2, or MSSQL.
- 5+ years of experience in DevOps, CloudOps, or SRE roles.
- Foundational experience with Linux/Unix systems.
- Hands-on exposure to cloud platforms (IKS, AWS, or Azure).
- Understanding of networking and databases.
- Strong troubleshooting and problem-solving skills.

Preferred technical and professional experience:
- Databases: Strongly preferred experience with Netezza/Db2 database administration.
- Monitor and optimize database performance and reliability; configure and troubleshoot database issues.
- Kubernetes/OpenShift: Strongly preferred experience working with production Kubernetes/OpenShift environments.
- Automation/Scripting: In-depth experience with Ansible, Python, Terraform, and CI/CD tools such as Jenkins, IBM Continuous Delivery, and ArgoCD.
- Monitoring/Observability: Hands-on experience crafting alerts and dashboards using tools such as Instana, New Relic, and Grafana/Prometheus.
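Alerting of the kind described above (e.g., in Grafana/Prometheus) is commonly built on SLO burn rates rather than raw error counts. A hedged sketch of a multiwindow burn-rate check; the 14.4x threshold is a commonly used fast-burn paging value, not something from this posting, and the ratios here are invented:

```python
def burn_rate(error_ratio: float, slo: float) -> float:
    """How fast the error budget is being spent, relative to the SLO allowance.

    A burn rate of 1.0 spends the budget exactly over the SLO period;
    higher values exhaust it proportionally faster.
    """
    return error_ratio / (1.0 - slo)

def should_page(short_ratio: float, long_ratio: float,
                slo: float = 0.999, threshold: float = 14.4) -> bool:
    """Multiwindow burn-rate alert: page only if BOTH a short and a long
    window burn fast, which suppresses brief spikes (short-only) and
    already-recovered incidents (long-only)."""
    return (burn_rate(short_ratio, slo) >= threshold
            and burn_rate(long_ratio, slo) >= threshold)

print(should_page(0.02, 0.016))   # True: both windows burning well above 14.4x
print(should_page(0.02, 0.0005))  # False: long window shows recovery
```

In Prometheus this logic would typically live in a recording/alerting rule over `rate()`-derived error ratios; the Python form just makes the decision logic explicit.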
Posted 2 weeks ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
We are currently seeking a skilled System Administrator with over 10 years of experience in Linux/Windows environments and cloud migration (AWS & Azure) to join our team in Pune. In this role, you will be responsible for managing enterprise infrastructure, automating deployments, and leading cloud transformation projects. The ideal candidate should have the following key skills:
- Proven expertise in Linux distributions such as RedHat, Ubuntu, and CentOS, as well as Windows Server
- Deep understanding of Active Directory, Group Policies, and replication tools like Repadmin and Dcdiag
- Experience in cloud migrations and infrastructure design on Azure/AWS
- Strong knowledge of virtualization technologies such as VMware and KVM, along with scripting/automation tools like Ansible and Terraform
- Day-to-day management of virtual machines, including patching, upgrades, troubleshooting, and backup
- Familiarity with IIS, .NET, SFTP, DFS, clustering, and security best practices
- Additional advantage if experienced in GitLab CI/CD, CloudOps, and enterprise-level cloud project delivery

If you have a passion for system administration, cloud migration, and infrastructure design, and possess the required skills and experience, we would love to hear from you. Join us in this exciting opportunity to contribute to our team and drive innovation in cloud technologies.
Posted 2 weeks ago
1.0 - 3.0 years
2 - 5 Lacs
Coimbatore
Work from Office
* Design, deploy, and manage cloud infrastructure solutions.
* Monitor cloud-based systems to ensure performance, reliability, and security.
* Automate processes to streamline operations and reduce manual tasks.

Required Candidate profile:
* Experience in Linux system administration
* Exposure to AWS/Azure cloud platforms
* Knowledge of scripting languages like Python, Bash, or similar
* Understanding of CI/CD pipelines and DevOps practices
Posted 3 weeks ago
1.0 - 3.0 years
2 - 5 Lacs
Hyderabad
Work from Office
* Design, deploy, and manage cloud infrastructure solutions.
* Monitor cloud-based systems to ensure performance, reliability, and security.
* Automate processes to streamline operations and reduce manual tasks.

Required Candidate profile:
* Experience in Linux system administration
* Exposure to AWS/Azure cloud platforms
* Knowledge of scripting languages like Python, Bash, or similar
* Understanding of CI/CD pipelines and DevOps practices
Posted 3 weeks ago
1.0 - 3.0 years
1 - 5 Lacs
Coimbatore
Work from Office
* Design, deploy, and manage cloud infrastructure solutions.
* Monitor cloud-based systems to ensure performance, reliability, and security.
* Automate processes to streamline operations and reduce manual tasks.

Required Candidate profile:
* Experience in Linux system administration
* Exposure to AWS/Azure cloud platforms
* Knowledge of scripting languages like Python, Bash, or similar
* Understanding of CI/CD pipelines and DevOps practices
Posted 1 month ago
5.0 - 10.0 years
0 - 1 Lacs
Hyderabad
Work from Office
Hi, We have an immediate requirement for a Cloud Operations Manager with our organization, SHI Locuz Enterprise Solutions Pvt Ltd. PFB the JD for your reference.

JOB SUMMARY
The Cloud Operations Manager is responsible for overseeing the day-to-day operations of an organization's cloud infrastructure and services. This role ensures the cloud environments are efficient, secure, scalable, and fully operational to meet the business and technological needs of the organization. The Cloud Operations Manager will collaborate with cross-functional teams to deliver high-quality services and drive improvements in cloud architecture, automation, and resource optimization.

PRIMARY RESPONSIBILITIES
- Oversee and manage cloud operations to ensure seamless service delivery and optimized performance.
- Expertise in managing cloud infrastructure across major platforms (AWS, Azure, GCP).
- Proven experience in cloud operations, service management, and delivering high-quality cloud services at scale.
- Coordinate and collaborate with cross-functional teams to implement best practices in cloud operations.
- Manage incident response and problem resolution, and ensure effective root cause analysis.
- Implement cloud automation and orchestration processes to streamline operations and improve efficiency.
- Monitor cloud performance, security, and compliance, ensuring that SLAs and KPIs are consistently met.
- Lead and mentor cloud operations teams, fostering a culture of continuous improvement and innovation.
- Develop and maintain operational documentation, including runbooks, incident reports, and operational procedures.
- Familiarity with ITIL, DevOps, and Agile methodologies.
- Strong knowledge of cloud-native technologies, microservices, and containers (e.g., Kubernetes, Docker).
- Proficiency in scripting languages (e.g., Python, Bash) for automation and orchestration.

SECONDARY RESPONSIBILITIES
- Ensure that capacity planning and disaster recovery procedures are in place for cloud infrastructure.
- Conduct regular backups and failover testing, and ensure business continuity.
- Maintain detailed documentation for cloud operations, configurations, and processes.
- Report on cloud usage, incidents, and performance to senior management.
- Stay up to date with the latest cloud technologies and trends.
- Recommend and implement new tools and technologies to improve cloud infrastructure and operations.
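The backup and business-continuity duties above usually involve a retention policy deciding which backups to keep. A small, hypothetical keep-dailies-plus-weeklies pruning scheme (not any specific product's behaviour):

```python
from datetime import date, timedelta

def prune_backups(dates, keep_daily=7, keep_weekly=4):
    """Return the set of backup dates to retain: the newest `keep_daily`
    backups, plus the newest backup in each of the most recent
    `keep_weekly` ISO weeks. Everything else is a pruning candidate."""
    dates = sorted(dates, reverse=True)          # newest first
    keep = set(dates[:keep_daily])               # recent dailies
    weeks_seen = []
    for d in dates:
        wk = d.isocalendar()[:2]                 # (ISO year, ISO week)
        if wk not in weeks_seen:
            weeks_seen.append(wk)
            if len(weeks_seen) <= keep_weekly:   # weekly anchor points
                keep.add(d)
    return keep

# 30 consecutive daily backups starting 2024-01-01 (a Monday).
backups = [date(2024, 1, 1) + timedelta(days=i) for i in range(30)]
kept = prune_backups(backups)
print(sorted(kept))  # 9 dates: Jan 14, Jan 21, and Jan 24-30
```

Real schemes (e.g., grandfather-father-son) add monthly/yearly tiers, but the shape is the same: bucket by period, keep the newest per bucket, prune the rest.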
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Strong knowledge of AWS services, including but not limited to:
- Hands-on AWS networking skills (e.g., VPC, subnets, NACLs, Transit Gateway, route tables, load balancers, Direct Connect gateway, Route 53, etc.).
- Thorough understanding of networking concepts, especially TCP/IP, IP addressing, and subnet calculation.
- Solid experience with AWS security services: IAM (identity, resource, and service control policies, permission boundaries, roles, federation, etc.), security groups, KMS, ACM/ACM-PCA, Network Firewall, Config, GuardDuty, CloudTrail, Secrets Manager, Systems Manager (SSM), etc.
- Good knowledge of various AWS integration patterns, e.g., Lambda with Amazon EventBridge, and SNS.
- Any workload-related experience is a bonus, e.g., EKS, ECS, Auto Scaling, etc.
- Containerisation experience with Docker and EKS (preferred).

Infrastructure as Code and scripting:
- Solid hands-on experience with declarative languages, Terraform (and Terragrunt preferred), and their capabilities.
- Comfortable with bash scripting and at least one programming language (Python or Golang preferred).
- Sound knowledge of secure coding practices and configuration/secrets management.
- Knowledge of writing unit and integration tests; experience writing infrastructure unit tests, Terratest preferred.
- Solid understanding of CI/CD and of zero-downtime deployment patterns.
- Experience with automated continuous integration testing, including security testing using SAST tools.
- Experience with automated CI/CD pipeline tooling; Codefresh preferred.
- Experience creating runners and Docker images.
- Experience using version control systems such as Git; exposed to, and comfortable working on, large source code repositories in a team environment.
- Solid expertise with Git and Git workflows, working within mid-to-large (infra) product development teams.

General / Infrastructure Experience:
- Experience with cloud ops (DNS, backups, cost optimisation, capacity management, monitoring/alerting, patch management, etc.).
- Exposure to complex application environments, including containerised as well as serverless applications.
- Windows and/or Linux systems administration experience (preferred).
- Experience with Active Directory (preferred).
- Exposure to multi-cloud and hybrid infrastructure.
- Exposure to large-scale on-premise-to-cloud infrastructure migrations.
- Solid experience working with mission-critical production systems.
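The "IP addressing and subnet calculation" skill called out above is easy to demonstrate with Python's standard `ipaddress` module, e.g., carving a VPC CIDR into equal subnets (the CIDR and per-AZ layout are illustrative, not from this posting):

```python
import ipaddress

# Hypothetical layout: carve a /16 VPC CIDR into /20 subnets,
# which could then be assigned one per availability zone / tier.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=20))

print(len(subnets))              # 16 blocks: a /16 split into /20s yields 2^(20-16)
print(subnets[0])                # 10.0.0.0/20
print(subnets[0].num_addresses)  # 4096 addresses per /20 (2^(32-20))
```

Note that in an actual AWS subnet the usable count is lower, since AWS reserves five addresses per subnet; the arithmetic above is the pure CIDR view.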
Posted 1 month ago
3.0 - 7.0 years
2 - 5 Lacs
Hyderabad
Work from Office
- PySpark, Spark SQL, SQL, and Glue
- AWS cloud experience
- Good understanding of dimensional modelling
- Good understanding of DevOps, CloudOps, and DataOps, and of CI/CD, with an SRE mindset
- Understanding of Lakehouse and DW architecture
- Strong analysis and analytical skills
- Understanding of version control systems, specifically Git
- Strong in software engineering: APIs, microservices, etc.

Soft skills:
- Written and oral communication skills
- Ability to translate business needs into system requirements
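Dimensional modelling, mentioned above, centres on fact tables joined to dimension tables (a star schema). A self-contained sketch; SQLite stands in for Spark SQL purely to keep the example runnable, and the schema and rows are invented:

```python
import sqlite3

# Tiny star schema: one fact table keyed to one product dimension.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'snacks'), (2, 'drinks');
INSERT INTO fact_sales  VALUES (1, 10.0), (1, 5.0), (2, 7.5);
""")

# The classic dimensional query shape: aggregate the fact table,
# grouped by an attribute that lives on the dimension.
rows = con.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('drinks', 7.5), ('snacks', 15.0)]
```

The same SQL runs essentially unchanged as Spark SQL over DataFrames; the modelling point is that measures stay in the fact table while descriptive attributes stay in dimensions.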
Posted 1 month ago