7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.

Worley Digital
At Worley, our Digital team collaborates closely with the business to deliver efficient, technology-enabled sustainable solutions that will be transformational for Worley. This team, aptly named Worley Digital, is currently seeking talented individuals to work on a wide range of the latest technologies, including solutions based on automation and generative AI. What drives us at Worley Digital? It’s our shared passion for pushing the boundaries of technological innovation, embracing best practices, and propelling Worley to the forefront of industry advancements. If you’re naturally curious, open-minded, and a self-motivated learner - one who’s ready to invest time and effort to stay future-ready - then Worley could be your ideal workplace.

Major Accountabilities of Position
The AI/ML Architect will:
Define, design, and deliver ML architecture patterns operable in native and hybrid cloud architectures.
Collaborate with the Enterprise Architecture, Information Security, DevOps, and Data Intelligence teams to implement ML solutions.
Define data augmentation pipelines for unstructured data such as documents and engineering drawings.
Build new network architectures (CNN/LSTM/RCNN) or develop wrappers for pre-trained models, and assess whether transfer learning fits a given problem (a minimal sketch of this pattern appears at the end of this posting).
Research, analyze, recommend, and select technical approaches to address challenging development and data-integration problems related to ML model training and deployment in enterprise applications.
Perform research to identify emerging technologies (e.g., generative AI) and trends that may affect Data Science/ML lifecycle management across the enterprise application portfolio.
Design and deploy AI/ML models in real-world environments and integrate them into large-scale enterprise applications using cloud-native or hybrid technologies.
Demonstrated experience developing best practices and recommendations around tools and technologies for ML lifecycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches, and model monitoring and tuning.

Knowledge / Experience / Competencies Required
IT Skills & Experience (in priority order):
Hands-on programming and architecture capabilities in Python.
Demonstrated technical expertise in architecting solutions around AI, ML, deep learning, and generative AI technologies.
Experience implementing and deploying machine learning solutions using a variety of models, such as GPT-4, Llama 2, Mistral AI, text-embedding-ada, linear/logistic regression, support vector machines, (deep) neural networks, topic modeling, and game theory.
Understanding of the NVIDIA NeMo enterprise suite.
Expertise in popular deep learning frameworks such as TensorFlow, PyTorch, and Keras for building, training, and deploying neural network models.
Experience in AI solution development with external SaaS products such as Azure OCR.
Experience with AI/ML components such as Azure ML Studio, JupyterHub, TensorFlow, and scikit-learn.
Hands-on knowledge of API frameworks.
Familiarity with the transformer architecture and its applications in natural language processing (NLP), such as machine translation, text summarization, and question-answering systems.
Expertise in designing and implementing CNNs for computer vision tasks such as image classification, object detection, and semantic segmentation.
Hands-on experience with RDBMS, NoSQL, and big data stores such as Elasticsearch and Cassandra.
Experience with open-source software.
Experience using cognitive APIs and machine learning studios on the cloud.
Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN).
Familiarity with GPU computing and tools like CUDA and cuDNN to accelerate deep learning computations and reduce training times.
Understanding of the complete AI/ML project lifecycle.
Understanding of data structures, data modelling, and software architecture.
Good understanding of containerization and experience working with Docker and AKS.

People Skills
Clear and concise communication is vital for explaining complex machine learning concepts to non-technical stakeholders, presenting results, and collaborating with cross-functional teams.
Ability to work independently and as part of a team.
Being open to new ideas, embracing change, and adapting to evolving technologies and methodologies are crucial for staying relevant and effective in the rapidly changing field of machine learning.
Cooperative mindset, flexibility, and the ability to work effectively in a team.
Professional and open communication with all internal and external interfaces.
Balancing multiple projects, prioritizing tasks, and meeting deadlines while maintaining a high standard of work requires effective time management and organizational skills.
Accurately report to management in a timely and effective manner.

Other Skills
Outstanding analytical and problem-solving skills.

Education – Qualifications, Accreditation, Training
Master’s in Information Technology, Big Data, Data Science, AI, or Computer Science.
Minimum 4 and maximum 7 years’ experience as an AI/ML Architect on AI and ML projects.

Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
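For illustration only, here is a minimal sketch of the pre-trained-model wrapping and transfer-learning pattern referenced in the accountabilities above. It assumes PyTorch and torchvision are available; the class name, backbone choice, and number of target classes are hypothetical placeholders, not part of the role definition.

```python
# Minimal transfer-learning wrapper around a pre-trained CNN (illustrative sketch).
# Assumes PyTorch and torchvision; the class name and number of target classes
# are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models


class DrawingClassifier(nn.Module):
    """Wraps a pre-trained ResNet backbone and replaces its head for a new task,
    e.g. classifying categories of engineering drawings."""

    def __init__(self, num_classes: int = 5, freeze_backbone: bool = True):
        super().__init__()
        self.backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        if freeze_backbone:
            for param in self.backbone.parameters():
                param.requires_grad = False  # feasibility check: train only the new head
        in_features = self.backbone.fc.in_features
        self.backbone.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)


if __name__ == "__main__":
    model = DrawingClassifier(num_classes=5)
    dummy = torch.randn(2, 3, 224, 224)  # two RGB images at the ResNet input size
    print(model(dummy).shape)            # torch.Size([2, 5])
```

Freezing the backbone and training only the new head is the usual first feasibility check before deciding whether fuller fine-tuning is worth the training cost.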
Company: Worley
Primary Location: IND-MM-Navi Mumbai
Other Locations: IND-KR-Bangalore, IND-MM-Mumbai, IND-WB-Kolkata, IND-MM-Pune, IND-TN-Chennai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jul 4, 2025
Unposting Date: Aug 3, 2025
Reporting Manager Title: Head of Data Intelligence
Duration of Contract: 0
Posted 2 days ago
8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We’re Hiring: Team Lead – .NET Developer
Locations: Bangalore | Coimbatore | Mysore
Experience: 8+ Years | Work from Office
Are you a seasoned .NET professional with a passion for both coding and leading teams? We’re looking for a Team Lead – .NET who thrives in a hands-on technical role while guiding and mentoring a team to success.
What You’ll Bring:
1. Strong expertise in ASP.NET, MVC frameworks, and Entity Framework
2. Proficiency in C#, JavaScript, CSS, AJAX, and RESTful APIs
3. Experience with Azure and SQL Server (including stored procedures)
4. Solid understanding of .NET 3.5/4.0, Visual Studio, and client-side scripting
5. Proven ability to build scalable web applications using MVC/Razor
6. Leadership skills to manage and inspire a development team
Interested? Send your CV to gayathri.j@aezion.com
Join us to lead impactful projects, grow your career, and make a difference through technology.
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.

Team Guidance & Mentorship: Provide technical leadership and mentorship to a team of full-stack developers, fostering a collaborative and innovative environment.
Project Management: Oversee project planning, tracking, and execution to ensure timely delivery of high-quality software solutions.
Code Reviews: Conduct regular code reviews to ensure adherence to best practices, coding standards, and performance optimization.
Technical Strategy: Collaborate with stakeholders to define technical strategies and architectures that align with business goals.
Conflict Resolution: Address and resolve technical issues, team conflicts, and other challenges to maintain a productive workflow.
Training & Development: Identify training needs and facilitate knowledge-sharing sessions to keep the team updated with the latest technologies and methodologies.
Coding: Write clean, scalable, and maintainable code in .NET and Angular to develop robust web applications.
Feature Development: Implement new features and functionalities as per project requirements.
Debugging & Troubleshooting: Identify, diagnose, and resolve complex technical issues in both backend and frontend components.
Testing & Validation: Perform unit and integration testing to ensure the reliability and performance of the software.
Documentation: Maintain comprehensive documentation for code, APIs, and system configurations.
Continuous Improvement: Stay updated with emerging technologies and industry trends to continuously improve development practices.

Knowledge / Experience / Competencies Required
IT Skills & Experience:
Backend Technologies: Proficiency in .NET Core, C#, and ASP.NET. Experience with Entity Framework, LINQ, and other ORM frameworks. Knowledge of RESTful APIs, Web API, and SignalR. Experience with microservices architecture and containerization (Docker, Kubernetes). Understanding of asynchronous programming and concurrency.
Frontend Technologies: Strong experience with Angular, TypeScript, and JavaScript. Proficiency in HTML5, CSS3, SCSS, and responsive design. Familiarity with frontend build tools (Webpack, Angular CLI) and state management libraries (NgRx).
Database Technologies: Proficient in SQL Server, including T-SQL and stored procedures. Experience with NoSQL databases like MongoDB.
Cloud Technologies: Experience with cloud platforms such as Azure or AWS. Knowledge of cloud services like Azure Functions, App Services, and AWS Lambda.
DevOps & CI/CD: Familiarity with version control systems (e.g., Git) and CI/CD pipelines (Azure DevOps, Jenkins). Understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
People Skills: Excellent communication and interpersonal skills. Strong problem-solving abilities and attention to detail. Ability to work in a fast-paced, agile environment. Leadership qualities with a proactive and positive attitude.
Preferred Qualifications: Experience with cloud platforms such as Azure or AWS. Knowledge of microservices architecture and containerization (e.g., Docker, Kubernetes). Familiarity with Agile/Scrum methodologies.

Qualifications
Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
4 to 6 years of experience with .NET and Angular, including a minimum of 2 years in a leadership role.

Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Company: Worley
Primary Location: IND-MM-Mumbai
Other Locations: IND-KR-Bangalore, IND-MM-Pune, IND-TN-Chennai, IND-MM-Navi Mumbai
Job: Digital Solutions
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jul 4, 2025
Unposting Date: Aug 3, 2025
Reporting Manager Title: Manager
Posted 2 days ago
5.0 years
3 - 9 Lacs
Hyderābād
Remote
Senior Data Scientist
Hyderabad, Telangana, India
Date posted: Jul 31, 2025
Job number: 1854071
Work site: Up to 100% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Data Science
Employment type: Full-Time

Overview
Azure is the fastest-growing business in Microsoft’s history and is the foundation of Microsoft’s commercial Cloud Services. We are a part of the Azure Core team that builds and manages the core platform across various services. We have an exciting opportunity for you to innovate and shape the world’s computers, and we encourage you to apply and learn more! As a Senior Data Scientist, you will lead high-impact research, build predictive models, and be responsible for collaborating to deliver the next generation of our cloud capacity management technologies, a critical investment area for Microsoft Azure. Your work will be relied upon by millions of customers globally, and you will have learning opportunities and challenges around high-scale distributed systems. You’ll identify opportunities, design and scope new data projects, and apply advanced machine learning and statistical techniques to real-world challenges. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment for all employees to positively impact our culture every day.

Qualifications
Required/Minimum Qualifications:
5+ years customer-facing, project-delivery experience, professional services, and/or consulting experience.
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 1+ year(s) data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience.
Other Requirements:
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Additional or Preferred Qualifications:
Doctorate in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 3+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Master's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 5+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR Bachelor's Degree in Data Science, Mathematics, Statistics, Econometrics, Economics, Operations Research, Computer Science, or related field AND 7+ years data-science experience (e.g., managing structured and unstructured data, applying statistical techniques and reporting results) OR equivalent experience.
#azurecorejobs

Responsibilities
You will acquire the data necessary for your project plan and develop usable data sets for modeling. You’ll also update internal best practices for data collection and preparation, and contribute to data integrity conversations with customers.
You will evaluate your team’s models and recommend improvements as necessary, drive best practices for models, and develop operational models that run at scale. You’ll also conduct thorough reviews of data analysis and modeling techniques, and identify and invent new evaluation methods.
You will research and maintain a deep knowledge of the industry, including trends and technologies, so that you can identify strategy opportunities and contribute to thought leadership best practices. You’ll also write extensible code that spans multiple features, and develop expertise in proper debugging techniques.
You will define business, customer, and solution strategy goals, and partner with other teams to identify and explore new opportunities. You’ll also apply a customer-oriented focus to understand their needs, and help drive realistic customer expectations.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
Industry leading healthcare
Educational resources
Discounts on products and services
Savings and investments
Maternity and paternity leave
Generous time away
Giving programs
Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 days ago
5.0 years
1 - 3 Lacs
Hyderābād
On-site
Job Description
Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.
Responsibilities
Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform (a minimal example of such a pipeline test appears at the end of this posting).
Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
Support the development and automation of operational policies and procedures, improving efficiency and resilience.
Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
Utilize technical expertise in cloud and data operations to support service reliability and scalability.
Qualifications
5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
Understanding of operational excellence in complex, high-availability data environments.
Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
Basic understanding of data management concepts, including master data management, data governance, and analytics.
Knowledge of data acquisition, data catalogs, data standards, and data management tools.
Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
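As an illustration of the automated pipeline testing mentioned in the responsibilities above, here is a minimal sketch of data-quality checks that could run in a CI stage. It assumes pandas and pytest; the file path, column names, and thresholds are hypothetical placeholders rather than the actual pipeline contract.

```python
# Illustrative data-quality checks of the kind run in a pipeline's CI stage.
# pandas/pytest are assumed; path, column names, and thresholds are placeholders.
import pandas as pd
import pytest


@pytest.fixture
def daily_sales() -> pd.DataFrame:
    # In a real pipeline this would read the staged output of an ADF/Databricks job.
    return pd.read_parquet("staging/daily_sales.parquet")


def test_no_null_keys(daily_sales):
    assert daily_sales["order_id"].notna().all(), "order_id must never be null"


def test_row_count_within_expected_band(daily_sales):
    # Guard against silently truncated or duplicated loads.
    assert 1_000 <= len(daily_sales) <= 10_000_000


def test_freshness(daily_sales):
    # Fail the build if the latest load is more than a day old.
    latest = pd.to_datetime(daily_sales["load_date"]).max()
    assert (pd.Timestamp.now() - latest).days <= 1
```

Wiring a test module like this into an Azure DevOps pipeline stage is one common way to make data-quality regressions block a release rather than surface in production.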
Posted 2 days ago
8.0 - 13.0 years
2 - 4 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Specialist IS Architect

What you will do
Let’s do this. Let’s change the world. In this vital role, you will be responsible for designing and implementing information system architectures to support business needs. You will analyze requirements, develop architectural designs, evaluate technology solutions, and ensure alignment with industry best practices, governance, and standards. Your expertise in system architecture, strong problem-solving abilities, and ability to communicate complex technical concepts will enable you to deliver robust and scalable IT solutions.
Architect, administer, manage, and maintain Amgen’s identity provisioning environment, and support other identity-related systems used for authentication and authorization.
Align new and existing applications and systems to the IAM/RBAC framework.
Provide technical and governance oversight to all IdM projects.
Serve as the technical architect in the analysis, design, and implementation of all IdM-related projects, and be responsible for their successful delivery while maintaining the overall security and integrity of the solution.
Work with project teams to provide insights about architectural standards and information security best practices.
Monitor operational and performance statistics for managed systems to ensure reliability and availability, perform preventative maintenance, and automate routine procedures. Create KPIs to monitor growth statistics and resource forecasts.
Develop and maintain the identity management architecture to ensure secure and efficient access controls.
Create and maintain documentation for identity management processes, policies, and system architecture.
Document incident response and remediation procedures for identity-related issues.
Design provisioning solutions that align with business requirements and security standards.
Stay updated on industry trends, tools, and technologies related to identity and access management.
Evaluate and recommend new solutions and technologies to improve identity management practices.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The Specialist IS Architect professional we seek should possess these qualifications.
Basic Qualifications:
Doctorate degree / Master’s degree / Bachelor’s degree and 8 to 13 years of Information Systems experience or a related field.
Experience integrating SailPoint with various applications, both on-premises and cloud-based.
Strong understanding of identity governance concepts, including role-based access control (RBAC), access certification, and provisioning processes.
Proficiency in identity management technologies (e.g., Okta, Azure AD, SailPoint).
Understanding of provisioning protocols (e.g., SCIM, SAML, OAuth, OpenID Connect); a minimal SCIM provisioning sketch appears at the end of this posting.
Experience with APIs and integration techniques to connect identity management systems with various applications and services.
Knowledge of directory services (e.g., LDAP, Active Directory).
Sharp learning agility, problem-solving, and analytical thinking.
Familiarity with security frameworks (e.g., NIST, ISO 27001) and compliance regulations (e.g., GDPR, HIPAA).
Ability to conduct risk assessments and vulnerability analysis.
Understanding of user lifecycle management processes, including onboarding, offboarding, and role-based access control.
Preferred Qualifications:
Scripting skills such as PowerShell or Python.
Experience with IS Security.
Experience with Agile methodology.
Proficiency in scripting and automation is a plus.
Professional Certifications:
Microsoft, GCP or AWS Cloud (preferred)
Identity Provisioning or Security Certification (preferred)
SailPoint Certification (preferred)
Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.
Working Hours: This role may occasionally have responsibilities outside of business hours.
Travel: International and/or domestic travel up to 10% may be essential.
Work Shift: Rotational

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
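For illustration of the provisioning protocols named in the qualifications above, here is a minimal sketch of a SCIM 2.0 user-provisioning call of the kind such integrations rely on. It assumes the Python `requests` library; the endpoint URL and bearer token are placeholders, and a real integration would obtain credentials through the identity provider's OAuth flow rather than hard-coding them.

```python
# Illustrative SCIM 2.0 user-provisioning call (RFC 7644-style payload).
# The endpoint URL and bearer token are placeholders, not a real tenant.
import requests

SCIM_BASE = "https://example-idp.invalid/scim/v2"  # placeholder tenant URL
TOKEN = "<bearer-token>"                            # placeholder credential


def provision_user(user_name: str, given_name: str, family_name: str) -> dict:
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "name": {"givenName": given_name, "familyName": family_name},
        "active": True,
    }
    resp = requests.post(
        f"{SCIM_BASE}/Users",
        json=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/scim+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # created user resource, including the provider-assigned id
```

The same payload shape, with the provider-assigned id, is reused for later PATCH/PUT calls during the user's lifecycle (role changes, deactivation at offboarding).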
Posted 2 days ago
5.0 years
0 Lacs
Hyderābād
On-site
Job Description
Overview
We are seeking a skilled Associate Manager – AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.
Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.
Responsibilities
Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow (a minimal model-versioning sketch appears at the end of this posting).
Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.
Qualifications
5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
5+ years of experience working within cross-functional IT or data operations teams.
2+ years of experience in a leadership or team coordination role within an operational or support environment.
Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
Knowledge of master data management concepts, data governance, and analytics.
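As an illustration of the model-versioning workflow mentioned in the responsibilities above, here is a minimal sketch using MLflow's tracking and model registry. scikit-learn and MLflow are assumed; the tracking URI, experiment path, and registered model name are placeholders, not the actual workspace configuration.

```python
# Illustrative MLflow tracking + model-registry usage for model versioning.
# Tracking URI, experiment path, and registered model name are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("databricks")         # placeholder: point at the team's tracking server
mlflow.set_experiment("/Shared/aiops-demo")   # placeholder experiment path

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)
    # Registering the model creates a new version that downstream CD stages can promote.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="aiops-anomaly-classifier",  # placeholder name
    )
```

Each registered version carries its metrics and lineage, which is what drift-detection jobs and CI/CD promotion gates compare against before rolling a new model into production.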
Posted 2 days ago
2.0 years
0 Lacs
Hyderābād
On-site
Job Description
Overview
The Data Science Team works on developing Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users. This will provide you the correct visibility and understanding of the criticality of your developments.
Responsibilities
Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and Machine Learning models in scope.
Active contributor to code and development in projects and services.
Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption.
Partner with ML engineers working on industrialization.
Communicate with business stakeholders in the process of service design, training, and knowledge transfer.
Support large-scale experimentation and build data-driven models.
Refine requirements into modelling problems.
Influence product teams through data-based recommendations.
Research state-of-the-art methodologies.
Create documentation for learnings and knowledge transfer.
Create reusable packages or libraries.
Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards.
Leverage big data technologies to help process data and build scaled data pipelines (batch to real time).
Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines.
Automate ML model deployments.
Qualifications
BE/B.Tech in Computer Science, Maths, or related technical fields.
Overall 2-4 years of experience working as a Data Scientist.
2+ years’ experience building solutions in the commercial or supply chain space.
2+ years working in a team to deliver production-level analytic solutions.
Fluent in git (version control). Understanding of Jenkins and Docker is a plus.
Fluent in SQL syntax.
2+ years’ experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems.
2+ years’ experience developing statistical/ML models for business problems with industry tools, with a primary focus on Python or PySpark development.
Data Science – hands-on experience and strong knowledge of building supervised and unsupervised machine learning models. Knowledge of time series / demand forecast models is a plus.
Programming Skills – hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL.
Statistics – good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators.
Cloud (Azure) – experience in Databricks and ADF is desirable.
Familiarity with Spark, Hive, and Pig is an added advantage.
Business storytelling and communicating data insights in a business-consumable format. Fluent in one visualization tool.
Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities.
Experience with Agile methodology for teamwork and analytics ‘product’ creation.
Experience in Reinforcement Learning is a plus.
Experience in simulation and optimization problems in any space is a plus.
Experience with Bayesian methods is a plus.
Experience with causal inference is a plus.
Experience with NLP is a plus.
Experience with Responsible AI is a plus.
Experience with distributed machine learning is a plus.
Experience in DevOps, with hands-on experience with one or more cloud service providers (AWS, GCP, Azure preferred).
Model deployment experience is a plus.
Experience with version control systems like GitHub and CI/CD tools.
Experience in exploratory data analysis.
Knowledge of MLOps/DevOps and deploying ML models is preferred.
Experience using MLflow, Kubeflow, etc. is preferred.
Experience executing and contributing to MLOps automation infrastructure is good to have.
Exceptional analytical and problem-solving skills.
Stakeholder engagement with BUs and vendors.
Experience building statistical models in the retail or supply chain space is a plus.
Posted 2 days ago
9.0 years
7 - 9 Lacs
Hyderābād
Remote
Job Description
Overview
The primary focus is to lead development work within the Azure Data Lake environment and other related ETL technologies, with responsibility for ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. The role will lead key data lake projects and resources, including innovation-related initiatives (e.g., adoption of technologies like Databricks, Presto, Denodo, Python, Azure Data Factory; database encryption; enabling rapid experimentation, etc.). This role will also have L3 and release management responsibilities for ETL processes.
Responsibilities
Lead delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget.
Drive solution design and build to ensure scalability, performance, and reuse of data and other components.
Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards.
Manage work intake, prioritization, and release timing, balancing demand and available resources. Ensure tactical initiatives are aligned with the strategic vision and business needs.
Oversee coordination and partnerships with Business Relationship Managers, Architecture, and IT services teams to develop and maintain EDW and data lake best practices and standards, along with appropriate quality assurance policies and procedures.
May lead a team of employee and contract resources to meet build requirements: set priorities for the team to ensure task completion; coordinate work activities with other IT services and business teams; hold the team accountable for milestone deliverables.
Provide L3 support for existing applications.
Release management.
Qualifications
Experience
Bachelor’s degree in Computer Science, MIS, Business Management, or related field.
9+ years’ experience in Information Technology or Business Relationship Management.
5+ years’ experience in Data Warehouse / Azure Data Lake.
3 years’ experience in Azure Data Lake.
2 years’ experience in project management.
Technical Skills
Thorough knowledge of data warehousing / data lake concepts.
Hands-on experience with tools like Azure Data Factory, Databricks, PySpark, and other data management tools on Azure (a brief PySpark sketch of a typical lake transformation follows this posting).
Proven experience in managing Data, BI, or Analytics projects.
Solutions delivery experience - expertise in system development lifecycle, integration, and sustainability.
Experience in data modeling or database design.
Non-Technical Skills
Excellent remote collaboration skills.
Experience working in a matrix organization with diverse priorities.
Experience dealing with and managing multiple vendors.
Exceptional written and verbal communication skills along with collaboration and listening skills.
Ability to work with agile delivery methodologies.
Ability to ideate requirements and design iteratively with business partners without formal requirements documentation.
Ability to budget resources and funding to meet project deliverables.
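For illustration of the Databricks/PySpark work referenced in the technical skills above, here is a minimal sketch of a batch transformation from a raw to a curated lake layer. The storage paths, container names, and column names are placeholders for an actual lake layout.

```python
# Illustrative PySpark batch transformation for a data-lake curated layer.
# Storage paths and column names are placeholders for the real lake layout.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales-curation").getOrCreate()

# Read the raw zone (placeholder ADLS Gen2 path).
raw = spark.read.parquet("abfss://raw@<storage-account>.dfs.core.windows.net/sales/")

# Deduplicate, derive a date column, and aggregate to a daily grain.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("net_amount").alias("daily_net_amount"))
)

# Write the curated zone, partitioned for downstream query pruning.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("abfss://curated@<storage-account>.dfs.core.windows.net/daily_sales/"))
```

A notebook or job wrapping this kind of transformation is typically orchestrated by an Azure Data Factory pipeline, which handles scheduling, retries, and dependency ordering.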
Posted 2 days ago
10.0 years
5 - 9 Lacs
Hyderābād
On-site
Job Description
Overview
PepsiCo is looking for an experienced Active Directory and Azure AD/Entra ID SME to help drive the enterprise directory strategy forward for the Identity and Access Management organization. As a member of the Directory Services team, the Directory Services Engineer will be responsible for architecting, designing, developing, engineering, deploying, and supporting comprehensive solutions based on unique and complex requirements and problems related to identity and directory services. The Engineer will also be responsible for identifying opportunities to automate tasks, simplify processes, and improve efficiencies in the environment. The Engineer is skilled in troubleshooting complex technical issues, works closely with enterprise architects to ensure adequate security solutions are in place to sufficiently mitigate identified risks while meeting business objectives and regulatory requirements, and provides technical leadership on complex projects.
Responsibilities
Provide subject matter expertise in solutioning and implementing AD/Azure AD requirements.
Provide advanced architecture and engineering skills to automate and administer AD/Azure AD and compliance requirements.
Drive planning and execution of Directory Services roadmaps and technology enhancements.
Create and maintain standards surrounding documentation related to Directory Services processes, procedures, and infrastructure.
Assess current applications and architecture to ensure current implementations align with industry guidelines, best practices, and management-approved standards.
Collaborate with Solution Architects, application development teams, Cybersecurity staff, and the Infrastructure team to define the enterprise IAM strategy.
Provide level 3 production support to help diagnose and troubleshoot production issues.
Adapt the architecture to evolving security conditions and support security guidelines.
Develop and deliver applicable documentation, training, and knowledge transfer to both internal and external stakeholders as necessary.
Foster the Agile DevOps culture through the latest toolset to improve customer satisfaction through rapid, continuous delivery.
Analyze, design, and support a highly complex, enterprise-level Active Directory service in a hybrid on-premises and cloud-hosted environment.
Manage enterprise identity cloud directories, including Microsoft AD and Azure AD.
Translate business needs into workable technology solutions.
Participate in or lead troubleshooting and incident resolution of complex, high-severity incidents.
Develop automated solutions using scripts, pipelines, and cloud-based serverless computing platforms.
Develop detailed architecture, standards, design, and implementation documentation.
Analyze the current Directory Services environment to identify technical and operational opportunities and develop continuous improvement action plans.
Participate in disaster recovery, capacity planning, performance monitoring, and maintenance to ensure high availability.
Build security models, manage Azure AD infrastructure, and drive application migrations and integrations. Also support PAM solutions and infrastructure.
Qualifications
10+ years in IT with a focus on security and IAM.
9+ years of experience with engineering and design of Active Directory / Entra ID.
5+ years of experience with engineering, design, and setup of Azure AD / Entra ID.
9+ years supporting Active Directory.
5+ years supporting Azure Active Directory.
3+ years building and managing PAM solutions like CyberArk PAM.
Bachelor’s in Engineering, Computer Science, or a related field.
Experience with developing, planning, and implementing a large-scale, enterprise-level Active Directory and Azure AD infrastructure, including but not limited to the following components: domain controller deployment; securing Active Directory; advanced GPO settings; advanced replication management; advanced auditing techniques.
Experience working with large-scale, enterprise-level LDAP / Active Directory / Azure AD / Entra ID environments.
Hands-on experience with building AD, Azure AD, and application security models.
Experience in providing advanced architecture and engineering skills to automate and administer AD/Azure AD and compliance requirements.
Knowledge of programming/scripting disciplines such as VBScript and PowerShell (an equivalent Python sketch against Microsoft Graph follows this posting).
Overall knowledge of security best practices.
Overall knowledge of Identity and Access solutions.
Good understanding of the latest security principles, like zero trust and passwordless authentication, to implement new standards in the authentication model.
Experience with governance and compliance, including SOX controls.
Experience building and managing PKI and supporting infrastructure, including HSM, EKCLM, CA, etc.
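The directory automation described above is most often scripted in PowerShell; as an equivalent illustration in Python, here is a minimal sketch that lists disabled Entra ID (Azure AD) accounts through Microsoft Graph. It assumes the `msal` and `requests` libraries and an app registration with the appropriate Graph permissions; the tenant ID, client ID, and secret are placeholders, and a production script would pull credentials from a vault rather than hard-coding them.

```python
# Illustrative directory-automation sketch: list disabled Entra ID accounts via
# Microsoft Graph. Tenant, client ID, and secret are placeholders only.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" not in token:
    raise RuntimeError(token.get("error_description", "token acquisition failed"))

resp = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    params={
        "$filter": "accountEnabled eq false",
        "$select": "displayName,userPrincipalName",
    },
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
resp.raise_for_status()
for user in resp.json().get("value", []):
    print(user["userPrincipalName"], "-", user["displayName"])
```

The same pattern (app-only token plus a Graph query) underpins most scheduled directory hygiene jobs, whether they run in Azure Automation, a pipeline, or a serverless function.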
Posted 2 days ago
6.0 years
0 Lacs
Hyderābād
On-site
Job Description:
Essential Job Functions:
Support cloud engineering teams in project delivery and management.
Collaborate with senior managers to meet project goals and objectives.
Assist in the implementation and optimization of cloud solutions.
Mentor and guide junior team members to enhance their skills and knowledge.
Contribute to project documentation and reporting.
Participate in cloud cost optimization and performance enhancement efforts.
Foster a culture of continuous learning and best practice adoption.
Collaborate with cross-functional teams to ensure seamless integration of cloud solutions.
Basic Qualifications:
Bachelor's degree in a relevant field or equivalent combination of education and experience.
Typically, 6+ years of relevant work experience in industry, with a minimum of 2+ years in a similar role.
Experience in software engineering or cloud engineering.
Proficiency in one or more software languages and development methodologies.
Strong understanding of cloud technologies.
Effective communication and teamwork skills.
Other Qualifications:
Advanced degree in a related field is a plus.
Relevant cloud certifications and experience with cloud providers (e.g., AWS, Azure) are a plus.
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive.
Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
Posted 2 days ago
8.0 years
8 - 10 Lacs
Hyderābād
On-site
Job Description
Overview
We are seeking a self-driven, inquisitive, and curious Database Site Reliability Engineer (SRE) who drives reliability, performance, and availability, including data security and access control, of the database systems leveraged by frontend applications and business transactions, across both SQL and NoSQL database systems. This is a critical enabler for achieving high resiliency during operations and for continuously improving resiliency through design during the software development lifecycle. The SRE database support engineer is an integral part of the global team, whose main purpose is to provide a delightful customer experience for users of the global consumer, commercial, supply chain, and enablement functions in the PepsiCo digital products application portfolio of 260+ applications, enabling a full SRE practice of incident prevention and proactive resolution. The scope of this role is focused on the modern-architected, cloud-native application portfolio. It requires a blend of technical expertise in database administration and engineering, SRE tools, modern application architecture, and IT operations, along with analytics and influencing skills.
Responsibilities
Reporting directly to the Modern IT Operations SRE Enablement Associate Director, this role is responsible for enabling and executing pre-emptive diagnosis of PepsiCo DPA applications to deliver the service performance, reliability, and availability expected by our customers and internal groups.
Ensure database availability, performance, and security in production environments.
Instrument and monitor database systems, and proactively collaborate with development teams to optimize schema design, indexing, and query plans.
Automate tasks using scripts or infrastructure-as-code tools.
Understanding of cloud infrastructure and services.
Ability to design and implement database replication and failover solutions.
Provide insights, and troubleshoot and resolve database-related incidents and outages.
Stay up to date with emerging database technologies and best practices.
Work closely with customer-facing support teams to evolve and empower them with SRE insights.
Ability to collaborate effectively with development and operations teams.
Participate in on-call support, orchestrate blameless post-mortems, and encourage the practice within the organization.
Provide input to the definition, collection, and analysis of data on products, systems, and their interactions, towards resiliency of the IT ecosystem, especially where it impacts customer satisfaction, revenue, or IT productivity.
Actively engage and drive AIOps adoption across teams.
Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
8–12 years of professional experience as a database administrator and/or database SRE with application knowledge.
Hands-on experience with Microsoft SQL Server, PostgreSQL, MySQL, and at least one leading NoSQL technology such as MongoDB, Cassandra, or Couchbase.
Proficiency in writing complex SQL queries, stored procedures, and functions.
Experience building self-heal scripts or remediation runbooks (Python, PowerShell, Bash), including Azure Logic Apps and Azure Functions, and integrating with ServiceNow and AppDynamics APIs; a minimal remediation-runbook sketch appears at the end of this posting.
Exposure to replication, clustering, and high-availability setups.
Experience with cloud platforms (AWS, Azure, Google Cloud Spanner, etc.).
Solid understanding of database security, auditing, and compliance requirements.
Familiarity with DevOps tools and practices (CI/CD, version control, infrastructure automation).
Excellent problem-solving and analytical skills.
Strong communication and documentation skills.
Preferred Qualifications:
Certifications such as Microsoft Certified: Azure Database Administrator Associate, MongoDB Certified DBA, or similar.
Experience with cloud platforms (AWS RDS, Azure SQL, Google Cloud Spanner, etc.).
Exposure to containerized database deployments using Docker or Kubernetes.
Leadership and Soft Skills:
Driving for Results: Demonstrates perseverance and resilience in the pursuit of goals. Confronts and works to resolve tough issues. Exhibits a “can-do” attitude and a willingness to take on significant challenges.
Decision Making: Quickly analyses complex problems to find actionable, pragmatic solutions. Sees connections in data, events, trends, etc. Consistently works against the right priorities.
Collaborating: Collaborates well with others to deliver results. Keeps others informed so there are no unnecessary surprises. Effectively listens to and understands what other people are saying.
Communicating and Influencing: Ability to build convincing, persuasive, and logical storyboards. Strong executive presence. Able to communicate effectively and succinctly, both verbally and on paper.
Motivating and Inspiring Others: Demonstrates a sense of passion, enjoyment, and pride about their work. Demonstrates a positive attitude in the workplace. Embraces and adapts well to change. Creates a work environment that makes work rewarding and enjoyable.
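As a small illustration of the self-heal scripting and remediation runbooks mentioned in the qualifications above, here is a minimal sketch that flags long-running blocking sessions on SQL Server. It assumes the `pyodbc` driver; the connection string and blocking threshold are placeholders, and a production runbook would also raise a ServiceNow incident and gate any corrective action behind approval and audit logging.

```python
# Illustrative self-heal runbook: detect long-running blocking sessions on SQL Server
# and flag them for remediation. Connection string and threshold are placeholders.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<server>;DATABASE=master;Trusted_Connection=yes"
)
BLOCKING_SECONDS_THRESHOLD = 300  # assumed tolerance before intervening

QUERY = """
SELECT blocking_session_id, session_id, wait_time / 1000 AS wait_seconds
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
"""


def find_blockers():
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(QUERY).fetchall()
    return [r for r in rows if r.wait_seconds >= BLOCKING_SECONDS_THRESHOLD]


if __name__ == "__main__":
    for row in find_blockers():
        # A real runbook would decide here whether to escalate or remediate,
        # behind an approval gate and with full audit logging.
        print(
            f"Session {row.session_id} blocked by {row.blocking_session_id} "
            f"for {row.wait_seconds}s - flagged for remediation"
        )
```

Packaged as an Azure Function or Automation runbook and triggered by a monitoring alert, a check like this is the building block of the proactive-resolution model the role describes.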
Posted 2 days ago
7.0 - 9.0 years
2 - 6 Lacs
Hyderābād
On-site
Job Description Overview
This role is designed for an experienced Business Analyst who will play a pivotal part in driving data-driven decision-making and process optimization for the North America Data Product Management team. The ideal candidate will combine advanced analytics skills, deep SQL expertise, and practical data engineering knowledge with a strong understanding of the FMCG domain. You will work cross-functionally to transform business requirements into actionable insights and scalable solutions, supporting both strategic and operational objectives.
Responsibilities
Business Process Analysis & Optimization: Analyze existing business processes, identify improvement opportunities, and recommend solutions that enhance efficiency, reduce costs, and drive growth within the beverages sector. Collaborate with stakeholders to map and document end-to-end business processes and data flows.
Data Analysis & Reporting: Design, write, and optimize complex SQL queries to extract, manipulate, and analyze large datasets from multiple sources (an illustrative query sketch follows this posting). Develop and maintain dashboards, reports, and KPIs that provide actionable insights to business leaders and operational teams.
Requirements Gathering & Solution Design: Engage with business stakeholders to gather, document, and prioritize business and functional requirements for analytics, reporting, and data engineering projects. Translate business needs into technical specifications for development teams, ensuring alignment with business goals.
Data Engineering Support: Work closely with data engineering teams to support the design, development, and maintenance of robust data pipelines and data models. Participate in data migration, integration, and transformation projects, ensuring data quality and integrity throughout.
Domain Expertise & Stakeholder Engagement: Leverage deep domain knowledge of the beverages industry to provide context for data analysis, interpret trends, and recommend relevant business actions. Act as a trusted advisor to business partners, fostering strong relationships and ensuring solutions are tailored to sector needs.
Continuous Improvement & Innovation: Stay up to date with industry trends, best practices, and new technologies in analytics, data engineering, and the beverages sector. Proactively identify and champion opportunities for process automation, digitalization, and innovation.
Qualifications
Education: Bachelor’s or Master’s degree in Business, Computer Science, Engineering, Statistics, or a related field. Experience: 7–9 years in business analysis, data analytics, or a related field within the consumer goods, beverages, or FMCG industry. SQL Expertise: Advanced proficiency in SQL for data extraction, manipulation, and analysis. Data Engineering: Experience working with data pipelines, ETL processes, and data modeling (hands-on or in close partnership with data engineering teams). Domain Knowledge: Strong understanding of the beverages industry, including market dynamics, supply chain, sales, and marketing operations. Analytical Thinking: Ability to synthesize complex data from multiple sources, identify trends, and provide clear, actionable recommendations. Communication: Excellent written and verbal communication skills; able to translate technical concepts for non-technical stakeholders and vice versa. Stakeholder Management: Proven ability to work cross-functionally, manage multiple priorities, and build strong relationships with business and technical teams. 
Problem-Solving: Solution-oriented mindset with a track record of driving process improvements and delivering business value. Preferred Qualifications Experience with data visualization tools (e.g., Power BI, Tableau). Familiarity with cloud data platforms (e.g., Azure, AWS, GCP). Knowledge of Python or R for data analysis (a plus). Previous experience in a data product or digital transformation environment.
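For illustration only: a small example of the SQL-driven reporting described above, assuming a hypothetical sales table (order_date, region, net_revenue) in a local SQLite database; the same query pattern would typically run against the enterprise warehouse.

    # Illustrative KPI query: monthly revenue and order count by region.
    # Table and column names are hypothetical.
    import sqlite3
    import pandas as pd

    QUERY = """
        SELECT strftime('%Y-%m', order_date) AS month,
               region,
               SUM(net_revenue)              AS revenue,
               COUNT(*)                      AS order_count
        FROM sales
        WHERE order_date >= :start_date
        GROUP BY month, region
        ORDER BY month, revenue DESC;
    """

    def monthly_revenue_by_region(db_path: str, start_date: str) -> pd.DataFrame:
        """Return monthly revenue per region as a DataFrame, ready for a dashboard."""
        with sqlite3.connect(db_path) as conn:
            return pd.read_sql_query(QUERY, conn, params={"start_date": start_date})

    if __name__ == "__main__":
        df = monthly_revenue_by_region("beverages.db", "2024-01-01")
        print(df.head())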
Posted 2 days ago
5.0 years
2 - 3 Lacs
Hyderābād
On-site
Category: Business Consulting, Strategy and Digital Transformation
Main location: India, Andhra Pradesh, Hyderabad
Position ID: J0725-0862
Employment Type: Full Time
Position Description:
Job Title: Data Engineer
Experience Level: 5+ Years
Location: Hyderabad
Job Summary
We are looking for a seasoned and innovative Senior Data Engineer to join our dynamic data team. This role is ideal for professionals with a strong foundation in data engineering, coupled with hands-on experience in machine learning workflows, statistical analysis, and big data technologies. You will play a critical role in building scalable data pipelines, enabling advanced analytics, and supporting data science initiatives. Proficiency in Python is essential, and experience with PySpark is a strong plus.
Key Responsibilities
Data Pipeline Development: Design and implement scalable, high-performance ETL/ELT pipelines using Python and PySpark (a minimal PySpark sketch follows this posting). ML & Statistical Integration: Collaborate with data scientists to integrate machine learning models and statistical analysis into data workflows. Data Modeling: Create and optimize data models (relational, dimensional, and columnar) to support analytics and ML use cases. Big Data Infrastructure: Manage and optimize data platforms such as Snowflake, Redshift, BigQuery, and Databricks. Performance Tuning: Monitor and enhance the performance of data pipelines and queries. Data Governance: Ensure data quality, integrity, and compliance through robust governance practices. Cross-functional Collaboration: Partner with analysts, scientists, and product teams to translate business needs into technical solutions. Automation & Monitoring: Automate data workflows and implement monitoring and alerting systems. Mentorship: Guide junior engineers and promote best practices in data engineering and ML integration. Innovation: Stay current with emerging technologies in data engineering, ML, and analytics.
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in data engineering with a strong focus on Python and big data tools. Solid understanding of machine learning concepts and statistical analysis techniques. Proficiency in SQL and Python; experience with PySpark is highly desirable. Experience with cloud platforms (AWS, Azure, or GCP) and data tools (e.g., Glue, Data Factory, Dataflow). Familiarity with data warehousing and lakehouse architectures. Knowledge of data modeling techniques (e.g., star schema, snowflake schema). Experience with version control systems like Git. Strong problem-solving skills and ability to work in a fast-paced environment. Excellent communication and collaboration skills.
Skills: English Data Engineering Python SQLite Statistical Analysis
What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. 
You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
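For illustration only: a minimal sketch of the kind of ETL/ELT step described above, assuming a local Spark session and hypothetical input/output paths and column names.

    # Illustrative batch ETL step in PySpark: read raw JSON, cleanse, aggregate, write Parquet.
    # Paths and column names are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    def run_pipeline(input_path: str, output_path: str) -> None:
        spark = SparkSession.builder.appName("orders_daily_aggregate").getOrCreate()

        raw = spark.read.json(input_path)

        cleansed = (
            raw.dropDuplicates(["order_id"])
               .filter(F.col("amount").isNotNull())
               .withColumn("order_date", F.to_date("order_ts"))
        )

        daily = (
            cleansed.groupBy("order_date", "region")
                    .agg(F.sum("amount").alias("total_amount"),
                         F.countDistinct("customer_id").alias("unique_customers"))
        )

        # Partition by date so downstream consumers can prune efficiently.
        daily.write.mode("overwrite").partitionBy("order_date").parquet(output_path)
        spark.stop()

    if __name__ == "__main__":
        run_pipeline("s3a://raw-bucket/orders/", "s3a://curated-bucket/orders_daily/")

Delta Lake output, schema enforcement and orchestration (for example with Airflow or Azure Data Factory) would be natural next steps in a production pipeline.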
Posted 2 days ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Building on our past. Ready for the future
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.
Major Accountabilities of Position: An AI/ML Architect must have
Defining, designing, and delivering ML architecture patterns operable in native and hybrid cloud architectures. Collaborate with Enterprise Architecture, Info Security, DevOps and Data Intelligence teams to implement ML solutions. Defining data augmentation pipelines for unstructured data like documents, engineering drawings etc. Build new network architectures in CNN/LSTM/RCNN or develop wrappers for pre-trained models. Conduct feasibility of transfer learning fitment for a given problem (a minimal transfer-learning sketch follows this posting). Research, analyze, recommend, and select technical approaches to address challenging development and data integration problems related to ML model training and deployment in enterprise applications. Perform research activities to identify emerging technologies (Generative AI) and trends that may affect Data Science / ML life-cycle management in the enterprise application portfolio. Design and deploy AI/ML models in real-world environments and integrate AI/ML into large-scale enterprise applications using cloud-native or hybrid technologies. Demonstrated experience developing best practices and recommendations around tools/technologies for ML life-cycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches and model monitoring and tuning.
Knowledge / Experience / Competencies Required
IT Skills & Experience (Priority wise): Hands-on programming and architecture capabilities in Python. Demonstrated technical expertise in architecting solutions around AI, ML, deep learning and Generative AI related technologies. Experience in implementing and deploying Machine Learning solutions (using various models, such as GPT-4, Llama 2, Mistral AI, text-embedding-ada, Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Topic Modeling, Game Theory etc.). Understanding of the NVIDIA NeMo enterprise suite. Expertise in popular deep learning frameworks, such as TensorFlow, PyTorch, and Keras, for building, training, and deploying neural network models. Experience in AI solution development with external SaaS products like Azure OCR. Experience with AI/ML components like Azure ML Studio, JupyterHub, TensorFlow and scikit-learn. Hands-on knowledge of API frameworks. Familiarity with the transformer architecture and its applications in natural language processing (NLP), such as machine translation, text summarization, and question-answering systems. Expertise in designing and implementing CNNs for computer vision tasks, such as image classification, object detection, and semantic segmentation. Hands-on experience with RDBMS, NoSQL, and big data stores like Elasticsearch and Cassandra. Experience with open-source software. Experience using cognitive APIs and machine learning studios on the cloud. Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN). Familiarity with GPU computing and tools like CUDA and cuDNN to accelerate deep learning computations and reduce training times. 
Understanding of complete AI/ML project life cycle Understanding of data structures, data modelling and software architecture Good understanding of containerization and experience working with Docker, AKS. People Skills Clear and concise communication is vital for explaining complex machine learning concepts to non-technical stakeholders, presenting results, and collaborating with cross-functional teams. Ability to work independently and as part of a team. Being open to new ideas, embracing change, and adapting to evolving technologies and methodologies are crucial for staying relevant and effective in the rapidly changing field of machine learning. Cooperative mindset, flexibility, and the ability to work effectively in a team. Professional and open communication to all internal and external interfaces. Balancing multiple projects, prioritizing tasks, and meeting deadlines while maintaining a high standard of work requires effective time management and organizational skills. Accurately report to management in a timely and effective manner. Other Skills Outstanding analytical and problem-solving skills Education – Qualifications, Accreditation, Training Master’s in Information Technology / Big Data/Data Science/AI/Computer Science Minimum 4- Maximum 7 year experience as AI/ML Architect on AI and ML projects. Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-WB-Kolkata, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai Job Digital Solutions Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Head of Data Intelligence Duration of Contract 0
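For illustration only: a minimal transfer-learning fitment sketch, assuming a recent torchvision ResNet-50 backbone and a hypothetical number of target classes; the random tensors stand in for a real data loader over documents or engineering drawings.

    # Illustrative transfer-learning setup: freeze a pre-trained ResNet-50 backbone
    # and train only a new classification head. Class count and data are hypothetical.
    import torch
    from torch import nn
    from torchvision import models

    NUM_CLASSES = 12  # hypothetical number of document/drawing categories

    def build_model() -> nn.Module:
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        for param in model.parameters():
            param.requires_grad = False                               # freeze the backbone
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)       # new trainable head
        return model

    if __name__ == "__main__":
        model = build_model()
        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()
        # One dummy training step on random data, just to show the wiring.
        images = torch.randn(4, 3, 224, 224)
        labels = torch.randint(0, NUM_CLASSES, (4,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"dummy step loss: {loss.item():.4f}")

Freezing the backbone and training only the head is the simplest fitment check; unfreezing the last residual block and fine-tuning at a lower learning rate is the usual next step if accuracy is insufficient.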
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
Hyderābād
On-site
Job Description Overview
DataOps L3
The role will leverage and enhance existing technologies in the area of data and analytics solutions like Power BI, Azure data engineering technologies, ADLS, ADB, Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them to business users.
Responsibilities
5 to 10 years of IT and Azure data engineering technologies experience. Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services. Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse and file formats like JSON and Parquet. Experience in creating ADF pipelines to source and process data sets. Experience in creating Databricks notebooks to cleanse, transform and enrich data sets. Development experience in orchestration of pipelines. Good understanding of SQL, databases, and data warehouse systems, preferably Teradata. Experience in deployment and monitoring techniques. Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources. Experience in handling operations/integration with the source repository. Must have good knowledge of data warehouse concepts and data warehouse modelling. Working knowledge of ServiceNow (SNOW), including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights. Collaborate with the project team to understand tasks, model tables using data warehouse best practices and develop data pipelines to ensure the efficient delivery of data. Strong expertise in performance tuning and optimization of data processing systems. Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services. Develop and enforce best practices for data management, including data governance and security. Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs. Proficient in implementing a DataOps framework.
Qualifications
Azure Data Factory, Azure Databricks, Azure Synapse, PySpark/SQL, ADLS, Azure DevOps with CI/CD implementation.
Nice-to-Have Skill Sets: Business Intelligence tools (preferably Power BI), DP-203 certified.
Posted 2 days ago
12.0 years
5 - 9 Lacs
Hyderābād
On-site
Job Description Overview
PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tools integration, migrations, platform maintenance and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services – Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles and automation.
Responsibilities
The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tools adoption, cost optimization, and supporting new patterns/design solutions using the Databricks platform. Here’s a breakdown of typical responsibilities:
Core Technical Responsibilities: Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools. Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming). Create integration guidelines to configure and integrate Databricks with other existing security tools relevant to data access control. Implement data security and governance using Unity Catalog, access controls, and data classification techniques (an illustrative grant sketch follows this posting). Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP. Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.
Collaboration & Advisory: Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning. Partner with architects and business stakeholders to align Databricks solutions with enterprise goals. Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases.
Strategic & Leadership Contributions: Mentor junior engineers and promote knowledge sharing across teams. Contribute to platform adoption strategies, including training, documentation, and internal evangelism. Stay current with Databricks innovations and recommend enhancements to existing architectures.
Specialized Expertise (Optional but Valuable): Machine Learning & AI integration using MLflow, AutoML, or custom models. Cost optimization and workload sizing for large-scale data processing. Compliance and audit readiness for regulated industries.
Qualifications
Bachelor’s degree in Computer Science. At least 12 years of experience in IT cloud infrastructure, architecture and operations, including security, with at least 5 years in a platform admin role. Strong understanding of data security principles and best practices. Expertise in the Databricks platform, security features, Unity Catalog, and data access control mechanisms. Experience with data classification and masking techniques. Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms. Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, monitoring and security tools. 
Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks and subnets. Proficient in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Certifications in Azure/AWS/Databricks platform administration, networking and security are preferred. Strong self-organization, time management and prioritization skills A high level of attention to detail, excellent follow through, and reliability Strong collaboration, teamwork and relationship building skills across multiple levels and functions in the organization Ability to listen, establish rapport, and credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams Strategic thinker focused on business value results that utilize technical solutions Strong communication skills in writing, speaking, and presenting Capable to work effectively in a multi-tasking environment. Fluent in English language.
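For illustration only: a minimal sketch of Unity Catalog access grants driven from a notebook or job, assuming a Databricks workspace where a SparkSession already exists. The catalog, schema, table and group names are placeholders, and the GRANT statements should be validated against the workspace's Databricks runtime.

    # Illustrative Unity Catalog grants; intended to run inside a Databricks workspace.
    # Securable and principal names below are placeholders.
    GRANT_STATEMENTS = [
        "GRANT USE CATALOG ON CATALOG main TO `data_engineers`",
        "GRANT USE SCHEMA ON SCHEMA main.sales TO `data_engineers`",
        "GRANT SELECT ON TABLE main.sales.orders TO `bi_analysts`",
        "GRANT MODIFY ON TABLE main.sales.orders TO `data_engineers`",
    ]

    def apply_grants(spark) -> None:
        """Apply each grant via Spark SQL and log what was applied."""
        for statement in GRANT_STATEMENTS:
            spark.sql(statement)
            print(f"applied: {statement}")

    # In a Databricks notebook the `spark` session is already defined:
    # apply_grants(spark)

Keeping grants in version-controlled code like this (or in Terraform) makes access reviewable and repeatable rather than configured by hand in the UI.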
Posted 2 days ago
3.0 - 5.0 years
30 - 35 Lacs
Hyderābād
On-site
Job Title: Data Scientist / Machine Learning Specialist Location: Hyderabad (Hybrid Model) Experience: 3 to 5 Years Compensation: Up to ₹30 LPA Joining: Immediate or Short Notice Preferred About the Role: We are looking for a highly skilled and motivated Machine Learning Specialist / Data Scientist with a strong foundation in data science and a deep understanding of clinical supply chain or supply chain operations. This individual will play a critical role in developing predictive models, optimizing logistics, and enabling data-driven decision-making within our clinical trial supply chain ecosystem. Key Responsibilities: * Design, develop, and deploy machine learning models for demand forecasting, inventory optimization, and supply chain efficiency * Analyze clinical trial and logistics data to uncover insights and enable proactive planning * Collaborate with cross-functional teams including clinical operations, IT, and supply chain to integrate ML solutions into workflows * Build interactive dashboards and tools for real-time analytics and scenario modeling * Ensure models are scalable, maintainable, and compliant with regulatory frameworks (e.g., GxP, 21 CFR Part 11) * Stay up to date with the latest advancements in ML/AI and bring innovative solutions to complex clinical supply challenges Required Qualifications: * Master’s or Ph.D. in Computer Science, Data Science, Engineering, or a related field * 3–5 years of hands-on experience in machine learning, data science, or AI (preferably in healthcare or life sciences) * Proven experience with clinical or supply chain operations such as demand forecasting, IRT systems, and logistics planning * Proficiency in Python, R, SQL, and ML frameworks like scikit-learn, TensorFlow, or PyTorch * Solid knowledge of statistical modeling, time series forecasting, and optimization techniques * Strong analytical mindset and excellent communication skills * Ability to thrive in a fast-paced, cross-functional environment Preferred Qualifications: * Experience working with clinical trial systems and data (e.g., EDC, CTMS, IRT) * Understanding of regulatory requirements in clinical research * Familiarity with cloud platforms such as AWS, Azure, or GCP * Exposure to MLOps practices for model deployment and monitoring Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,500,000.00 per year Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Data science: 3 years (Required) Machine learning: 3 years (Preferred) Python: 3 years (Required) PyTorch: 3 years (Required) Work Location: In person
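For illustration only: a simple demand-forecasting baseline of the kind described above, assuming a hypothetical weekly demand extract with date and units_shipped columns; a real clinical supply model would add enrolment drivers, lead times and GxP-compliant validation.

    # Illustrative demand-forecasting baseline: lag features + gradient boosting.
    # The CSV layout (date, units_shipped) is hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    def make_features(df: pd.DataFrame) -> pd.DataFrame:
        df = df.sort_values("date").copy()
        for lag in (1, 2, 4):                                   # weekly lags
            df[f"lag_{lag}"] = df["units_shipped"].shift(lag)
        df["rolling_mean_4"] = df["units_shipped"].shift(1).rolling(4).mean()
        return df.dropna()

    def train_and_evaluate(path: str) -> None:
        raw = pd.read_csv(path, parse_dates=["date"])
        data = make_features(raw)
        features = [c for c in data.columns if c.startswith(("lag_", "rolling_"))]
        split = int(len(data) * 0.8)                            # time-ordered split, no shuffling
        train, test = data.iloc[:split], data.iloc[split:]

        model = GradientBoostingRegressor(random_state=42)
        model.fit(train[features], train["units_shipped"])
        preds = model.predict(test[features])
        print(f"MAE on holdout: {mean_absolute_error(test['units_shipped'], preds):.2f}")

    if __name__ == "__main__":
        train_and_evaluate("weekly_demand.csv")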
Posted 2 days ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com Position Title- Associate Director (Senior Architect – Data) Department-IT Location- Gurgaon/ Bangalore Job Summary The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at conceptual, logical, business area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining enterprise data architecture, ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects. Key Responsibilities Strategy & Planning Develop and deliver long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity. Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement. Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks. Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Ensure that data strategies and architectures are aligned with regulatory compliance. Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects’ goals. Ensure effective data management throughout the project lifecycle. Acquisition & Deployment Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.) Liaise with vendors and service providers to select the products or services that best meet company goals Operational Management o Assess and determine governance, stewardship, and frameworks for managing data across the organization. o Develop and promote data management methodologies and standards. 
Document information products from business processes and create data entities. Create entity relationship diagrams to show the digital thread across the value streams and the enterprise. Create data normalization across all systems and databases to ensure there is a common definition of data entities across the enterprise. Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data. Address the regulatory compliance requirements of each country and ensure our data is secure and compliant. Select and implement the appropriate tools, software, applications, and systems to support data technology goals. Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Collaborate with project managers and business unit leaders for all projects involving enterprise data. Address data-related problems regarding systems integration, compatibility, and multiple-platform integration. Act as a leader and advocate of data management, including coaching, training, and career development to staff. Develop and implement key components as needed to create testing criteria to guarantee the fidelity and performance of data architecture. Document the data architecture and environment to maintain a current and accurate view of the larger data picture. Identify and develop opportunities for data reuse, migration, or retirement.
Data Architecture Design: Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes. Design and implement scalable, high-performance data solutions that meet business requirements.
Data Governance: Establish and enforce data governance policies and procedures as agreed with stakeholders. Maintain data integrity, quality, and security within Finance, HR and other such enterprise systems.
Data Migration: Oversee the data migration process from legacy systems to the new systems being put in place. Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness.
Master Data Management: Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes. Provide data management (create, update and delimit) methods to ensure master data is governed.
Stakeholder Collaboration: Collaborate with various stakeholders, including business users and other system vendors, to understand data requirements. Ensure the enterprise system meets the organization's data needs.
Training and Support: Provide training and support to end-users on data entry, retrieval, and reporting within the candidate enterprise systems. Promote user adoption and proper use of data.
Data Quality Assurance: Implement data quality assurance measures to identify and correct data issues. Ensure that Oracle Fusion and other enterprise systems contain reliable and up-to-date information.
Reporting and Analytics: Facilitate the development of reporting and analytics capabilities within Oracle Fusion and other systems. Enable data-driven decision-making through robust data analysis.
Continuous Improvement: Continuously monitor and improve data processes and the data capabilities of Oracle Fusion and other systems. Leverage new technologies for enhanced data management to support evolving business needs. 
Technology and Tools: Oracle Fusion Cloud Data modeling tools (e.g., ER/Studio, ERwin) ETL tools (e.g., Informatica, Talend, Azure Data Factory) Data Pipelines: Understanding of data pipeline tools like Apache Airflow and AWS Glue. Database management systems: Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached) Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM) Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP) Hyperscalers / Cloud platforms (e.g., AWS, Azure) Big Data Technologies such as Hadoop, HDFS, MapReduce, and Spark Cloud Platforms such as Amazon Web Services, including RDS, Redshift, and S3, Microsoft Azure services like Azure SQL Database and Cosmos DB and experience in Google Cloud Platform services such as BigQuery and Cloud Storage. Programming Languages: (e.g. using Java, J2EE, EJB, .NET, WebSphere, etc.) SQL: Strong SQL skills for querying and managing databases. Python: Proficiency in Python for data manipulation and analysis. Java: Knowledge of Java for building data-driven applications. Data Security and Protocols: Understanding of data security protocols and compliance standards. Key Competencies Qualifications: Education: Bachelor’s degree in computer science, Information Technology, or a related field. Master’s degree preferred. Experience: 10+ years overall and at least 7 years of experience in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable. Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members. Excellent in documentation and presentations. Good knowledge of applicable data privacy practices and laws. Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus. Behavioral A self-starter, an excellent planner and executor and above all, a good team player Excellent communication skills and inter-personal skills are a must Must possess organizational skills, including multi-task capability, priority setting and meeting deadlines Ability to build collaborative relationships and effectively leverage networks to mobilize resources Initiative to learn business domain is highly desirable Likes dynamic and constantly evolving environment and requirements
Posted 2 days ago
10.0 - 18.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Building on our past. Ready for the future
Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now.
The Role
As a Data Science Lead with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. Conceptualise, build and manage an AI/ML platform (with more focus on unstructured data) by evaluating and selecting best-in-industry AI/ML tools and frameworks. Drive and take ownership for developing cognitive solutions for internal stakeholders and external customers. Conduct research in various areas like Explainable AI, Image Segmentation, 3D object detection and Statistical Methods. Evaluate not only algorithms and models but also available tools and technologies in the market to maximize organizational spend. Utilize the existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI/ML applications that scale from multi-user to enterprise class. Analyse marketplace trends - economic, social, cultural and technological - to identify opportunities and create value propositions. Offer a global perspective in stakeholder discussions and when shaping solutions/recommendations.
IT Skills & Experience
Thorough understanding of the complete AI/ML project life cycle to establish processes and provide guidance and expert support to the team. Expert knowledge of emerging technologies in Deep Learning and Reinforcement Learning. Knowledge of MLOps processes for efficient management of AI/ML projects. Must have led project execution with other data scientists/engineers for large and complex data sets. Understanding of machine learning algorithms, such as k-NN, GBM, Neural Networks, Naive Bayes, SVM, and Decision Forests (a brief model-comparison sketch follows this posting). Experience with AI/ML components like JupyterHub, Zeppelin Notebook, Azure ML Studio, Spark MLlib, TensorFlow, Keras, PyTorch and scikit-learn. Strong knowledge of deep learning with a special focus on CNN/R-CNN/LSTM/Encoder/Transformer architectures. Hands-on experience with large networks like Inception-ResNet and ResNeXt-50. Demonstrated capability using RNNs for text and speech data and generative models. Working knowledge of NoSQL (GraphX/Neo4j), document, columnar and in-memory database models. Working knowledge of ETL tools and techniques, such as Talend, SAP BI Platform/SSIS or MapReduce. Experience in building KPI/storytelling dashboards on visualization tools like Tableau/Zoomdata.
People Skills
Professional and open communication to all internal and external interfaces. Ability to communicate clearly and concisely and a flexible mindset to handle a quickly changing culture. Strong analytical skills. 
Industry Specific Experience
10–18 years of experience in AI/ML project execution and AI/ML research.
Education – Qualifications, Accreditation, Training
Master’s or Doctorate degree in Computer Science Engineering / Information Technology / Artificial Intelligence.
Moving forward together
We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Head of Data Intelligence
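For illustration only: a brief comparison of the classical algorithms named above using cross-validation on synthetic data; in a real engagement the feature matrix, metric and candidate set would come from the project.

    # Illustrative comparison of classical models (GBM, SVM, k-NN) with 5-fold cross-validation.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for project data.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    candidates = {
        "gbm": GradientBoostingClassifier(random_state=0),
        "svm": make_pipeline(StandardScaler(), SVC()),
        "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=15)),
    }

    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: mean ROC AUC = {scores.mean():.3f} (+/- {scores.std():.3f})")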
Posted 2 days ago
10.0 - 12.0 years
0 Lacs
Hyderābād
On-site
Job Description Overview
PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform technology leader, responsible for overseeing the design, deployment, and maintenance of the Enterprise Data Foundation cloud infrastructure initiative on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services – Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles and automation.
Responsibilities
Cloud Infrastructure & Automation: Manage and mentor a team of cloud platform infrastructure SMEs, providing technical leadership and direction. Provide guidance and support for application migration, modernization, and transformation projects, leveraging cloud-native technologies and methodologies. Implement cloud infrastructure policies, standards, and best practices, ensuring cloud environment adherence to security and regulatory requirements. Design, deploy and optimize cloud-based infrastructure using Azure/AWS services that meets the performance, availability, scalability, and reliability needs of our applications and services. Drive troubleshooting of cloud infrastructure issues, ensuring timely resolution and root cause analysis by partnering with the global cloud center of excellence, enterprise application teams, and PepsiCo premium cloud partners (Microsoft, AWS). Establish and maintain effective communication and collaboration with internal and external stakeholders, including business leaders, developers, customers, and vendors. Develop Infrastructure as Code (IaC) to automate provisioning and management of cloud resources. Write and maintain scripts for automation and deployment using PowerShell, Python, or the Azure/AWS CLI. Work with stakeholders to document architectures, configurations, and best practices. Knowledge of cloud security principles around data protection, Identity and Access Management (IAM), compliance and regulatory requirements, threat detection and prevention, disaster recovery and business continuity.
Qualifications
Bachelor’s degree in Computer Science. At least 10 to 12 years of experience in IT cloud infrastructure, architecture and operations, including security, with at least 8 years in a technical leadership role. Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps. Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, monitoring and security tools. Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks and subnets. Proficient in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible. Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences. Certifications in Azure/AWS platform administration, networking and security are preferred. 
Strong self-organization, time management and prioritization skills A high level of attention to detail, excellent follow through, and reliability Strong collaboration, teamwork and relationship building skills across multiple levels and functions in the organization Ability to listen, establish rapport, and credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams Strategic thinker focused on business value results that utilize technical solutions Strong communication skills in writing, speaking, and presenting Capable to work effectively in a multi-tasking environment. Fluent in English language.
Posted 2 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team . Enjoy a varied & challenging role. Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience etc. Design, Develop, and Maintain Web Applications: This covers the core responsibility of building and maintaining web applications using .NET and Angular frameworks. Collaborate with Cross-Functional Teams: Working with other teams (e.g., designers, product managers, QA) to define, design, and ship new features. Write Clean, Scalable, and Efficient Code: Writing code that follows best practices and coding standards to ensure it is clean, scalable, and efficient. Troubleshoot, Debug, and Optimize Application Performance: Identifying, diagnosing, and resolving issues to ensure the application performs optimally Participate in Code Reviews: Reviewing code written by peers to maintain code quality and share knowledge within the team. Ensure Technical Feasibility of UI/UX Designs: Assessing and implementing UI/UX designs to ensure they are technically feasible and effectively implemented. Stay Updated with Industry Trends and Best Practices: Keeping abreast of the latest industry trends, technologies, and best practices to continuously improve skills and development practices. About You To be considered for this role it is envisaged you will possess the following attributes: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Full Stack Developer with 3 to 4 years of expertise in .NET and Angular Strong proficiency in C#, ASP.NET, and .NET Core. Extensive experience with Angular (preferably Angular 2+). Solid understanding of front-end technologies including HTML5, CSS3, JavaScript, and TypeScript. Experience with RESTful APIs and web services. Familiarity with database technologies such as SQL Server, Entity Framework, and LINQ. Knowledge of version control systems like Git. Strong problem-solving skills and attention to detail. Excellent communication and teamwork abilities. Testing and Debugging: Conduct unit testing, integration testing, and end-to-end testing to ensure high-quality code. Debug and resolve issues in a timely manner to maintain application performance and reliability. Ability to communicate clearly and concisely and a flexible mindset to handle a quickly changing culture. Ability to work independently and as part of a team. Professional and open communication to all internal and external interfaces. Accurately report to management in a timely and effective manner. Experience with cloud platforms such as Azure or AWS. Familiarity with Agile/Scrum development methodologies. 
Knowledge of DevOps practices and CI/CD pipelines. Experience with other front-end frameworks or libraries (e.g., React). Moving forward together We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Here. Please note: If you are being represented by a recruitment agency you will not be considered, to be considered you will need to apply directly to Worley. Company Worley Primary Location IND-MM-Mumbai Other Locations IND-KR-Bangalore, IND-MM-Pune, IND-TN-Chennai, IND-MM-Navi Mumbai Job Digital Solutions Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Manager
Posted 2 days ago
5.0 years
2 - 7 Lacs
Hyderābād
Remote
At Meazure Learning, we believe in transforming learning and assessment experiences to unlock human potential. As a global leader in online testing and exam services, we support credentialing, licensure, workforce education, and higher education through purpose-built solutions that are secure, accessible, and deeply human-centered. With a global footprint across the U.S., Canada, India, and the U.K., our team is united by a passion for innovation and a commitment to integrity, quality, and learner success. About the Role We are looking for a seasoned Sr. DevOps Engineer to help us scale, secure, and optimize our infrastructure and deployment processes. This role is critical to enabling fast, reliable, and high-quality software delivery across our global engineering teams. You’ll be responsible for designing and maintaining cloud-based systems, automating operational workflows, and collaborating across teams to improve performance, observability, and uptime. The ideal candidate is hands-on, proactive, and passionate about creating resilient systems that support product innovation and business growth. Join Us and You’ll… Help define and elevate the user experience for learners and professionals around the world Collaborate with talented, mission-driven colleagues across regions Work in a culture that values trust, innovation, and transparency Have the opportunity to grow, lead, and make your mark in a high-impact, global organization Key Responsibilities Design, implement, and maintain scalable, secure, and reliable CI/CD pipelines Manage and optimize cloud infrastructure (e.g., AWS, Azure) and container orchestration (e.g., Kubernetes) Drive automation across infrastructure and development workflows Build and maintain monitoring, alerting, and logging systems to ensure reliability and observability Collaborate with Engineering, QA, and Security teams to deliver high-performing, compliant solutions Troubleshoot complex system issues in staging and production environments Guide and mentor junior engineers and contribute to DevOps best practices Desired Attributes: Key Skills 5+ years of experience in a DevOps or Site Reliability Engineering role Deep knowledge of cloud infrastructure (AWS, Azure, or GCP) Proficiency with containerization (Docker, Kubernetes) and Infrastructure as Code tools (Terraform, CloudFormation) Hands-on experience with writing code Hands-on experience with CI/CD platforms (Jenkins, GitHub Actions, or similar) Strong scripting capabilities (Bash, Python, or PowerShell) Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, or Datadog) A problem-solver with excellent communication and collaboration skills The Total Rewards - The Benefits Company Sponsored Health Insurance Competitive Pay Healthy Work Culture Career Growth Opportunities Learning and Development Opportunities Referral Award Program Company Provided IT Equipment (for remote team members) Transportation Program (on-site team members) Company Provided Meals (on-site team members) 14 Company Provided Holidays Generous Leave Program Learn more at www.meazurelearning.com Meazure Learning is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: Meazure Learning is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. 
All employment decisions at Meazure Learning are based on business needs, job requirements and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. Meazure Learning will not tolerate discrimination or harassment based on any of these characteristics.
Posted 2 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role OSTTRA India
The Role: MQ Administrator
The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.
The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.
What’s In It For You: We are looking for highly motivated technology professionals who will strengthen our specialisms and champion our uniqueness to create a company that is collaborative, respectful, and inclusive to all.
Responsibilities
Strong experience administering, migrating, installing, upgrading, and configuring IBM MQ (latest versions – MQ 9.3, MQ 9.4). Support for IBM MQ on Linux servers/Windows OS. Provide build support including troubleshooting, timely resolution, and root cause analysis. Support incident, problem and change management tasks. Implement change requests based on CM processes. Support client-related MQ requests efficiently. Operational support – flexible to provide on-call support when required on weekends and holidays.
What We’re Looking For
Degree in Computer Science, IT, or equivalent area of technical study. Minimum 5+ years of industry experience in IBM MQ administration and build. Design, install, configure, and maintain IBM MQ infrastructure in collaboration with development teams. Good experience in MQ troubleshooting (production support). Hands-on experience with creating and setting up MQ queue managers in the AWS environment (a minimal connectivity-check sketch in Python follows this posting). Plan, test, and implement MQ version updates as well as patching activities in the underlying operating system. Experience with management of SSL, TLS, data encryption, and certificates. Experience in setting up Multi-Instance/RDQM/Native HA configurations in MQ. Knowledge of integration protocols and standards, including Message Queuing (MQ, JMS) and File Transfer (FTP/SFTP). Queue manager cluster setup for workload management. Experience in webMethods security management such as certificate management, authentication, and authorization. Experience in cloud technologies (Azure, AWS, GCP) would be an added advantage. Experience in Windows Server Clusters/Linux OS. Knowledge of configuring MQ monitoring tools like Grafana/ITAM will be an added advantage.
The Location: Gurgaon, India
About Company Statement
OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. 
These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities. About OSTTRA Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA - however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end to end workflows - from trade capture at the point of execution, through portfolio optimization, to clearing and settlement. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com. What’s In It For You? Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. 
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group)

Job ID: 292807
Posted On: 2025-06-18
Location: Gurgaon, Haryana, India
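As a rough illustration of the channel, TLS, and queue concepts called out in the MQ Administrator requirements above, the sketch below shows a client-side connectivity check written with the third-party pymqi Python library. This is a minimal sketch, not part of the posting: the queue manager name, channel, host, cipher spec, key repository path, and queue name are placeholder values, and a real build would use whatever is defined on the server-connection channel.

```python
# Minimal, illustrative sketch only: all names, endpoints, and paths below are
# placeholders, not values from this posting. Requires the pymqi client library
# and an IBM MQ client installation; TLS settings must match the SVRCONN channel.
import pymqi

QUEUE_MANAGER = 'QM1'                      # placeholder queue manager
CHANNEL = 'DEV.APP.SVRCONN'                # placeholder client channel
CONN_INFO = 'mq.example.com(1414)'         # placeholder host(port)
QUEUE_NAME = b'DEV.QUEUE.1'                # placeholder local queue
CIPHER = 'ANY_TLS12_OR_HIGHER'             # must match SSLCIPH on the channel
KEY_REPO = '/var/mqm/ssl/client/key'       # key repository stem (no .kdb suffix)

# Client channel definition: TCP transport with a TLS cipher spec.
cd = pymqi.CD()
cd.ChannelName = CHANNEL.encode()
cd.ConnectionName = CONN_INFO.encode()
cd.ChannelType = pymqi.CMQC.MQCHT_CLNTCONN
cd.TransportType = pymqi.CMQC.MQXPT_TCP
cd.SSLCipherSpec = CIPHER.encode()

# SSL configuration object pointing at the client certificate key repository.
sco = pymqi.SCO()
sco.KeyRepository = KEY_REPO.encode()

qmgr = pymqi.QueueManager(None)            # None = do not connect yet
qmgr.connect_with_options(QUEUE_MANAGER, cd=cd, sco=sco)

queue = pymqi.Queue(qmgr, QUEUE_NAME)
queue.put(b'connectivity check')           # round-trip a test message
print(queue.get())
queue.close()
qmgr.disconnect()
```

A round-trip check of this kind is typically the quickest way to confirm that a newly built or migrated queue manager, its client channel, and its certificate setup are consistent before handing the environment over to application teams.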
Posted 2 days ago
6.0 years
0 Lacs
Hyderābād
On-site
Job Description

Overview
In this role, we are seeking an Associate Manager – Offshore Program & Delivery Management to oversee program execution, governance, and service delivery across DataOps, BIOps, AIOps, MLOps, Data IntegrationOps, SRE, and Value Delivery programs. This role requires expertise in offshore execution, cost optimization, automation strategies, and cross-functional collaboration to enhance operational excellence.

- Manage and support DataOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in real-time monitoring, automated alerting, and self-healing mechanisms to improve system reliability and performance (an illustrative SLA-check sketch follows this listing).
- Contribute to the development and enforcement of governance models and operational frameworks to streamline service delivery and execution roadmaps.
- Support the standardization and automation of pipeline workflows, report generation, and dashboard refreshes to enhance efficiency.
- Collaborate with global teams to support Data & Analytics transformation efforts and ensure sustainable, scalable, and cost-effective operations.
- Assist in proactive issue identification and self-healing automation, enhancing the sustainment capabilities of the PepsiCo Data Estate.

Responsibilities
- Support DataOps and SRE operations, assisting in offshore delivery of DataOps, BIOps, Data IntegrationOps, and related initiatives.
- Assist in implementing governance frameworks, tracking KPIs, and ensuring adherence to operational SLAs.
- Contribute to process standardization and automation efforts, improving service efficiency and scalability.
- Collaborate with onshore teams and business stakeholders, ensuring alignment of offshore activities with business needs.
- Monitor and optimize resource utilization, leveraging automation and analytics to improve productivity.
- Support continuous improvement efforts, identifying operational risks and ensuring compliance with security and governance policies.
- Assist in managing day-to-day DataOps activities, including incident resolution, SLA adherence, and stakeholder engagement.
- Participate in Agile work intake and management processes, contributing to strategic execution within data platform teams.
- Provide operational support for cloud infrastructure and data services, ensuring high availability and performance.
- Document and enhance operational policies and crisis management functions, supporting rapid incident response.
- Promote a customer-centric approach, ensuring high service quality and proactive issue resolution.
- Assist in team development efforts, fostering a collaborative and agile work environment.
- Adapt to changing priorities, supporting teams in maintaining focus on key deliverables.

Qualifications
- 6+ years of technology experience in a global organization, preferably in the CPG industry.
- 4+ years of experience in Data & Analytics, with a foundational understanding of data engineering, data management, and operations.
- 3+ years of cross-functional IT experience, working with diverse teams and stakeholders.
- 1–2 years of leadership or coordination experience, supporting team operations and service delivery.
- Strong communication and collaboration skills, with the ability to convey technical concepts to non-technical audiences.
- Customer-focused mindset, ensuring high-quality service and responsiveness to business needs.
- Experience in supporting technical operations for enterprise data platforms, preferably in a Microsoft Azure environment.
- Basic understanding of Site Reliability Engineering (SRE) practices, including incident response, monitoring, and automation.
- Ability to drive operational stability, supporting proactive issue resolution and performance optimization.
- Strong analytical and problem-solving skills, with a continuous improvement mindset.
- Experience working in large-scale, data-driven environments, ensuring smooth operations of business-critical solutions.
- Ability to support governance and compliance initiatives, ensuring adherence to data standards and best practices.
- Familiarity with data acquisition, cataloging, and data management tools.
- Strong organizational skills, with the ability to manage multiple priorities effectively.
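To make the monitoring and SLA-adherence responsibilities above a little more concrete, the short Python sketch below shows the shape of an automated freshness check that flags a pipeline whose last successful run has fallen outside its agreed window. All pipeline names, thresholds, and the alerting action are hypothetical; a real implementation would read run metadata from the platform's scheduler or observability tooling and route breaches to on-call alerting.

```python
# Minimal sketch of an automated data-freshness SLA check; pipeline names,
# thresholds, and the alerting action are hypothetical examples, not details
# from this listing.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class PipelineStatus:
    name: str
    last_success: datetime   # timestamp of the last successful run
    sla: timedelta           # agreed freshness window


def breached(status: PipelineStatus, now: datetime) -> bool:
    """True when the last successful run is older than the SLA window."""
    return now - status.last_success > status.sla


def check_and_alert(statuses: list[PipelineStatus]) -> None:
    now = datetime.now(timezone.utc)
    for s in statuses:
        if breached(s, now):
            # A real implementation would raise an incident or page on-call
            # through the monitoring stack; printing stands in for that here.
            print(f"ALERT: {s.name} breached its {s.sla} freshness SLA")


if __name__ == "__main__":
    check_and_alert([
        PipelineStatus("sales_daily_load",
                       datetime.now(timezone.utc) - timedelta(hours=30),
                       timedelta(hours=24)),
    ])
```

Checks of this kind are usually scheduled and wired into alerting so that SLA breaches surface proactively rather than being reported by stakeholders.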
Posted 2 days ago