3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Compelling Opportunity for an ML Engineer with an Innovative Entity in the Insurance Industry
Employment | Immediate
Location: Hyderabad, India
Reporting Manager: Head of Analytics
Work Pattern: Full time, 5 days in the office
Minimum Experience as an ML Engineer: 3 to 5 years

Position Overview:
The Innovative Entity in the Insurance Industry is seeking an experienced Machine Learning Engineer with 3 to 5 years of hands-on experience in designing, developing, and deploying machine learning models and systems. The ideal candidate will work closely with data scientists, software engineers, and product teams to create solutions that drive business value. You will be responsible for building scalable and efficient machine learning pipelines, optimizing model performance, and integrating models into production environments.

Key Responsibilities:
· Model Development & Training: Develop and train machine learning models, including supervised, unsupervised, and deep learning algorithms, to solve business problems.
· Data Preparation: Collaborate with data engineers to clean, preprocess, and transform raw data into usable formats for model training and evaluation.
· Model Deployment & Monitoring: Deploy machine learning models into production environments, ensuring seamless integration with existing systems and monitoring model performance.
· Feature Engineering: Create and test new features to improve model performance, and optimize feature selection to reduce model complexity.
· Algorithm Optimization: Research and implement state-of-the-art algorithms to improve model accuracy, efficiency, and scalability.
· Collaborative Development: Work closely with data scientists, engineers, and other stakeholders to understand business requirements, develop ML models, and integrate them into products and services.
· Model Evaluation: Conduct model evaluation using statistical tests, cross-validation, and A/B testing to ensure reliability and generalizability (a minimal sketch follows this posting).
· Documentation & Reporting: Maintain thorough documentation of processes, models, and systems. Provide insights and recommendations based on model results to stakeholders.
· Code Reviews & Best Practices: Participate in peer code reviews and ensure adherence to coding best practices, including version control (Git), testing, and continuous integration.
· Stay Updated on Industry Trends: Keep abreast of new techniques and advancements in machine learning, and suggest improvements to internal processes and models.

Required Skills & Qualifications:
· Education: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
· Experience: 3 to 5 years of hands-on experience as a machine learning engineer or in a related role.
· Programming Languages: Proficiency in Python (preferred), R, or Java. Experience with ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
· Data Manipulation: Strong knowledge of SQL and experience working with large datasets (e.g., using tools such as Pandas, NumPy, Spark).
· Cloud Services: Experience with cloud platforms such as AWS, Google Cloud, or Azure, particularly with ML services such as SageMaker or AI Platform.
· Model Deployment: Hands-on experience deploying ML models using Docker, Kubernetes, and CI/CD pipelines.
· Problem-Solving Skills: Strong analytical and problem-solving skills with the ability to understand complex data problems and implement effective solutions.
· Mathematics and Statistics: A solid foundation in mathematical concepts related to ML, such as linear algebra, probability, statistics, and optimization techniques.
· Communication Skills: Strong verbal and written communication skills to collaborate effectively with cross-functional teams and stakeholders.

Preferred Qualifications:
· Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
· Exposure to natural language processing (NLP), computer vision, or recommendation systems.
· Familiarity with version control systems (e.g., Git) and collaborative workflows.
· Experience with model interpretability and fairness techniques.
· Familiarity with big data tools (e.g., Hadoop, Spark, Kafka).

Screening Criteria:
· Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field.
· 3 to 5 years of hands-on experience as a machine learning engineer or in a related role.
· Proficiency in Python.
· Experience with ML libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras.
· Strong knowledge of SQL.
· Experience working with large datasets (e.g., using tools such as Pandas, NumPy, Spark).
· Experience with cloud platforms such as AWS, Google Cloud, or Azure, particularly with ML services such as SageMaker or AI Platform.
· Hands-on experience deploying ML models using Docker, Kubernetes, and CI/CD pipelines.
· A solid foundation in mathematical concepts related to ML, such as linear algebra, probability, statistics, and optimization techniques.
· Available to work from the office in Hyderabad.
· Available to join within 30 days.

Considerations:
· Location: Hyderabad
· Working from office
· 5-day work week

Evaluation Process:
Round 1 – HR round
Rounds 2 & 3 – Technical rounds
Round 4 – Discussion with the CEO

Interested candidates are encouraged to apply.
Note: Additional inputs will be gathered from the candidate to put together the application.
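As a loose illustration of the model-evaluation workflow this posting describes (cross-validation against a standard metric), here is a minimal scikit-learn sketch. The synthetic dataset, random-forest model, and F1 metric are illustrative assumptions, not requirements of the role:

```python
# Minimal sketch: evaluate a classifier with k-fold cross-validation.
# The synthetic data and model choice below are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for a real, preprocessed business dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold cross-validation on F1 score to gauge generalizability.
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(f"F1 per fold: {scores.round(3)}  mean: {scores.mean():.3f}")
```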
Posted 1 month ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
PayPay is looking for an experienced Cloud-Based AI and ML Engineer. This role involves leveraging cloud-based AI/ML services to build infrastructure, as well as developing, deploying, and maintaining ML models, collaborating with cross-functional teams, and ensuring scalable and efficient AI solutions, particularly on Amazon Web Services (AWS).

Main Responsibilities
1. Cloud Infrastructure Management:
- Architect and maintain cloud infrastructure for AI/ML projects using AWS tools.
- Implement best practices for security, cost management, and high availability.
- Monitor and manage cloud resources to ensure seamless operation of ML services.
2. Model Development and Deployment:
- Design, develop, and deploy machine learning models using AWS services such as SageMaker.
- Collaborate with data scientists and data engineers to create scalable ML workflows.
- Optimize models for performance and scalability on AWS infrastructure.
- Implement CI/CD pipelines to streamline and accelerate the model development and deployment process.
- Set up a cloud-based development environment for data engineers and data scientists to facilitate model development and exploratory data analysis.
- Implement monitoring, logging, and observability to streamline operations and ensure efficient management of models deployed in production (a minimal sketch follows this posting).
3. Data Management:
- Work with structured and unstructured data to train robust ML models.
- Use AWS data storage and processing services such as S3, RDS, Redshift, or DynamoDB.
- Ensure data integrity and compliance with applicable security regulations and standards.
4. Collaboration and Communication:
- Collaborate with cross-functional teams, including DevOps, Data Engineering, and Product Management.
- Communicate technical concepts effectively to non-technical stakeholders.
5. Continuous Improvement and Innovation:
- Stay updated with the latest advancements in AI/ML technologies and AWS services.
- Provide automation that lets developers easily develop and deploy their AI/ML models on AWS.

Tech Stack
- AWS: VPC, EC2, ECS, EKS, Lambda, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK, Amazon Kinesis, CodeCommit, CodeBuild, CodeDeploy, CodePipeline, AWS Lake Formation, AWS Glue, SageMaker, and other AI services.
- Terraform, GitHub Actions, Prometheus, Grafana, Atlantis
- OSS (administration experience with these tools): Jupyter, MLflow, Argo Workflows, Airflow

Required Skills and Experiences
- 5+ years of technical experience in cloud-based infrastructure with a focus on AI and ML platforms.
- Extensive hands-on experience with computing, storage, and analytical services on AWS.
- Demonstrated skill in programming and scripting languages, including Python, shell scripting, Go, and Rust.
- Experience with infrastructure-as-code (IaC) tools on AWS, such as Terraform, CloudFormation, and CDK.
- Proficient in Linux internals and system administration.
- Experience in production-level infrastructure change management and releases for business-critical systems.
- Experience in cloud infrastructure and platform availability, performance, and cost management.
- Strong understanding of cloud security best practices and payment-industry compliance standards.
- Experience with cloud-services monitoring, detection, and response, as well as performance tuning and cost control.
- Familiarity with cloud infrastructure service patching and upgrades.
- Excellent oral, written, and interpersonal communication skills.

Preferred Qualifications
- Bachelor's degree or above in a technology-related field
- Experience with other cloud service providers (e.g., GCP, Azure)
- Experience with Kubernetes
- Experience with event-driven architecture (Kafka preferred)
- Experience using and contributing to open-source tools
- Experience in managing IT compliance and security risk
- Published papers / blogs / articles
- Relevant and verifiable certifications

Remarks
*Please note that you cannot apply for PayPay (Japan-based jobs) or other positions in parallel or in duplicate.

PayPay 5 senses
Please refer to the PayPay 5 senses to learn what we value at work.

Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work in the Gurugram office to establish the strong core team.
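As a rough illustration of the monitoring/observability responsibility above, here is a minimal Python sketch that publishes a custom inference-latency metric to Amazon CloudWatch with boto3. The namespace, metric name, dimension, and region are assumptions for illustration only:

```python
# Minimal sketch: emit a custom model-latency metric to CloudWatch.
# Namespace, metric name, dimension, and region below are illustrative assumptions.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

def record_inference_latency(model_name: str, latency_ms: float) -> None:
    """Publish one latency datapoint for a deployed model.

    CloudWatch timestamps the datapoint at receipt time since no Timestamp is given.
    """
    cloudwatch.put_metric_data(
        Namespace="MLPlatform/Inference",  # assumed namespace
        MetricData=[{
            "MetricName": "InferenceLatencyMs",
            "Dimensions": [{"Name": "ModelName", "Value": model_name}],
            "Value": latency_ms,
            "Unit": "Milliseconds",
        }],
    )

record_inference_latency("fraud-scoring-v3", 42.7)
```

An alarm on this metric (e.g., p99 latency above a threshold) would then be one way to close the loop between deployment and operations.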
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us:
Traya is an Indian direct-to-consumer hair care brand whose platform provides holistic treatment for consumers dealing with hair loss. The company provides personalized consultations that help determine the root cause of hair fall, along with a range of hair care products curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis. Our unique platform diagnoses the patient's hair and health history to identify the root cause behind hair fall and delivers customized hair kits right to their doorstep. We have a strong adherence system in place via medically trained hair coaches and proprietary tech, through which we guide customers across their hair growth journey and help them stay on track. Traya is founded by Saloni Anand, a techie-turned-marketeer, and Altaf Saiyed, a Stanford Business School alumnus.

Our Vision:
Traya was created with a global vision to create awareness around hair loss, de-stigmatise it, and empathize with customers about its emotional and psychological impact. Most importantly, we combine three different sciences (Ayurveda, Allopathy, and Nutrition) to create a holistic solution for hair loss patients.

Responsibilities:
Data Analysis and Exploration:
Conduct in-depth analysis of large and complex datasets to identify trends, patterns, and anomalies.
Perform exploratory data analysis (EDA) to understand data distributions, relationships, and quality.
Machine Learning and Statistical Modeling:
Develop and implement machine learning models (e.g., regression, classification, clustering, time series analysis) to solve business problems.
Evaluate and optimize model performance using appropriate metrics and techniques.
Apply statistical methods to design and analyze experiments and A/B tests (a minimal sketch follows this posting).
Implement and maintain models in production environments.
Data Engineering and Infrastructure:
Collaborate with data engineers to ensure data quality and accessibility.
Contribute to the development and maintenance of data pipelines and infrastructure.
Work with cloud platforms (e.g., AWS, GCP, Azure) and big data technologies (e.g., Spark, Hadoop).
Communication and Collaboration:
Effectively communicate technical findings and recommendations to both technical and non-technical audiences.
Collaborate with product managers, engineers, and other stakeholders to define and prioritize projects.
Document code, models, and processes for reproducibility and knowledge sharing.
Present findings to leadership.
Research and Development:
Stay up-to-date with the latest advancements in data science and machine learning.
Explore and evaluate new tools and techniques to improve data science capabilities.
Contribute to internal research projects.

Qualifications:
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
3-5 years of experience as a Data Scientist or in a similar role.
Experience leveraging SageMaker features, including SageMaker Studio, Autopilot, Experiments, Pipelines, and Inference, to optimize model development and deployment workflows.
Proficiency in Python and relevant libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow, PyTorch).
Solid understanding of statistical concepts and machine learning algorithms.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Experience deploying models to production.
Experience with version control (Git).

Preferred Qualifications:
Experience with specific industry domains (e.g., e-commerce, finance, healthcare).
Experience with natural language processing (NLP) or computer vision.
Experience building recommendation engines.
Experience with time series forecasting.
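A rough illustration of the A/B test analysis mentioned above: a two-proportion z-test on conversion counts using statsmodels. The counts and significance threshold are made-up values for illustration only:

```python
# Minimal sketch: compare A/B conversion rates with a two-proportion z-test.
# The counts and alpha below are illustrative assumptions, not real data.
from statsmodels.stats.proportion import proportions_ztest

conversions = [312, 365]   # converted users in variant A and variant B
exposures = [5000, 5100]   # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05
if p_value < alpha:
    print("Difference in conversion rates is statistically significant.")
else:
    print("No significant difference detected at the chosen alpha.")
```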
Posted 1 month ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration (a minimal retrieval sketch follows this posting).
Architect and build production-grade Python applications using frameworks such as FastAPI or Flask.
Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment.
Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection.
Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring.
Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines.
Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings.
Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures.
Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure).
Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability.
Proven experience with batch data pipelines and training/inference orchestration.
Familiarity with tools such as Airflow/Dagster, SageMaker, and data pipeline architecture.
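For the RAG responsibility above, here is a minimal, library-agnostic sketch of the retrieval step: rank document chunks by cosine similarity against a query embedding, then paste the top chunks into a prompt. The embed() function, the documents, and the prompt template are hypothetical placeholders for whatever embedding model and vector store a team actually uses:

```python
# Minimal RAG retrieval sketch. embed() is a hypothetical placeholder for a real
# embedding model/API; the documents and prompt template are illustrative only.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: replace with a real embedding model call."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

documents = [
    "Refund requests are processed within 7 business days.",
    "Premium users get 24/7 chat support.",
    "Passwords can be reset from the account settings page.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM of choice
```

In a production system the brute-force similarity scan would typically be replaced by a vector store, but the shape of the retrieval-then-prompt step stays the same.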
Posted 1 month ago
0.0 years
0 Lacs
Gurugram, Haryana
On-site
Lead Agentic AI Developer
Gurgaon, India; Hyderabad, India; Bangalore, India | Information Technology | Job ID 316524

Job Description
About The Role: Grade Level (for internal use): 12
Lead Agentic AI Developer
Location: Gurgaon, Hyderabad and Bangalore

Job Description: A Lead Agentic AI Developer will drive the design, development, and deployment of autonomous AI systems that enable intelligent, self-directed decision-making. Their day-to-day operations focus on advancing AI capabilities, leading teams, and ensuring ethical, scalable implementations.

Responsibilities
AI System Design and Development: Architect and build autonomous AI systems that integrate with enterprise workflows, cloud platforms, and LLM frameworks. Develop APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making (a toy agent-loop sketch follows this posting).
Team Leadership and Mentorship: Lead cross-functional teams of AI engineers, data scientists, and developers. Mentor junior staff in agentic AI principles, reinforcement learning, and ethical AI governance.
Customization and Advancement: Optimize autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Fine-tune LLMs, multi-agent frameworks, and feedback loops to align with business goals.
Ethical AI Governance: Monitor AI behavior, audit decision-making processes, and implement safeguards to ensure transparency, fairness, and compliance with regulatory standards.
Innovation and Research: Spearhead R&D initiatives to advance agentic AI capabilities. Experiment with emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems.
Documentation and Thought Leadership: Publish technical white papers, case studies, and best practices for autonomous AI. Share insights at conferences and contribute to open-source AI communities.
System Validation: Oversee rigorous testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation. Validate alignment with ethical and performance benchmarks.
Stakeholder Leadership: Collaborate with executives, product teams, and compliance officers to align AI initiatives with strategic objectives. Advocate for AI-driven innovation across the organization.

What We're Looking For: Required Skills/Qualifications
Technical Expertise: 8+ years as a Senior AI Engineer, ML Architect, or AI Solutions Lead, with 5+ years focused on autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines). Expertise in Python (mandatory) and familiarity with Node.js. Hands-on experience with autonomous AI tools: LangChain, Autogen, CrewAI, or custom agentic frameworks. Proficiency in cloud platforms: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI. Experience with MLOps pipelines (e.g., Kubeflow, MLflow) and scalable deployment of AI agents.
Leadership: Proven track record of leading AI/ML teams, managing complex projects, and mentoring technical staff.
Ethical AI: Familiarity with AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and bias mitigation techniques.
Communication: Exceptional ability to translate technical AI concepts for non-technical stakeholders.
Nice to have: Contributions to AI research (published papers, patents) or open-source AI projects (e.g., TensorFlow Agents, AutoGen). Experience with DevOps/MLOps tools: Kubeflow, MLflow, Docker, or Terraform. Expertise in NLP, computer vision, or graph-based AI systems. Familiarity with quantum computing or neuromorphic architectures for AI.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 316524
Posted On: 2025-06-11
Location: Gurgaon, Haryana, India
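To make the "agents and pipelines for context-aware decision-making" responsibility concrete, here is a deliberately toy, framework-free agent-loop sketch in Python. The call_llm() function and the tool registry are hypothetical placeholders, not references to any specific framework or API named in the posting:

```python
# Toy agentic loop sketch: an LLM picks a tool, the tool runs, the result is fed back.
# call_llm() is a hypothetical placeholder; tools and prompts are illustrative only.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: a real implementation would call an LLM API here."""
    return json.dumps({"tool": "lookup_policy", "args": {"topic": "refunds"}, "done": False})

TOOLS = {
    "lookup_policy": lambda topic: f"Policy text about {topic}.",
}

def run_agent(task: str, max_steps: int = 3) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = json.loads(call_llm("\n".join(history)))
        if decision.get("done"):
            return decision.get("answer", "")
        observation = TOOLS[decision["tool"]](**decision["args"])
        history.append(f"Observation: {observation}")  # feed the result back to the LLM
    return "Stopped after max_steps without a final answer."

print(run_agent("Summarize the refund policy"))
```

Production agentic frameworks add planning, memory, guardrails, and observability around this loop, but the decide-act-observe cycle is the core idea.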
Posted 1 month ago
7.0 years
0 Lacs
Gurugram, Haryana
On-site
Agentic AI Architect
Gurgaon, India; Hyderabad, India; Bangalore, India | Information Technology | Job ID 316525

Job Description
About The Role: Grade Level (for internal use): 13
Location: Gurgaon, Hyderabad and Bangalore

Job Description: We are seeking a highly skilled and visionary Agentic AI Architect to lead the strategic design, development, and scalable implementation of autonomous AI systems within our organization. This role demands an individual with deep expertise in cutting-edge AI architectures, a strong commitment to ethical AI practices, and a proven ability to drive innovation. The ideal candidate will architect intelligent, self-directed decision-making systems that integrate seamlessly with enterprise workflows and propel our operational efficiency forward.

Key Responsibilities
As an Agentic AI Architect, you will:
AI Architecture and System Design: Architect and design robust, scalable, and autonomous AI systems that seamlessly integrate with enterprise workflows, cloud platforms, and advanced LLM frameworks. Define blueprints for APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
Strategic AI Leadership: Provide technical leadership and strategic direction for AI initiatives focused on agentic systems. Guide cross-functional teams of AI engineers, data scientists, and developers in the adoption and implementation of advanced AI architectures.
Framework and Platform Expertise: Evaluate, recommend, and implement leading AI tools and frameworks, with a strong focus on autonomous AI solutions (e.g., multi-agent frameworks, self-optimizing systems, LLM-driven decision engines). Drive the selection and utilization of cloud platforms (AWS SageMaker preferred, Azure ML, Google Cloud Vertex AI) for scalable AI deployments.
Customization and Optimization: Design strategies for optimizing autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Define methodologies for fine-tuning LLMs, multi-agent frameworks, and feedback loops to align with overarching business goals and architectural principles.
Innovation and Research Integration: Spearhead the integration of R&D initiatives into production architectures, advancing agentic AI capabilities. Evaluate and prototype emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems for architectural viability.
Documentation and Architectural Blueprinting: Develop comprehensive technical white papers, architectural diagrams, and best practices for autonomous AI system design and deployment. Serve as a thought leader, sharing architectural insights at conferences and contributing to open-source AI communities.
System Validation and Resilience: Design and oversee rigorous architectural testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation strategies, ensuring alignment with compliance, ethical, and performance benchmarks for robust production systems.
Stakeholder Collaboration & Advocacy: Collaborate with executives, product teams, and compliance officers to align AI architectural initiatives with strategic objectives. Advocate for AI-driven innovation and architectural best practices across the organization.

Qualifications:
Technical Expertise:
12+ years of progressive experience in AI/ML, with a strong track record as an AI Architect, ML Architect, or AI Solutions Lead.
7+ years specifically focused on designing and architecting autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines).
Expertise in Python (mandatory) and familiarity with Node.js for architectural integrations.
Extensive hands-on experience with autonomous AI tools and frameworks: LangChain, Autogen, CrewAI, or architecting custom agentic frameworks.
Proficiency in cloud platforms for AI architecture: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI, with a deep understanding of their AI service offerings.
Demonstrable experience with MLOps pipelines (e.g., Kubeflow, MLflow) and designing scalable deployment strategies for AI agents in production environments.
Leadership & Strategic Acumen:
Proven track record of leading the architectural direction of AI/ML teams, managing complex AI projects, and mentoring senior technical staff.
Strong understanding and practical application of AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and advanced bias mitigation techniques within AI architectures.
Exceptional ability to translate complex technical AI concepts into clear, concise architectural plans and strategies for non-technical stakeholders and executive leadership.
Ability to envision and articulate a long-term strategy for AI within the business, aligning AI initiatives with business objectives and market trends.
Foster collaboration across various practices, including product management, engineering, and marketing, to ensure cohesive implementation of AI strategies that meet business goals.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology – the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 316525
Posted On: 2025-06-11
Location: Gurgaon, Haryana, India
Posted 1 month ago
0.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India | Job ID 766481

Join our Team

About this Opportunity
The complexity of running and optimizing the next generation of wireless networks, such as 5G with distributed edge compute, will require Machine Learning (ML) and Artificial Intelligence (AI) technologies. Ericsson is setting up an AI Accelerator Hub in India to fast-track our strategy execution, using Machine Intelligence (MI) to drive thought leadership, automate, and transform Ericsson's offerings and operations. We collaborate with academia and industry to develop state-of-the-art solutions that simplify and automate processes, creating new value through data insights.

What you will do
As a Senior Data Scientist, you will apply your knowledge of data science and ML tools, backed by strong programming skills, to solve real-world problems.

Responsibilities:
1. Lead AI/ML features/capabilities in product/business areas
2. Define business metrics of success for AI/ML projects and translate them into model metrics
3. Lead end-to-end development and deployment of Generative AI solutions for enterprise use cases
4. Design and implement architectures for vector search, embedding models, and RAG systems
5. Fine-tune and evaluate large language models (LLMs) for domain-specific tasks
6. Collaborate with stakeholders to translate vague problems into concrete Generative AI use cases
7. Develop and deploy generative AI solutions using AWS services such as SageMaker, Bedrock, and other AWS AI tools; provide technical expertise and guidance on implementing GenAI models and best practices within the AWS ecosystem
8. Develop secure, scalable, and production-grade AI pipelines
9. Ensure ethical and responsible AI practices
10. Mentor junior team members in GenAI frameworks and best practices
11. Stay current with research and industry trends in Generative AI and apply cutting-edge techniques
12. Contribute to internal AI governance, tooling frameworks, and reusable components
13. Work with large datasets, including petabytes of 4G/5G network and IoT data
14. Propose, select, and test predictive models and other ML systems
15. Define visualization and dashboarding requirements with business stakeholders
16. Build proof-of-concepts for business opportunities using AI/ML
17. Lead functional and technical analysis to define AI/ML-driven business opportunities
18. Work with multiple data sources and apply the right feature engineering to AI models
19. Lead studies and creative usage of new/existing data sources

What you will bring
Required experience: minimum 7 years
1. Bachelor's/Master's/Ph.D. in Computer Science, Data Science, AI, ML, Electrical Engineering, or related disciplines from reputed institutes
2. 3+ years of applied ML/AI production-level experience
3. Strong programming skills (R/Python)
4. Proven ability to lead AI/ML projects end-to-end
5. Strong grounding in mathematics, probability, and statistics
6. Hands-on experience with data analysis, visualization techniques, and ML frameworks (Python, R, H2O, Keras, TensorFlow, Spark ML)
7. Experience with semi-structured/unstructured data for AI/ML models
8. Strong understanding of building AI models using deep neural networks (a minimal Keras sketch follows this posting)
9. Experience with Big Data technologies (Hadoop, Cassandra)
10. Ability to source and combine data from multiple sources for ML models

Preferred Qualifications:
1. Good communication skills in English
2. MI MOOC certifications, a plus
3. Domain knowledge in Telecommunication/IoT, a plus
4. Experience with data visualization and dashboard creation, a plus
5. Knowledge of cognitive models, a plus
6. Experience in partnering and collaborative co-creation in a global matrix organization

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
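As a loose illustration of the "deep neural networks with Keras/TensorFlow" requirement above, here is a minimal binary-classification network. The synthetic data, layer sizes, and training settings are assumptions for illustration only, not anything specific to Ericsson's stack:

```python
# Minimal Keras sketch: a small dense network for binary classification.
# Synthetic data, layer sizes, and hyperparameters are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 16)).astype("float32")   # stand-in features
y = (X[:, 0] + X[:, 1] > 0).astype("float32")           # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
print(f"training-set accuracy: {acc:.3f}")
```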
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 3+ years of non-internship professional software development experience - 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience - Experience programming with at least one software programming language - 2+ years of relevant experience in developing and deploying large-scale machine learning or deep learning models/systems into production, batch and real-time data processing, model containerization, CI/CD pipelines, API development, model training and productionizing ML models, and using Python and frameworks such as PyTorch, TensorFlow We are seeking an exceptional Machine Learning Engineer to join a team of experts in the field of AI/ML, and work together to tackle challenging business problems across diverse compliance domains. We leverage and train state-of-the-art multi-modal, large-language-models (LLMs), and vision language models (VLMs) to detect illegal and unsafe products across the Amazon catalog. We work on machine learning problems for generative AI, agentic systems, multi-modal classification, intent detection, information retrieval, anomaly and fraud detection. As a machine learning engineer, you will work with a highly skilled cross-functional team to invent, design, build and manage scalable ML systems. You will be solving challenging customer problems that are yet to be solved, conduct rapid prototyping, and deploy ML models to production. You will be using the latest innovations in AI, AWS, and industry technologies to build software, keeping our customers safe. If you’d like to make a real-world difference by working hard, having fun, and making history, this is the team for you! Key job responsibilities In this role you will: - Contribute to defining the system architecture, own implementation of specific components, and help shape the overall experience - Collaborate closely with other team members to help define the scope of the product - Take responsibility for technical problem solving to creatively meeting objectives, while insisting on best practices - Write high-quality, efficient, testable code in Python and other object-oriented languages - Design Amazon-scale tools to facilitate internal business - Build highly available, secure, and low-latency systems - Find out what it takes to engineer systems for "Amazon Scale" - Own and operate the systems that you build based on real-time customer data and demanding service-level agreements A day in the life High-level designs, cross-team alignment, long-term architectural roadmap and technical strategy, understanding the business domain and proposing solutions to address customer and business problems, helping scope and analyze product requirements, mentorship, reviewing and writing high quality code. About the team We are a team of scientists and engineers building AI/ML solutions to make Amazon the Earth’s most trusted shopping destination for safe and compliant products. 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Experience with AWS services like SageMaker, EMR, S3, DynamoDB, and EC2 for machine learning, deep learning, NLP, GenAI, distributed training, and model hosting Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
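As an illustration of the multi-modal classification work described in the posting above, here is a minimal sketch of flagging a product listing's text with a zero-shot classifier. This is not the team's actual system; the model name and candidate labels are assumptions chosen only for the example.

```python
# Minimal sketch: zero-shot classification of a product listing's text.
# Model and labels are illustrative assumptions, not a production configuration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

listing_text = "High-powered 500mW green laser pointer, burns matches instantly"
candidate_labels = ["restricted or unsafe product", "ordinary consumer product"]

result = classifier(listing_text, candidate_labels=candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```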
Posted 1 month ago
0 years
0 Lacs
Chandigarh, India
On-site
Job Description: AI/ML Specialist

Overview
We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up to date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills And Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks. Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. (ref:hirist.tech)
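To make the "DRF plus ML model" requirement concrete, here is a minimal sketch of a prediction endpoint. The model path, feature schema, and view name are illustrative assumptions, not part of the posting.

```python
# views.py -- minimal DRF prediction endpoint; model path and input schema are assumptions.
import joblib
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

# Hypothetical pre-trained scikit-learn model, loaded once at import time
MODEL = joblib.load("artifacts/model.joblib")


class PredictView(APIView):
    def post(self, request):
        features = request.data.get("features")
        if not isinstance(features, list):
            return Response(
                {"error": "'features' must be a list of numbers"},
                status=status.HTTP_400_BAD_REQUEST,
            )
        prediction = MODEL.predict([features])[0]
        return Response({"prediction": float(prediction)})

# urls.py would register it, e.g. path("api/predict/", PredictView.as_view())
```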
Posted 1 month ago
4.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Salary: 10 to 25 LPA
Title: Sr. Data Scientist/ML Engineer (4+ years and above)

Required Technical Skillset
Language: Python, PySpark
Frameworks: Scikit-learn, TensorFlow, Keras, PyTorch
Libraries: NumPy, Pandas, Matplotlib, SciPy, boto3
Database: Relational database (PostgreSQL), NoSQL database (MongoDB)
Cloud: AWS cloud platform
Other Tools: Jenkins, Bitbucket, JIRA, Confluence

A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer include:
Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models.
Model building, which includes designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning.
Evaluating the performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score.
Deploying machine learning models in production environments and integrating them into existing systems using CI/CD pipelines and AWS SageMaker.
Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency.
Working closely with software engineers, product managers, and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization.

Requirements And Skills
Mathematics and Statistics: A strong foundation in mathematics and statistics is essential. Candidates need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms.
Programming Skills: Should be proficient in programming languages such as Python and able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms.
Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning, and be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning.
Data Analysis and Visualization: Should be able to analyze and manipulate large data sets, and be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data.
Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras, and be able to build and train deep neural networks for various applications.
Big Data Technologies: Should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases, and be familiar with distributed computing and parallel processing to handle large data sets.
Software Engineering: Should have a good understanding of software engineering principles such as version control, testing, and debugging, and be able to work with software development tools such as Git, Jenkins, and Docker.
Communication and Collaboration: Should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders. (ref:hirist.tech)
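The evaluation metrics named in this posting (accuracy, precision, recall, F1) can be computed with scikit-learn as in the short sketch below; the labels are toy data used only to show the calls.

```python
# Toy example of the evaluation metrics mentioned above.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```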
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.

Key Responsibilities
Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring. Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems. Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker. Automate ML workflows using CI/CD best practices and tools. Ensure model reproducibility, governance, and performance tracking. Monitor deployed models for data drift, model decay, and performance metrics. Implement robust versioning and model registry systems. Apply security, performance, and compliance best practices across ML systems. Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.

Required Skills & Qualifications
4+ years of experience in Software Engineering or MLOps, preferably in a production environment. Proven experience with AWS services, especially AWS SageMaker for model development and deployment. Working knowledge of AWS DataZone (preferred). Strong programming skills in Python, with exposure to R, Scala, or Apache Spark. Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes). Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools. Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline. Solid understanding of DevOps and cloud-native infrastructure practices. Excellent problem-solving skills and the ability to work collaboratively across teams. (ref:hirist.tech)
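One simple way to approach the data-drift monitoring mentioned above is a two-sample statistical test per feature. The sketch below uses a Kolmogorov-Smirnov test on synthetic data; the distributions and alerting threshold are assumptions for illustration.

```python
# Illustrative data-drift check: compare a training-time feature sample with production traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)  # stand-in for the training distribution
current = rng.normal(loc=0.3, scale=1.0, size=5000)    # stand-in for recent production values

stat, p_value = ks_2samp(reference, current)
print(f"KS statistic={stat:.3f}, p-value={p_value:.2e}")
if p_value < 0.01:  # threshold is an assumption; tune per feature in practice
    print("Possible drift detected -- consider retraining or investigating the feature.")
```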
Posted 1 month ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title : Technical Delivery Manager Experience : 12+ years Location : Pune, Kharadi Employment Type : Full-time Job Summary We are seeking a seasoned Technical Delivery Manager with 12+ years of experience to lead and manage large-scale, complex programs. The ideal candidate will have a strong background in project and delivery management, with expertise in Agile methodologies, risk management, stakeholder communication, and cross-functional team leadership. Key Responsibilities Delivery & Execution: Oversee end-to-end project execution, ensuring alignment with business objectives, timelines, and quality standards. Agile & SCRUM Management: Drive Agile project delivery, coordinating across multiple teams and ensuring adherence to best practices. Risk & Dependency Management: Participate in design discussions to identify risks, dependencies, and mitigation strategies. Stakeholder Communication: Report and present program progress to senior management and executive leadership. Client Engagement: Lead customer presentations, articulations, and discussions, ensuring effective communication and alignment. Cross-Team Coordination: Collaborate with globally distributed teams to ensure seamless integration and delivery. Leadership & People Management: Guide, mentor, and motivate diverse teams, fostering a culture of innovation and excellence. Tool & Process Management: Utilize tools like JIRA, Confluence, MPP, and Smartsheet to drive project visibility and efficiency. Engineering & ALM Best Practices: Ensure adherence to engineering and Application Lifecycle Management (ALM) best practices for continuous improvement. Required Skills & Qualifications 12+ years of experience in IT project and delivery management. 5+ years of project management experience, preferably in Managed Services and Fixed-Price engagements. Proven experience in large-scale program implementation. Strong expertise in Agile/SCRUM methodologies and project execution. Excellent problem-solving, analytical, and risk management skills. Outstanding communication, articulation, and presentation skills. Experience in multi-team and cross-functional coordination, especially across different time zones. Good-to-Have Skills Exposure to AWS services (S3, Glue, Lambda, SNS, RDS MySQL, Redshift, Snowflake). Knowledge of Python, Jinja, Angular, APIs, Power BI, SageMaker, Flutter Dart. (ref:hirist.tech) Show more Show less
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a highly skilled Generative AI Developer with expertise in Large Language Models (LLMs) to join our AI/ML innovation team. The ideal candidate will be responsible for building, fine-tuning, deploying, and optimizing generative AI models to solve complex real-world problems. You will collaborate with data scientists, machine learning engineers, product managers, and software developers to drive forward next-generation AI-powered solutions.

Responsibilities:
Design and develop AI-powered applications using large language models (LLMs) such as GPT, LLaMA, Mistral, Claude, or similar. Fine-tune pre-trained LLMs for specific tasks (e.g., text summarization, Q&A systems, chatbots, semantic search). Build and integrate LLM-based APIs into products and systems. Optimize inference performance, latency, and throughput of LLMs for deployment at scale. Conduct prompt engineering and design strategies for prompt optimization and output consistency. Develop evaluation frameworks to benchmark model quality, response accuracy, safety, and bias. Manage training data pipelines and ensure data privacy, compliance, and quality standards. Experiment with open-source LLM frameworks and contribute to internal libraries and tools. Collaborate with MLOps teams to automate deployment, CI/CD pipelines, and monitoring of LLM solutions. Stay up to date with state-of-the-art advancements in generative AI, NLP, and foundation models.

Skills Required:
LLMs & Transformers: Deep understanding of transformer-based architectures (e.g., GPT, BERT, T5, LLaMA, Falcon). Model Training/Fine-Tuning: Hands-on experience with training/fine-tuning large models using libraries such as Hugging Face Transformers, DeepSpeed, LoRA, PEFT. Prompt Engineering: Expertise in designing, testing, and refining prompts for specific tasks and outcomes. Python: Strong proficiency in Python with experience in ML and NLP libraries. Frameworks: Experience with PyTorch, TensorFlow, Hugging Face, LangChain, or similar frameworks. MLOps: Familiarity with tools like MLflow, Kubeflow, Airflow, or SageMaker for model lifecycle management. Data Handling: Experience with data pipelines, preprocessing, and working with structured and unstructured data.

Desirable Skills:
Deployment: Knowledge of deploying LLMs on cloud platforms like AWS, GCP, Azure, or edge devices. Vector Databases: Experience with FAISS, Pinecone, Weaviate, or ChromaDB for semantic search applications. LLM APIs: Experience integrating with APIs like OpenAI, Cohere, Anthropic, Mistral, etc. Containerization: Docker, Kubernetes, and cloud-native services for scalable model deployment. Security & Ethics: Understanding of LLM security, hallucination handling, and responsible AI.

Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 2-4 years of experience in ML/NLP roles, with at least 1-2 years specifically focused on generative AI and LLMs. Prior experience working in a research or product-driven AI team is a plus. Strong communication skills to explain technical concepts and findings.

Soft Skills:
Analytical thinker with a passion for solving complex problems. Team player who thrives in cross-functional settings. Self-driven, curious, and always eager to learn the latest advancements in AI. Ability to work independently and deliver high-quality solutions under tight deadlines. (ref:hirist.tech)
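As a sketch of the LoRA/PEFT-style fine-tuning setup this posting refers to, the snippet below wraps a small causal language model with LoRA adapters. The base model and hyperparameters are illustrative assumptions, not a prescribed configuration.

```python
# Illustrative LoRA setup with Hugging Face PEFT; model and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_name = "distilgpt2"  # small model chosen only to keep the example lightweight
tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(base_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2-style blocks
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```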
Posted 1 month ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Technical Project Manager
IT management professional with 10+ years of experience.

Responsibilities
5+ years of project management experience. Delivery executive experience (Managed Services, Fixed Price). Delivery management experience in executing projects extensively using the SCRUM methodology. Planning, monitoring, and risk management for multiple data and data science programs. Participating in design discussions to capture risks and dependencies. Reporting and presenting program progress to senior management. Customer presentation, articulation, and communication skills. Coordinating and integrating with multiple teams in different time zones. Leading, guiding, managing, and motivating a diverse team. Should have handled large and complex program implementations. Knowledge of working with tools like JIRA, Confluence, MPP, Smartsheet, etc. Knowledge of engineering and ALM best practices.

Good To Have
Knowledge of AWS S3, Glue, Lambda, SNS, etc., Python, Jinja, Angular, APIs, Power BI, SageMaker, Flutter Dart, RDS MySQL, Redshift, Snowflake. (ref:hirist.tech)
Posted 1 month ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
Remote
Overview Of Job Role
We are looking for a skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have expertise in AWS, CI/CD pipelines, and Terraform, with a passion for building and optimizing scalable, reliable, and secure infrastructure. This role involves close collaboration with development, QA, and operations teams to streamline deployment processes and enhance system reliability.

Leadership & Strategy:
Lead and mentor a team of DevOps engineers, fostering a culture of automation, innovation, and continuous improvement. Define and implement DevOps strategies aligned with business objectives to enhance scalability, security, and reliability. Collaborate with cross-functional teams, including software engineering, security, MLOps, and infrastructure teams, to drive DevOps best practices. Establish KPIs and performance metrics for DevOps operations, ensuring optimal system performance, cost efficiency, and high availability. Advocate for CPU throttling, auto-scaling, and workload optimization strategies to improve system efficiency and reduce costs. Drive MLOps adoption, integrating machine learning workflows into CI/CD pipelines and cloud infrastructure. Ensure compliance with ISO 27001 standards, implementing security controls and risk management measures.

Infrastructure & Automation:
Oversee the design, implementation, and management of scalable, secure, and resilient infrastructure on AWS. Lead the adoption of Infrastructure as Code (IaC) using Terraform, CloudFormation, and configuration management tools like Ansible or Chef. Spearhead automation efforts for infrastructure provisioning, deployment, and monitoring to reduce manual overhead and improve efficiency. Ensure high availability and disaster recovery strategies, leveraging multi-region architectures and failover mechanisms. Manage Kubernetes (or AWS ECS/EKS) clusters, optimizing container orchestration for large-scale applications. Drive cost optimization initiatives, implementing intelligent cloud resource allocation strategies.

CI/CD & Observability:
Architect and oversee CI/CD pipelines, ensuring seamless automation of application builds, testing, and deployments. Enhance observability and monitoring by implementing tools like CloudWatch, Prometheus, Grafana, ELK Stack, or Datadog. Develop robust logging, alerting, and anomaly detection mechanisms to ensure proactive issue resolution.

Security & Compliance (ISO 27001 Implementation):
Lead the implementation and enforcement of ISO 27001 security standards, ensuring compliance with information security policies and regulatory requirements. Develop and maintain an Information Security Management System (ISMS) to align with ISO 27001 guidelines. Implement secure access controls, encryption, IAM policies, and network security measures to safeguard infrastructure. Conduct risk assessments, vulnerability management, and security audits to identify and mitigate threats. Ensure security best practices are embedded into all DevOps workflows, following DevSecOps principles. Work closely with auditors and compliance teams to maintain SOC2, GDPR, and other regulatory frameworks.

Skills and Qualifications:
5+ years of experience in DevOps, cloud infrastructure, and automation, with at least 3+ years in a managerial or leadership role. Proven experience managing AWS cloud infrastructure at scale, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation. Expertise in Terraform and Infrastructure as Code (IaC) principles. Strong background in CI/CD pipeline automation with tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI. Hands-on experience with Docker and Kubernetes (or AWS ECS/EKS) for container orchestration. Experience in CPU throttling, auto-scaling, and performance optimization for cloud-based applications. Strong knowledge of Linux/Unix systems, shell scripting, and network configurations. Proven experience with ISO 27001 implementation, ISMS development, and security risk management. Familiarity with MLOps frameworks like Kubeflow, MLflow, or SageMaker, and integrating ML pipelines into DevOps workflows. Deep understanding of observability tools such as ELK Stack, Grafana, Prometheus, or Datadog. Strong stakeholder management, communication, and ability to collaborate across teams. Experience in regulatory compliance, including SOC2, ISO 27001, and GDPR.

Attributes:
Strong interpersonal and communication skills; an effective team player, able to work with individuals at all levels within the organization and build remote relationships. Excellent prioritization skills, the ability to work well under pressure, and the ability to multi-task.

Qualification:
Any technical degree; M.Tech (CS) or B.Tech (CS) will be preferred. (ref:hirist.tech)
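Compliance automation of the kind described above (for example, verifying encryption controls as part of ISO 27001 work) can be scripted against AWS APIs. The sketch below lists S3 buckets without default server-side encryption using boto3; it assumes AWS credentials are already configured, and the check itself is only one illustrative control.

```python
# Illustrative compliance check: flag S3 buckets that lack default server-side encryption.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption configured")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: NO default encryption -- review against policy")
        else:
            raise
```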
Posted 1 month ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities
Design, develop, and optimize large-scale data pipelines using PySpark and Apache Spark. Build scalable and robust ETL workflows leveraging AWS services such as EMR, S3, Lambda, and Glue. Collaborate with data scientists, analysts, and other engineers to gather requirements and deliver clean, well-structured data solutions. Integrate data from various sources, ensuring high data quality, consistency, and reliability. Manage and schedule workflows using Apache Airflow. Work on ML model deployment pipelines using tools like SageMaker and Anaconda. Write efficient and optimized SQL queries for data processing and validation. Develop and maintain technical documentation for data pipelines and architecture. Participate in Agile ceremonies, sprint planning, and code reviews. Troubleshoot and resolve issues in production environments with minimal supervision.

Required Skills And Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 6-8 years of experience in data engineering with a strong focus on: Python, PySpark, SQL, and AWS (EMR, EC2, S3, Lambda, Glue). Experience in developing and orchestrating pipelines using Apache Airflow. Familiarity with SageMaker for ML deployment and Anaconda for environment management. Proficiency in working with large datasets and optimizing Spark jobs. Experience in building data lakes and data warehouses on AWS. Strong understanding of data governance, data quality, and data lineage. Excellent documentation and communication skills. Comfortable working in a fast-paced Agile environment. Experience with Kafka or other real-time streaming platforms. Familiarity with DevOps practices and tools (e.g., Terraform, CloudFormation). Exposure to NoSQL databases such as DynamoDB or MongoDB. Knowledge of data security and compliance standards (GDPR, HIPAA).

Work with cutting-edge technologies in a collaborative and innovative environment. Opportunity to influence large-scale data infrastructure. Competitive salary, benefits, and professional development support. Be part of a growing team solving real-world data challenges. (ref:hirist.tech)
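A minimal PySpark ETL step of the kind this posting describes might look like the sketch below; the S3 paths, column names, and partitioning scheme are assumptions for illustration.

```python
# Illustrative PySpark ETL: read raw CSV from S3, clean it, and write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")  # hypothetical path

clean = (
    raw.dropDuplicates(["order_id"])                      # hypothetical key column
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"  # hypothetical path
)
```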
Posted 1 month ago
8.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
Remote
Company Description Assent is the leading solution for supply chain sustainability tailored for the world’s top-tier, sustainability-driven manufacturers. Hidden risks riddle supply chains, many of which weren't built with sustainability in mind. That's where we step in. With insights from experts, Assent is the tool manufacturers trust for comprehensive sustainability. We are proud to announce that Assent has crossed the US$100M ARR milestone, granting us Centaur Status. This accomplishment, reached just 8 years following our Series A, makes us the first and only Certified B Corporation in North America's SaaS sustainability industry to celebrate this milestone. Our journey from $5 million to US$100M ARR in just eight years has been marked by significant growth and achievements. With our $350 million US funding led by Vista Equity Partners, we're poised for even greater expansion and are on the lookout for outstanding team members to join our mission. Hybrid Work Model At Assent, we proudly embrace a remote-first work model, valuing the flexibility and autonomy it provides our team. We also acknowledge the intangible benefits of occasional in-person workdays. For team members situated within 50 kms/31 miles of our five global offices in Ottawa, Eldoret, Penang, Columbus, Pune and Amsterdam, you can expect to come into the office one day a week. Similarly, those near our co-working spaces in Nairobi and Toronto are encouraged to work onsite once a month. Job Description We are seeking a Senior Data Scientist with deep expertise in Natural Language Processing (NLP) and Large Language Model (LLM) fine-tuning to join our AI and Machine Learning team. This role is ideal for a highly skilled individual with a PhD or Masters in Machine Learning, AI, or a related field, coupled with industry experience in developing and deploying NLP-driven AI solutions. As a Senior Data Scientist, you will lead the development, tuning, and maintenance of cutting-edge AI models, mentor junior data scientists, and drive innovation in AI-powered solutions. You will collaborate closely with cross-functional teams, transforming complex business challenges into intelligent, data-driven products and solutions. Additionally, you will play a key role in analyzing large-scale datasets, uncovering insights, and ensuring data-driven decision-making in our AI initiatives. The Senior Data Scientist is a data-oriented, out-of-the-box thinker who is passionate about data, machine learning, understanding the business, and driving business value. Lead the research, development, and fine-tuning of state-of-the-art LLMs and NLP models for real-world applications. Perform in-depth data analysis to extract actionable insights, improve model performance, and inform AI strategy. Design, implement, and evaluate LLM based systems to ensure model performance, efficiency, and scalability. Mentor and coach junior data scientists, fostering best practices in NLP, deep learning, and MLOps. Deploy and monitor models in production, ensuring robust performance, fairness, and explainability in AI applications. Stay ahead of advancements in NLP, generative AI, and ethical AI practices, incorporating them into our solutions. Ensure compliance with Responsible AI principles, aligning with industry standards and regulations such as the EU AI Act and Canada’s Voluntary Code of Conduct on Responsible AI. Collaborate with engineering and product teams to integrate AI-driven features into SaaS products. 
Be curious and not afraid to try unconventional ideas to find solutions to difficult problems. Apply engineering principles to proactively identify issues, develop solutions, and recommend improvements. Be self-motivated and highly proactive at exploring new technologies. Find creative solutions to challenges involving data that is difficult to obtain, complex, or ambiguous. Manage multiple concurrent projects, priorities, and timelines.

Qualifications
PhD (preferred) or Master's in Machine Learning, AI, NLP, or a related field, with a strong publication record in top-tier conferences and journals. Industry experience (2+ years for PhD, 5+ years for Master's) in building and deploying NLP/LLM solutions at scale. Proven ability to analyze large datasets, extract meaningful insights, and drive data-informed decision-making. Strong expertise in preparing data for fine-tuning and optimizing LLMs. Solid understanding of data engineering concepts, including data pipelines, feature engineering, and vector databases. Proficiency in deep learning frameworks (e.g., PyTorch, TensorFlow) and NLP libraries (e.g., Hugging Face Transformers, spaCy). Solid working knowledge of AWS systems and services; comfort working with SageMaker, Bedrock, EC2, S3, Lambda, and Terraform. Familiarity with MLOps best practices, including model versioning, monitoring, and CI/CD pipelines. Excellent organizational skills and the ability to manage multiple priorities and timelines.

Additional Information
Life at Assent
Wellness: We believe that you and your family's well-being is important. As a result, we offer vacation time that increases with tenure, comprehensive benefits packages (details vary by country), life leave days and more.
Financial Benefits: It's not all about the money – well, it's a little about the money. We understand that financial health is important and we offer a competitive base salary, a corporate bonus program, retirement savings options and more.
Life at Assent: There is purpose beyond your work. We provide our team members with flexible work options, volunteer days and opportunities to get involved in corporate giving initiatives.
Lifelong Learning: At Assent, curiosity is not only valued but encouraged. You will receive professional development days that are available to you the day you start.

At Assent, we are committed to growing and sustaining an environment where our team members feel included, valued, and heard. Our diversity and equal opportunity practices are guided and championed by our Diversity and Inclusion Working Group and our Employee Resource Groups (ERGs). Our commitment to diversity, equity and inclusion includes recruiting and retaining team members from diverse backgrounds and experiences, and fostering a culture of belonging where all team members are included, treated with dignity and respect, promoted on their merits, and placed in positions to contribute to business success. If you require assistance or accommodation throughout any part of the interview and selection process, please contact talent@assent.com and we will be happy to help.
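Since this role emphasizes preparing data for LLM fine-tuning, here is a small sketch of turning prompt/response pairs into a tokenized Hugging Face dataset. The prompt template, base model, field names, and example records are assumptions for illustration only.

```python
# Illustrative fine-tuning data preparation with Hugging Face datasets and a tokenizer.
from datasets import Dataset
from transformers import AutoTokenizer

records = [  # hypothetical supervision pairs
    {"prompt": "Summarize the supplier's declaration.", "response": "The supplier certifies..."},
    {"prompt": "List the regulated substances mentioned.", "response": "Lead, cadmium..."},
]

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")  # small model to keep the example light
tokenizer.pad_token = tokenizer.eos_token

dataset = Dataset.from_list(records)

def format_and_tokenize(example):
    # Simple instruction-style template; real projects usually define their own format.
    text = f"### Instruction:\n{example['prompt']}\n### Response:\n{example['response']}"
    return tokenizer(text, truncation=True, max_length=512, padding="max_length")

tokenized = dataset.map(format_and_tokenize, remove_columns=dataset.column_names)
print(tokenized)
```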
Posted 1 month ago
1.0 - 3.0 years
4 - 8 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member that assists in the design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience
Master's degree and 1 to 3 years of experience in Computer Science, IT, or a related field OR Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or a related field OR Diploma and 7 to 9 years of experience in Computer Science, IT, or a related field.

Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning of big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets.

Preferred Qualifications:
Good-to-Have Skills: Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.

Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Certified Data Scientist (preferred on Databricks or cloud environments). Machine Learning Certification (preferred on Databricks or cloud environments).

Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.
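Data-quality checks of the kind this role calls for can be expressed directly in PySpark; the sketch below counts nulls per column and duplicate keys. The table path and key column are assumptions for illustration.

```python
# Illustrative data-quality checks in PySpark: null counts per column and duplicate keys.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/claims/")  # hypothetical dataset

total_rows = df.count()
null_counts = df.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in df.columns]
).collect()[0].asDict()
duplicate_rows = total_rows - df.dropDuplicates(["claim_id"]).count()  # hypothetical key column

print(f"rows={total_rows}, duplicates on claim_id={duplicate_rows}")
print("null counts per column:", null_counts)
```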
Posted 1 month ago
0 years
0 Lacs
India
Remote
AI Opportunities with Soul AI’s Expert Community! Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.

Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2–12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations – Remote | Onsite | Hyderabad/Bangalore

Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in cloud (AWS/GCP/Azure)

Must-Have Skills:
1. Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
2. Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
3. Expertise in monitoring tools (MLflow, Prometheus, Grafana)
4. Knowledge of distributed data processing (Spark, Kafka)
(Bonus: Experience in A/B testing, canary deployments, serverless ML)

Next Steps:
1. Register on Soul AI’s website
2. Get shortlisted & complete screening rounds
3. Join our Expert Community and get matched with top AI projects

Don’t just find a job. Build your future in AI with Soul AI!
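As a concrete example of the MLflow-based tracking these engagements involve, the sketch below logs parameters, a metric, and a model for a toy classifier. The experiment name, hyperparameters, and dataset are assumptions for illustration.

```python
# Illustrative MLflow tracking run on a toy classification problem.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-classifier")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    mlflow.log_params(params)
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # stores the fitted model as a run artifact
```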
Posted 1 month ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

ML Ops Engineer (Senior Consultant)

Key Responsibilities:
Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.

Required Skills and Experience:
Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.

Beneficial Skills and Experience:
Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
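Once a model is hosted on a SageMaker endpoint, as this role requires, calling it from Python is a thin boto3 wrapper. The endpoint name and payload schema below are assumptions for illustration, not an actual deployment.

```python
# Illustrative call to an already-deployed SageMaker endpoint via the runtime API.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # hypothetical input schema
response = runtime.invoke_endpoint(
    EndpointName="demo-model-prod",   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```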
Posted 1 month ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

ML Ops Engineer (Senior Consultant)

Key Responsibilities:
Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.

Required Skills and Experience:
Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.

Beneficial Skills and Experience:
Experience with containerization and orchestration tools (Docker, Kubernetes). Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
5.0 years
6 - 8 Lacs
Hyderābād
Remote
Your opportunity
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will encompass both operational and engineering aspects, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.

What you'll do
In this role, you'll have the opportunity to contribute significantly to our machine learning initiatives, shaping the future of AI-driven solutions in various domains. If you're passionate about pushing the boundaries of what's possible in machine learning and ready to take on diverse challenges, we encourage you to apply and join us in our journey towards innovation.

This role requires
Proficiency in software engineering design practices. Experience working with transformer models and text embeddings. Proven track record of deploying and managing ML models in production environments. Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, HuggingFace Transformers, and SpaCy. 5+ years of developing production-grade applications in Python. Proficiency in Kubernetes and containers. Familiarity with concepts/libraries such as sklearn, kubeflow, argo, and seldon. Expertise in Python, C++, Kotlin, or similar programming languages. Experience designing, developing, and testing scalable distributed systems. Familiarity with message broker systems (e.g., Kafka, RabbitMQ). Knowledge of application instrumentation and monitoring practices. Experience with ML workflow management tools, like Airflow, SageMaker, etc. Fine-tuning generative AI models to enhance performance. Designing AI agents for conversational AI applications. Experimenting with new techniques to develop models for observability use cases. Building and maintaining inference pipelines for efficient model deployment. Managing deployment and model versioning pipelines for seamless updates. Developing tooling to continuously validate models in production environments.

Bonus points if you have
Familiarity with the AWS ecosystem. Past projects involving the construction of agentic language chains.

Please note that visa sponsorship is not available for this position.

Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics’ different backgrounds and abilities, and recognize the different paths they took to reach us – including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We’re looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.
Our hiring process In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers’ means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
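The transformer-and-embeddings experience this role asks for often comes down to producing sentence vectors and comparing them. The sketch below computes mean-pooled embeddings with cosine similarity; the model name is a commonly used public checkpoint chosen only for illustration, and the sentences are made-up observability snippets.

```python
# Illustrative sentence embeddings with mean pooling and cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/all-MiniLM-L6-v2"  # illustrative public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["error rate spiked after the deploy", "latency increased following the release"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**batch)

# Mean-pool token embeddings, ignoring padding positions, then L2-normalize.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = torch.nn.functional.normalize(embeddings, dim=1)

print("cosine similarity:", float(embeddings[0] @ embeddings[1]))
```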
Posted 1 month ago
4.0 years
11 Lacs
Mohali
On-site
Skill Sets:
Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow). Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models. Strong experience in NLP, fine-tuning transformer models, and dataset preparation. Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (SageMaker, Vertex AI). Experience in containerization (Docker, Kubernetes) and CI/CD pipelines. Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning). Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection.

Roles and Responsibilities:
Design and implement end-to-end ML pipelines from data ingestion to production. Develop, fine-tune, and optimize ML models, ensuring high performance and scalability. Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.). Automate model retraining, monitoring, and drift detection. Collaborate with engineering teams for seamless ML integration. Mentor junior team members and enforce best practices.

Job Type: Full-time
Pay: Up to ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
Application Question(s): How soon can you join us?
Experience: Total: 4 years (Required); Data Science roles: 3 years (Required)
Work Location: In person
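Hyperparameter tuning and metric comparison of the kind listed above can be sketched with scikit-learn's GridSearchCV; the estimator, parameter grid, and scoring choice are assumptions for illustration.

```python
# Illustrative hyperparameter search scored on F1, as mentioned in the posting.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=15, random_state=0)

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    scoring="f1",
    cv=5,
)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV F1 :", round(search.best_score_, 3))
```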
Posted 1 month ago