
1552 Sagemaker Jobs - Page 13

Set up a Job Alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Greater Kolkata Area

On-site

Join our Team

About this opportunity:
We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role: we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You'll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale.

What you will do:
- Design and build robust ML pipelines and services for training, validation, and model deployment.
- Work closely with data scientists, solution architects, DevOps engineers, and others to align components and pipelines with project goals and requirements; communicate any deviation from the target architecture.
- Cloud Integration: ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability.
- Build reusable infrastructure components using best practices in DevOps and MLOps.
- Security and Compliance: adhere to security standards and regulatory compliance, particularly when handling confidential and sensitive data.
- Network Security: design an optimal network plan for the given cloud infrastructure under the E// network security guidelines.
- Monitor model performance in production and implement drift detection and retraining pipelines.
- Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration).
- Documentation and Knowledge Sharing: create detailed documentation and guidelines for the use and modification of the developed components.

The skills you bring:
- Strong programming skills in Python.
- Deep experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost).
- Hands-on experience with MLOps tools such as MLflow, Airflow, TFX, Kubeflow, or BentoML.
- Experience deploying models using Docker and Kubernetes.
- Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI).
- Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL).
- Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm).
- Experience with monitoring/logging (Prometheus, Grafana, ELK).

Good-to-have skills:
- Experience with feature stores (Feast, Tecton) and experiment tracking platforms.
- Knowledge of edge/embedded ML, model quantization, and optimization.
- Familiarity with model governance, security, and compliance in ML systems.
- Exposure to on-device ML or streaming ML use cases.
- Experience leading cross-functional initiatives or mentoring junior engineers.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that collaborating with people with different experiences drives innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 770160
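The drift detection and retraining work this listing describes is often built around a distribution-shift metric such as the Population Stability Index. A minimal sketch in plain Python follows; the bin counts and the 0.25 retraining threshold are illustrative assumptions, not any specific employer's pipeline:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected/actual: raw counts per bin from the training (reference)
    and live (production) feature distributions.
    """
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

# Hypothetical bin counts for one model feature.
reference = [100, 200, 400, 200, 100]
live = [120, 210, 380, 190, 100]

drift = psi(reference, live)
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 retrain.
needs_retraining = drift > 0.25
```

In a production setting this check would run on a schedule against logged inference features, with the score exported to a monitoring stack such as the Prometheus/Grafana tools the listing names.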

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Company
Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease, and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working, and playing remarkable, giving us back our most valuable asset: time.

Overview:
We are looking for a highly motivated and analytical Data Scientist to join our growing data team. You will play a key role in extracting insights from large datasets, building predictive models, and supporting data-driven decision-making across departments.

Location: Bangalore
Experience: 3+ years

Key Responsibilities:
- Collect, process, and analyze large datasets from multiple sources.
- Build and deploy machine learning models to solve business problems.
- Design and implement A/B tests and statistical analyses.
- Collaborate with cross-functional teams (product, engineering, marketing) to define analytics requirements.
- Communicate complex data insights in a clear and actionable manner to stakeholders.
- Develop dashboards and visualizations to monitor key metrics.
- Stay current with the latest trends and technologies in data science and AI.

Required Skills & Qualifications:
- Bachelor's/Master's degree in Computer Science, Mathematics, Statistics, or a related field.
- Proven experience (2+ years) as a Data Scientist or Data Analyst.
- Strong knowledge of Python/R and SQL.
- Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Experience with big data tools (e.g., Spark, Hadoop) is a plus.
- Familiarity with data visualization tools.
- Strong analytical, problem-solving, and communication skills.

Preferred:
- Experience with cloud platforms, preferably AWS (S3, SageMaker, Airflow, etc.).
- Strong SQL skills, with experience in Snowflake, MySQL, and PostgreSQL.
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker).

When you join Metropolis, you'll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows.

Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.
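The A/B testing duties above commonly reduce to a two-proportion z-test on conversion rates. A standard-library sketch with hypothetical experiment counts (the 1.96 cutoff is the usual two-sided 5% significance level):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: control converts 200/5000, variant 260/5000.
z = two_proportion_z(200, 5000, 260, 5000)
significant = abs(z) > 1.96  # two-sided test at the 5% level
```

In practice a library such as SciPy or statsmodels would be used instead of hand-rolling the statistic; this sketch only shows the arithmetic behind the test.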

Posted 1 week ago

Apply


12.0 years

0 Lacs

Mysore, Karnataka, India

On-site

Job Title: Solution Architect – Application & AI Engineering
Experience: 12+ years (minimum 8 years of hands-on experience)
Location: Mysuru, Karnataka
Employment Type: Full-time

About the Role
We are seeking an experienced and forward-thinking Solution Architect with a strong background in application engineering and AI/ML systems. The ideal candidate has deep technical expertise and hands-on experience in architecting scalable and secure solutions across web, API, database, and cloud ecosystems (AWS or Azure). You will lead end-to-end architecture design efforts, transforming business requirements into robust, scalable, and secure digital products while ensuring modern AI-driven capabilities are leveraged where applicable.

Key Responsibilities
- Design and deliver scalable application architectures across microservices, APIs, and backend databases.
- Collaborate with cross-functional teams to define solution blueprints combining application engineering and AI/ML requirements.
- Architect and lead implementation strategies for deploying applications on AWS or Azure using services such as ECS, AKS, Lambda, API Gateway, Azure App Services, Cosmos DB, etc.
- Guide engineering teams in application modernization, including monolith-to-microservices transitions, containerization, and serverless.
- Define and enforce best practices around security, performance, and maintainability of solutions.
- Integrate AI/ML solutions (e.g., inference endpoints, custom LLMs, or MLOps pipelines) within broader enterprise applications.
- Evaluate and recommend third-party tools, frameworks, or platforms for optimizing application performance and AI integration.
- Support pre-sales activities and client engagements with architectural diagrams, PoCs, and strategy sessions.
- Mentor engineering teams and participate in code/design reviews when necessary.

Required Skills & Experience
- 12+ years of total experience in software/application engineering.
- 8+ years of hands-on experience in designing and developing distributed applications.
- Strong knowledge of backend technologies such as Python, Node.js, or .NET, and API-first design (REST/GraphQL).
- Strong understanding of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.).
- Experience with DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform, CloudFormation, etc.).
- Proven experience in architecting and deploying cloud-native applications on AWS and/or Azure.
- Experience integrating AI/ML models into production systems, including data pipelines, model inference, and MLOps.
- Deep understanding of security, authentication (OAuth, JWT), and compliance in cloud-based applications.
- Familiarity with LLMs, NLP, or generative AI is a strong advantage.

Preferred Qualifications
- Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
- Exposure to AI/ML platforms such as Azure AI Studio, Amazon Bedrock, SageMaker, or Hugging Face.
- Understanding of multi-tenant architecture and SaaS platforms.
- Experience working in Agile/DevOps teams and with tools like Jira, Confluence, and GitHub/GitLab.

Why Join Us?
- Work on innovative and enterprise-scale AI-powered applications.
- Influence product and architecture decisions with a long-term strategic lens.
- Collaborate with forward-thinking and cross-disciplinary teams.
- Opportunity to lead from the front and shape the engineering roadmap.
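The JWT authentication this role calls for can be sketched with the standard library alone. This is a minimal HS256 sign/verify illustration with a made-up secret; a real deployment would also validate expiry and claims, typically via an established library rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Create an HS256-signed token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user-42"}, b"demo-secret")
valid = verify_jwt(token, b"demo-secret")
wrong_key = verify_jwt(token, b"other-secret")
```

The constant-time comparison (`hmac.compare_digest`) matters: naive string equality leaks timing information an attacker can exploit.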

Posted 1 week ago

Apply

0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Role Title: AI Platform Engineer
Location: Bangalore (in person in office when required)
Part of the GenAI COE Team

Key Responsibilities

Platform Development and Evangelism:
- Build scalable, customer-facing AI platforms.
- Evangelize the platform with customers and internal stakeholders.
- Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
- Design ML pipelines for experiment management, model management, feature management, and model retraining.
- Implement A/B testing of models.
- Design APIs for model inferencing at scale.
- Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
- Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures.
- Expertise in distributed training and serving of large language models.
- Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
- Proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results.
- Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
- Extensive knowledge of different LLM models, with insight into the applicability of each based on use cases.
- Proven experience delivering end-to-end solutions from engineering to production for specific customer use cases.

DevOps and LLMOps Proficiency:
- Proven expertise in DevOps and LLMOps practices.
- Knowledgeable in Kubernetes, Docker, and container orchestration.
- Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix
- LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- Databases/data warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud: AWS/Azure/GCP
- DevOps (knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
- Cloud certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
- Proficient in Python, SQL, and JavaScript

Email: diksha.singh@aptita.com
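A/B testing of models at the inference API layer is usually implemented as a deterministic traffic splitter, so the same caller always hits the same variant. A small sketch follows; the model names and the 90/10 split are hypothetical:

```python
import hashlib

def route_model(user_id: str, variants=(("llm-prod", 90), ("llm-candidate", 10))):
    """Deterministically assign a caller to a model variant.

    Hashing the id into a 0-99 bucket keeps assignment stable across
    repeated requests, which keeps A/B metrics uncontaminated.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    cumulative = 0
    for name, weight in variants:
        cumulative += weight
        if bucket < cumulative:
            return name
    return variants[-1][0]  # fallback if weights sum to less than 100

assignments = {u: route_model(u) for u in ("alice", "bob", "carol")}
stable = all(route_model(u) == v for u, v in assignments.items())
```

A production router would also log the assignment alongside each request so per-variant latency and quality metrics can be compared downstream.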

Posted 1 week ago

Apply

2.0 years

0 Lacs

Surat, Gujarat, India

Remote

About devx
At devx, we help some of India's most forward-looking brands unlock growth through AI-powered and cloud-native solutions, in collaboration with AWS. We're a fast-growing consultancy focused on solving real-world business problems with cutting-edge technology.

Role Overview
We are looking for a dynamic and customer-centric AWS Solutions Architect to join our team. In this role, you'll work directly with clients to design scalable, secure, and cost-effective cloud architectures that solve high-impact business challenges. You'll bridge the gap between business needs and technical execution, becoming a trusted advisor to our clients.

Key Responsibilities
- Engage with clients to understand their business objectives and translate them into cloud-based architectural solutions.
- Design, implement, and document AWS architectures with a strong focus on scalability, security, and performance.
- Create solution blueprints and work closely with engineering teams to ensure successful implementation.
- Conduct workshops, presentations, and technical deep-dives with client teams.
- Stay updated on the latest AWS offerings and best practices, and incorporate them into solution designs.
- Collaborate with cross-functional teams including sales, product, and engineering to deliver end-to-end solutions.

What we are looking for:
- 2+ years of experience designing and implementing solutions on AWS.
- Strong understanding of core AWS services such as EC2, S3, Lambda, RDS, API Gateway, IAM, and VPC.
- Strong understanding of core AI/ML and data services such as Bedrock, SageMaker, Glue, Athena, and Kinesis.
- Strong understanding of core DevOps services such as ECS, EKS, CI/CD pipelines, Fargate, and Lambda.
- Excellent communication and presentation skills in English, both written and verbal.
- Comfortable in client-facing roles, with the ability to lead technical discussions and build credibility with stakeholders.
- Ability to balance technical depth with business context and articulate value to decision-makers.

Location: Surat, Gujarat. No WFH. Only apply if you are open to relocating to Surat, Gujarat.

Posted 1 week ago

Apply


6.0 years

0 Lacs

India

Remote

Location: Remote / Hybrid
Experience: 2–6 years (or strong project/internship experience)
Employment Type: Full-Time
Department: AI & Software Systems

Key Responsibilities
- Design and maintain end-to-end MLOps pipelines, from data ingestion to model deployment and monitoring.
- Containerize ML models and services using Docker for scalable deployment.
- Develop and deploy APIs using FastAPI to serve real-time inference for object detection, segmentation, and mapping tasks.
- Automate workflows using CI/CD tools like GitHub Actions or Jenkins.
- Manage cloud infrastructure on AWS: EC2, S3, Lambda, SageMaker, CloudWatch, etc.
- Collaborate with AI and GIS teams to integrate ML outputs into mapping dashboards.
- Implement model versioning using DVC/Git, and maintain structured experiment tracking using MLflow or Weights & Biases.
- Ensure secure, scalable, and cost-efficient model hosting and API access.

Required Skills
- Programming: Python (must), Bash/shell scripting
- ML frameworks: PyTorch, TensorFlow, OpenCV
- MLOps tools: MLflow, DVC, GitHub Actions, Docker (must), Kubernetes (preferred)
- Cloud platforms: AWS (EC2, S3, SageMaker, IAM, Lambda)
- API development: FastAPI (must), Flask (optional)
- Data handling: NumPy, Pandas, GDAL, Rasterio
- Monitoring: Prometheus, Grafana, AWS CloudWatch

Preferred Experience
- Hands-on with AI/ML models for image segmentation and object detection (YOLOv8, U-Net, Mask R-CNN).
- Experience with geospatial datasets (satellite imagery, drone footage, LiDAR).
- Familiarity with PostGIS, QGIS, or spatial database management.
- Exposure to DevOps principles and container orchestration (Kubernetes/EKS).

Soft Skills
- Problem-solving mindset with a system-design approach.
- Clear communication across AI, software, and domain teams.
- Ownership of the full AI deployment lifecycle.

Education
- Bachelor's or Master's in Computer Science, Data Science, AI, or equivalent.
- Certifications in AWS, MLOps, or Docker/Kubernetes (bonus).
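The model-versioning responsibility above rests on content-addressed storage, the same idea DVC builds on: derive the version id from a hash of the artifact itself, so identical weights always get identical versions. A toy sketch (not DVC's real API; the registry dict stands in for remote storage):

```python
import hashlib

def register_model(registry: dict, name: str, weights: bytes, metrics: dict) -> str:
    """Register a model artifact under a content-derived version id."""
    version = hashlib.sha256(weights).hexdigest()[:12]
    registry[f"{name}:{version}"] = {"metrics": metrics, "size": len(weights)}
    return version

registry = {}
# Hypothetical serialized weights; a real pipeline would hash the file on disk.
v1 = register_model(registry, "segmenter", b"fake-weights-v1", {"iou": 0.81})
v2 = register_model(registry, "segmenter", b"fake-weights-v2", {"iou": 0.84})
# Re-registering identical weights yields the same version id (idempotent).
same = register_model(registry, "segmenter", b"fake-weights-v1", {"iou": 0.81})
```

The payoff of content addressing is reproducibility: a version id in an experiment log pins the exact artifact, and duplicate uploads are deduplicated for free.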

Posted 1 week ago

Apply

8.0 - 11.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Roles & Responsibilities

Key Responsibilities
- Design, develop, and optimize machine learning and deep learning models using Python and libraries such as TensorFlow, PyTorch, and scikit-learn.
- Work with large language models (e.g., GPT, BERT, T5) to solve NLP tasks such as semantic search, summarization, chatbots, conversational agents, and document intelligence.
- Lead the development of scalable AI solutions, including data preprocessing, embedding generation, vector search, and prompt orchestration.
- Build and manage vector databases and metadata stores to support high-performance semantic retrieval and contextual memory.
- Implement caching, queuing, and background processing systems to ensure performance and reliability at scale.
- Conduct independent R&D to implement cutting-edge AI methodologies, evaluate open-source innovations, and prototype experimental solutions.
- Apply predictive analytics and statistical techniques to mine actionable insights from structured and unstructured data.
- Build and maintain robust data pipelines and infrastructure for end-to-end ML model training, testing, and deployment.
- Collaborate with cross-functional teams to integrate AI solutions into business processes.
- Contribute to the MLOps lifecycle, including model versioning, CI/CD, performance monitoring, retraining strategies, and deployment automation.
- Stay updated with the latest developments in AI/ML by reading academic papers and experimenting with novel tools and frameworks.

Required Skills & Qualifications
- Proficient in Python, with hands-on experience in key ML libraries: TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers.
- Strong understanding of machine learning fundamentals, deep learning architectures (CNNs, RNNs, transformers), and statistical modeling.
- Practical experience working with and fine-tuning LLMs and foundation models.
- Deep understanding of vector search, embeddings, and semantic retrieval techniques.
- Expertise in predictive modeling, including regression, classification, time series, clustering, and anomaly detection.
- Comfortable working with large-scale datasets using Pandas, NumPy, SciPy, etc.
- Experience with cloud platforms (AWS, GCP, or Azure) for training and deployment is a plus.

Preferred Qualifications
- Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related technical discipline.
- Experience with MLOps tools and workflows (e.g., Docker, Kubernetes, MLflow, SageMaker, Vertex AI).
- Ability to build and expose APIs for models using FastAPI, Flask, or similar frameworks.
- Familiarity with data visualization (Matplotlib, Seaborn) and dashboarding (Plotly) tools or equivalents.
- Working knowledge of version control, experiment tracking, and team collaboration.

Experience: 8–11 years
Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): TensorFlow, NLP, PyTorch, Large Language Models (LLM)

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
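The semantic retrieval this listing emphasizes boils down to ranking stored embeddings by similarity to a query vector. A minimal cosine-similarity sketch follows; the 3-d vectors and document ids are made up for illustration, whereas real systems use model-generated embeddings held in a vector database:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=2):
    """Rank stored embeddings by similarity to the query vector."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical 3-d document embeddings.
index = {
    "invoice": [0.9, 0.1, 0.0],
    "contract": [0.8, 0.3, 0.1],
    "memo": [0.0, 0.2, 0.9],
}
results = top_k([1.0, 0.1, 0.0], index)
```

Brute-force scoring like this is fine for small indexes; at scale, approximate nearest-neighbor structures (the job the vector databases mentioned above do) replace the linear scan.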

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in computer vision, audio intelligence, and deep learning.

Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform maintainability.

Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8 years of experience in AI/ML engineering, with at least 3 years in applied deep learning.

Technical Skills
- Languages: expert in Python; good knowledge of R or Java is a plus.
- ML/DL frameworks: proficient with PyTorch, TensorFlow, scikit-learn, ONNX.
- Computer vision: image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
- Audio AI: speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
- Data engineering: strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
- NLP/LLMs: working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
- Cloud & MLOps: experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
- Deployment & infrastructure: experience with Docker, Kubernetes, REST APIs, serverless ML inference.
- CI/CD & version control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
- Strong analytical and systems thinking; able to break down business problems into ML components.
- Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
- Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
- Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.

(ref:hirist.tech)
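Evaluating the object detection work named above usually starts with intersection-over-union between a predicted and a ground-truth box. A self-contained sketch with made-up box coordinates; by convention a detection counts as correct when IoU is at least 0.5:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Corners of the overlapping rectangle (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes offset by (5, 5): overlap 25, union 175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

Frameworks like Detectron2 or torchvision expose vectorized versions of this same computation; the scalar form just makes the geometry explicit.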

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn't clear, you ask questions, and you use these moments as opportunities to grow.

Skills
Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:
- Respond effectively to the diverse perspectives, needs, and feelings of others.
- Use a broad range of tools, methodologies, and techniques to generate new ideas and solve problems.
- Use critical thinking to break down complex concepts.
- Understand the broader objectives of your project or role and how your work fits into the overall strategy.
- Develop a deeper understanding of the business context and how it is changing.
- Use reflection to develop self-awareness, enhance strengths, and address development areas.
- Interpret data to inform insights and recommendations.
- Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements.
Years of Experience: Candidates with 4+ years of hands-on experience Position: Senior Associate Industry: Telecom / Network Analytics / Customer Analytics Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Proven experience with telco data including call detail records (CDRs), customer churn models, and network analytics Deep understanding of predictive modeling for customer lifetime value and usage behavior Experience working with telco clients or telco data platforms (e.g., Amdocs, Ericsson, Nokia, AT&T) Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models) Strong programming in Python or R, and SQL with telco-focused data wrangling Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark) Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics Strong communication skills to interface with technical and business teams Nice To Have Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI) Experience working with telecom OSS/BSS systems or customer segmentation tools Familiarity with network performance analytics, anomaly detection, or real-time data processing Strong client communication and presentation skills Roles And Responsibilities Assist in analytics projects within the telecom domain, driving design, development, and delivery of data science solutions Develop and execute on project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data & use case clarifications needed to get a strong hold on the data and the business problem to be
solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm building activities Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Senior Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 4+ years of hands-on experience Must Have Experience in building ML models in cloud environments (at least 1 of the 3: Azure ML, GCP’s Vertex AI platform, AWS SageMaker) Knowledge of predictive/prescriptive analytics, especially the usage of Log-Log, Log-Linear, and Bayesian Regression techniques, as well as Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks Good knowledge of statistics, e.g., statistical tests & distributions Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib) Advanced-level programming in SQL or Python/PySpark Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight Nice To Have Working knowledge of containerization (e.g., Docker, Kubernetes, AWS EKS) and data pipeline orchestration (e.g.
Airflow) Good communication and presentation skills Roles And Responsibilities Develop and execute on project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data & use case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm building activities Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 1 week ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking candidates with a strong analytical background to work in our Analytics Consulting practice. Senior Associates will work as an integral part of business analytics teams in India alongside clients and consultants in the U.S., leading teams for high-end analytics consulting engagements and providing business recommendations to project teams. Years of Experience: Candidates with 4+ years of hands-on experience Must Have Experience in building ML models in cloud environments (at least 1 of the 3: Azure ML, GCP’s Vertex AI platform, AWS SageMaker) Knowledge of predictive/prescriptive analytics, especially the usage of Log-Log, Log-Linear, and Bayesian Regression techniques, as well as Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks Good knowledge of statistics, e.g., statistical tests & distributions Experience in data analysis, e.g., data cleansing, standardization, and data preparation for machine learning use cases Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib) Advanced-level programming in SQL or Python/PySpark Expertise with visualization tools, e.g., Tableau, Power BI, AWS QuickSight Nice To Have Working knowledge of containerization (e.g., Docker, Kubernetes, AWS EKS) and data pipeline orchestration (e.g.
Airflow) Good communication and presentation skills Roles And Responsibilities Develop and execute on project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data & use case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm building activities Professional And Educational Background Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote

Location: Preferred: Ahmedabad, Gandhinagar, Hybrid (can consider Remote on a case-to-case basis). Experience: 8+ Years (with hands-on AI/ML architecture experience). Education: Ph.D. or Master's in Computer Science, Data Science, Artificial Intelligence, or related fields. Job Summary We are seeking an experienced AI/ML Architect with a strong academic background and industry experience to lead the design and implementation of AI/ML solutions across diverse industry domains. The ideal candidate will act as a trusted advisor to clients, understanding their business problems and crafting scalable AI/ML strategies and solutions aligned to their vision. Key Responsibilities Engage with enterprise customers and stakeholders to gather business requirements, problem statements, and aspirations. Translate business challenges into scalable and effective AI/ML-driven solutions and architectures. Develop AI/ML adoption strategies tailored to customer maturity, use cases, and ROI potential. Design end-to-end ML pipelines and architecture (data ingestion, processing, model training, deployment, and monitoring). Collaborate with data engineers, scientists, and business SMEs to build and operationalize AI/ML solutions. Present technical and strategic insights to both technical and non-technical audiences, including executives. Lead POCs, pilots, and full-scale implementations. Stay updated on the latest research, technologies, tools, and trends in AI/ML and integrate them into customer solutions. Contribute to proposal development, technical documentation, and pre-sales engagements. Required Qualifications 8+ years of experience in the AI/ML field, with a strong background in solution architecture. Deep knowledge of machine learning algorithms, NLP, computer vision, and deep learning frameworks (TensorFlow, PyTorch, etc.) Experience with cloud AI/ML services (AWS SageMaker, Azure ML, GCP Vertex AI, etc.) Strong communication and stakeholder management skills. 
Proven track record of working directly with clients to understand business needs and deliver AI solutions. Familiarity with MLOps practices and tools (Kubeflow, MLflow, Airflow, etc.). Preferred Skills Experience in building GenAI or Agentic AI applications. Knowledge of data governance, ethics in AI, and explainable AI. Ability to lead cross-functional teams and mentor junior data scientists/engineers. Publications or contributions to AI/ML research communities (preferred but not mandatory). (ref:hirist.tech)

Posted 2 weeks ago

Apply

3.0 - 8.0 years

20 - 30 Lacs

Noida, Hyderabad

Hybrid

We're Hiring | Machine Learning Engineer | 3-6 Years Experience | Noida / Hyderabad (Hybrid) Are you an experienced ML Engineer ready to hit the ground running? Xebia is hiring for a Machine Learning Engineer role – immediate joiners or candidates with 2 weeks' notice only. What We're Looking For: Proven experience with AWS Machine Learning Services (e.g., SageMaker, AWS ML Suite) Solid understanding of the ML model lifecycle, evaluation techniques & real-world deployment Proficiency in Python and libraries such as Pandas, NumPy, Scikit-learn Experience with SaaS-based ML platforms (like Dataiku, Indico, H2O.ai or similar) Working knowledge of AWS Data Engineering tools – S3, Glue, Athena, Lambda Strong track record of delivering end-to-end ML solutions Location: Noida or Hyderabad Hybrid – 3 days per week from office Note: Apply only if you are an Immediate Joiner or can join within 2 weeks. To Apply: Send your updated CV along with the below details to: vijay.s@xebia.com Full Name Total Experience Current CTC Expected CTC Current Location Preferred Xebia Location (Noida / Hyderabad) Notice Period / Last Working Day (if serving) Primary Skills LinkedIn Profile URL #Xebia #HiringNow #MachineLearning #AWSJobs #SageMaker #DataScience #Python #HybridJobs #ImmediateJoinersOnly #MLJobs #NoidaJobs #HyderabadJobs

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction To Role Are you ready to be part of the future of healthcare? Can you think big, be bold, and harness the power of digital and AI to tackle longstanding life sciences challenges? Then Evinova, a new healthtech business within the AstraZeneca Group, might be for you! Transform billions of patients’ lives through technology, data, and cutting-edge ways of working. You’re disruptive, decisive, and transformative—someone who’s excited to use technology to improve patients’ health. We’re building Evinova, a fully-owned subsidiary of AstraZeneca Group, to deliver market-leading digital health solutions that are science-based, evidence-led, and human experience-driven. Smart risks and quick decisions come together to accelerate innovation across the life sciences sector. Be part of a diverse team that pushes the boundaries of science by digitally empowering a deeper understanding of the patients we’re helping. Launch game-changing digital solutions that improve the patient experience and deliver better health outcomes. Together, we have the opportunity to combine deep scientific expertise with digital and artificial intelligence to serve the wider healthcare community and create new standards across the sector. Accountabilities The Machine Learning and Artificial Intelligence Operations team (ML/AI Ops) is newly formed to spearhead the design, creation, and operational excellence of our entire ML/AI data and computational AWS ecosystem to catalyze and accelerate science-led innovations. This team is responsible for the design, implementation, deployment, health, and performance of all algorithms, models, ML/AI operations (MLOps, AIOps, and LLMOps), and Data Science Platform. 
We manage ML/AI and broader cloud resources, automating operations through infrastructure-as-code and CI/CD pipelines, ensuring best-in-class operations—striving to push beyond mere compliance with industry standards such as Good Clinical Practices (GCP) and Good Machine Learning Practice (GMLP). As a ML/AI Operations Engineer for clinical trial design, planning, and operational optimization on our team, you will lead the development and management of MLOps systems for our trial management and optimization SaaS product. You will collaborate closely with data scientists to transition projects from embryonic research into production-grade AI capabilities, utilizing advanced tools and frameworks to optimize model deployment, governance, and infrastructure performance. This position requires a deep understanding of cloud-native ML/AI Ops methodologies and technologies, AWS infrastructure, and the unique demands of regulated industries, making it a cornerstone of our success in delivering impactful solutions to the pharmaceutical industry. Role & Team Key Responsibilities Operational Excellence Lead by example in creating high-performance, mission-focused and interdisciplinary teams/culture founded on trust, mutual respect, growth mindsets, and an obsession for building extraordinary products with extraordinary people. Drive the creation of proactive capability and process enhancements that ensures enduring value creation and analytic compounding interest. Design and implement resilient cloud ML/AI operational capabilities to maximize our system A-bilities (Learnability, Flexibility, Extendibility, Interoperability, Scalability). Drive precision and systemic cost efficiency, optimized system performance, and risk mitigation with a data-driven strategy, comprehensive analytics, and predictive capabilities at the tree-and-forest level of our ML/AI systems, workloads and processes. 
ML/AI Cloud Operations and Engineering Develop and manage MLOps/AIOps/LLMOps systems for clinical trial design, planning, and operational optimization. Partner closely with data scientists to shepherd projects from embryonic research stages into production-grade ML/AI capabilities. Leverage and teach modern tools, libraries, frameworks, and best practices to design, validate, deploy, and monitor data pipelines and models in production (examples include, but are not limited to, AWS SageMaker, MLflow, CML, Airflow, DVC, Weights and Biases, FastAPI, LitServe, Deepchecks, Evidently, Fiddler, Manifold). Establish systems and protocols for the entire model development lifecycle across a diverse set of algorithms, conventional statistical models, ML, and AI/GenAI models to ensure best-in-class Machine Learning Practice (MLP). Enhance system scalability, reliability, and performance through effective infrastructure and process management. Ensure that any prediction we make is backed by deep exploratory data analysis and evidence, and is interpretable, explainable, safe, and actionable. Personal Attributes Customer-obsessed and passionate about building products that solve real-world problems. Highly organized and detail-oriented, with the ability to manage multiple initiatives and deadlines. Collaborative and inclusive, fostering a positive team culture where creativity and innovation thrive. Essential Skills/Experience Deep understanding of the Data Science Lifecycle (DSLC) and the ability to shepherd data science projects from inception to production within the platform architecture. Expert in MLflow, SageMaker, Kubeflow or Argo, DVC, Weights and Biases, and other relevant platforms. Strong software engineering abilities in Python/JavaScript/TypeScript. Expert in AWS services and containerization technologies like Docker and Kubernetes. Experience with LLMOps frameworks such as LlamaIndex and LangChain. 
Ability to collaborate effectively with engineering, design, product, and science teams. Strong written and verbal communication skills for reporting and documentation. Minimum of 4 years in ML/AI operations engineering roles. Proven track record of deploying algorithms and machine learning models into production environments. Demonstrated ability to work closely with cross-functional teams, particularly data scientists. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. AstraZeneca is where creativity meets critical thinking! We embrace technology to reimagine healthcare's future by predicting, preventing, and treating conditions more effectively. Our inclusive approach fosters collaboration internally and externally to share diverse perspectives. We empower our teams with trust and space to explore innovative solutions that redefine patient experiences across their journey. Join us as we drive change that benefits both business and patients. Ready to make an impact? Apply now to join our journey towards transforming healthcare! Date Posted 18-Jul-2025 Closing Date 31-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. 
We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Posted 2 weeks ago

Apply

14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description This is a unique opportunity to apply your skills and contribute to impactful global business initiatives. As an Applied AI ML Lead - Data Scientist - Vice President at JPMorgan Chase within the Commercial & Investment Bank's Global Banking team, you’ll leverage your technical expertise and leadership abilities to support AI innovation. You should have deep knowledge of AI/ML and effective leadership to inspire the team, align cross-functional stakeholders, engage senior leadership, and drive business results. Job Responsibilities Lead a local AI/ML team with accountability and engagement into a global organization. Mentor and guide team members, fostering an inclusive culture with a growth mindset. Collaborate on setting the technical vision and executing strategic roadmaps to drive AI innovation. Deliver AI/ML projects through our ML development life cycle using Agile methodology. Help transform business requirements into AI/ML specifications, define milestones, and ensure timely delivery. Work with product and business teams to define goals and roadmaps. Maintain alignment with cross-functional stakeholders. Exercise sound technical judgment, anticipate bottlenecks, escalate effectively, and balance business needs versus technical constraints. Design experiments, establish mathematical intuitions, implement algorithms, execute test cases, validate results, and productionize highly performant, scalable, trustworthy, and often explainable solutions. Mentor junior Machine Learning associates in delivering successful projects and building a successful career in the firm. Participate and contribute back to firmwide Machine Learning communities through patenting, publications, and speaking engagements. Evaluate and design effective processes and systems to facilitate communication, improve execution, and ensure accountability. 
Required Qualifications, Capabilities, And Skills 14+ (BS), 8+ (MS), or 5+ (PhD) years of relevant experience in Computer Science, Data Science, Information Systems, Statistics, Mathematics, or an equivalent field. Track record of managing AI/ML or software development teams. Experience as a hands-on practitioner developing production AI/ML solutions. Deep knowledge and experience in machine learning and artificial intelligence. Ability to set teams up for success in speed and quality, and design effective metrics and hypotheses. Expert in at least one of the following areas: Natural Language Processing, Knowledge Graph, Computer Vision, Speech Recognition, Reinforcement Learning, Ranking and Recommendation, or Time Series Analysis. Deep knowledge in Data Structures, Algorithms, Machine Learning, Data Mining, Information Retrieval, and Statistics. Demonstrated expertise in machine learning frameworks: TensorFlow, PyTorch, PyG, Keras, MXNet, scikit-learn. Strong programming knowledge of Python and Spark; strong grasp of vector operations using NumPy and SciPy; strong grasp of distributed computation using multithreading, multiple GPUs, Dask, Ray, Polars, etc. Familiarity with AWS Cloud services such as EMR, SageMaker, etc. Strong people management and team-building skills. Ability to coach and grow talent, foster a healthy engineering culture, and attract/retain talent. Ability to build a diverse, inclusive, and high-performing team. Ability to inspire collaboration among teams composed of both technical and non-technical members. Effective communication, solid negotiation skills, and strong leadership. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. 
Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan’s Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

India

On-site

🚀 We're Hiring | Machine Learning Engineer | 3–6 Years Experience | Noida / Hyderabad (Hybrid) Are you an experienced ML Engineer ready to hit the ground running? Xebia is hiring for a Machine Learning Engineer role – immediate joiners or candidates with ≤ 2 weeks' notice only. 🔍 What We're Looking For: Proven experience with AWS Machine Learning Services (e.g., SageMaker, AWS ML Suite) Solid understanding of the ML model lifecycle, evaluation techniques & real-world deployment Proficiency in Python and libraries such as Pandas, NumPy, Scikit-learn Experience with SaaS-based ML platforms (like Dataiku, Indico, H2O.ai or similar) Working knowledge of AWS Data Engineering tools – S3, Glue, Athena, Lambda Strong track record of delivering end-to-end ML solutions 📍 Location: Noida or Hyderabad Hybrid – 3 days per week from office ⚠️ Note: Apply only if you are an Immediate Joiner or can join within 2 weeks. 📩 To Apply: Send your updated CV along with the below details to: 📧 vijay.s@xebia.com Full Name Total Experience Current CTC Expected CTC Current Location Preferred Xebia Location (Noida / Hyderabad) Notice Period / Last Working Day (if serving) Primary Skills LinkedIn Profile URL #Xebia #HiringNow #MachineLearning #AWSJobs #SageMaker #DataScience #Python #HybridJobs #ImmediateJoinersOnly #MLJobs #NoidaJobs #HyderabadJobs

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description Introduction: A Career at HARMAN - Harman Tech Solutions (HTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN HTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs. Empower the company to create new digital business models, enter new markets, and improve customer experiences. About The Role You will be responsible for driving the strategic direction of our AI and machine learning practice along with other key leaders. This role will involve leading internal AI/ML projects, shaping the technology roadmap, and overseeing client-facing projects including solution development, RFPs, presentations, analyst interactions, partnership development, etc. The ideal candidate will have a strong technical background in AI/ML, exceptional leadership skills, and the ability to balance internal and external project demands effectively. In this strategic role, you will be responsible for shaping the future of AI/ML within our organization, driving innovation, and ensuring the successful implementation of AI/ML solutions that deliver tangible business outcomes. What You Will Do Drive Innovation, Differentiation & Growth Develop and implement a comprehensive AI/ML strategy aligned with our business goals and objectives. Ownership of the growth of the COE and influencing client revenues through the AI practice Identify and prioritize high-impact opportunities for applying AI/ML across various business units, departments, and functions. Lead the selection, deployment, and management of AI/ML tools, platforms, and infrastructure. 
Oversee the design, development, and deployment of AI/ML solutions Define, differentiate & strategize new AI/ML services/offerings and create reference architecture assets Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc. Guide and inspire the organization about the business potential and opportunities around AI/ML Network with domain experts Develop and implement ethical AI practices and governance standards. Monitor and measure the performance of AI/ML initiatives, demonstrating ROI through cost savings, efficiency gains, and improved business outcomes. Oversee the development, training, and deployment of AI/ML models and solutions. Collaborate with client teams to understand their business challenges and needs. Develop and propose AI/ML solutions tailored to client-specific requirements. Influence client revenues through innovative solutions and thought leadership. Lead client engagements from project initiation to deployment. Build and maintain strong relationships with key clients and stakeholders. Build re-usable Methodologies, Pipelines & Models Create data pipelines for more efficient and repeatable data science projects Experience working across multiple deployment environments, including cloud, on-premises, and hybrid, multiple operating systems, and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others Coding knowledge and experience in languages including R, Python, Scala, MATLAB, etc. Experience with popular databases including SQL, MongoDB, and Cassandra Experience with data discovery/analysis platforms such as KNIME, RapidMiner, Alteryx, Dataiku, H2O, Microsoft Azure ML, Amazon SageMaker, etc. Expertise in solving problems related to computer vision, text analytics, predictive analytics, optimization, social network analysis, etc. 
Experience with regression, random forest, boosting, trees, hierarchical clustering, transformers, convolutional neural networks (CNN), recurrent neural networks (RNN), graph analysis, etc. People & Interpersonal Skills Build and manage a high-performing team of AI/ML engineers, data scientists, and other specialists. Foster a culture of innovation and collaboration within the AI/ML team and across the organization. Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment. Candidates should be confident, energetic self-starters with strong communication skills. Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire. Provide technical guidance and mentorship to the AI/ML team Collaborate with other directors, managers, and stakeholders across the company to align the AI/ML vision and goals Communicate and present the AI/ML capabilities and achievements to clients and partners Stay updated on the latest trends and developments in the AI/ML domain What You Need 12+ years of experience in the information technology industry with a strong focus on AI/ML, having led, driven, and set up an AI/ML practice in IT services or niche AI/ML organizations 10+ years of relevant experience in successfully launching, planning, and executing advanced data science projects. A master’s or PhD degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics. In-depth specialization in text analytics, image recognition, graph analysis, and deep learning is required. The candidate should be adept in agile methodologies and well-versed in applying MLOps methods to the construction of ML pipelines. Candidates should have demonstrated the ability to manage data science projects and diverse teams. 
Should have experience in creating AI/ML strategies & services, and scaling capabilities from a technology, platform, and people standpoint.
Experience in working on proposals, presales activities, business development, and overseeing delivery of AI/ML projects
Experience in building solutions with AI/ML elements in any one or more domains – Industrial, Healthcare, Retail, Communication
Be an accelerator to grow the practice through technologies, capabilities, and teams, both organically and inorganically
What We Offer
Access to employee discounts on world-class HARMAN/Samsung products (JBL, Harman Kardon, AKG, etc.)
Professional development opportunities through HARMAN University’s business and leadership academies.
Flexible work schedule with a culture encouraging work-life integration and collaboration in a global environment.
An inclusive and diverse work environment that fosters and encourages professional and personal development.
Tuition reimbursement.
“Be Brilliant” employee recognition and rewards program.
What Makes You Eligible
Be willing to travel up to 25%, domestic and international, if required.
Successfully complete a background investigation as a condition of employment
You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.
About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we’ve been amplifying the sense of sound.
Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today!
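The "reusable methodologies, pipelines & models" and "data pipelines for repeatable data science projects" items above can be pictured with a minimal, framework-agnostic sketch. All names here are illustrative, not from any particular library or from HARMAN's stack:

```python
# Minimal reusable data-pipeline sketch: each step is a plain function,
# so the same pipeline definition can be reused across projects.
def make_pipeline(*steps):
    """Compose steps left-to-right into a single callable."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

# Illustrative cleaning steps for a list of numeric readings.
def drop_missing(rows):
    return [r for r in rows if r is not None]

def scale_to_max(rows):
    top = max(rows)
    return [r / top for r in rows]

clean = make_pipeline(drop_missing, scale_to_max)
print(clean([4, None, 2, 8]))  # -> [0.5, 0.25, 1.0]
```

In practice the same idea appears, with far more machinery, in tools such as scikit-learn's `Pipeline` or Kedro's node graphs; the value is that the pipeline definition, not ad-hoc scripts, becomes the reusable asset.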

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Location: Vadodara
Type: Full-time / Internship
Duration (for interns): Minimum 3 months
Stipend/CTC: Based on experience and role
About Gururo
Gururo is an industry leader in practical, career-transforming education. With a mission to empower professionals and students through real-world skills, we specialize in project management, leadership development, and emerging technologies. Join our fast-paced, mission-driven team and work on AI/ML-powered platforms that impact thousands globally.
Who Can Apply?
Interns: Final-year students or recent graduates from Computer Science, Data Science, or related fields, with a strong passion for AI/ML.
Freshers: 0–1 years of experience with academic or internship exposure to machine learning projects.
Experienced Professionals: 1+ years of hands-on experience in AI/ML roles with a demonstrated portfolio or GitHub contributions.
Key Responsibilities
Design and develop machine learning models and AI systems for real-world applications
Clean, preprocess, and analyze large datasets using Python and relevant libraries
Build and deploy ML pipelines using tools like Scikit-learn, TensorFlow, and PyTorch
Work on NLP, Computer Vision, or Recommendation Systems based on project needs
Evaluate models with appropriate metrics and fine-tune for performance
Collaborate with product, engineering, and design teams to integrate AI into platforms
Maintain documentation, participate in model audits, and ensure ethical AI practices
Use version control (Git), cloud deployment (AWS, GCP), and experiment tracking tools (MLflow, Weights & Biases)
Must-Have Skills
Strong Python programming skills
Hands-on experience with one or more ML frameworks (Scikit-learn, TensorFlow, or PyTorch)
Good understanding of core ML algorithms (classification, regression, clustering, etc.)
Familiarity with data wrangling libraries (Pandas, NumPy) and visualization (Matplotlib, Seaborn)
Experience working with Jupyter Notebooks and version control (Git)
Basic understanding of model evaluation techniques and metrics
Good to Have (Optional)
Exposure to deep learning, NLP (transformers, BERT), or computer vision (OpenCV, CNNs)
Experience with cloud ML platforms (AWS SageMaker, GCP AI Platform, etc.)
Familiarity with Docker, APIs, and ML model deployment workflows
Knowledge of MLOps tools and CI/CD for AI systems
Kaggle profile, published papers, or open-source contributions
What You’ll Gain
Work on real-world AI/ML problems in the fast-growing EdTech space
Learn from senior data scientists and engineers in a mentorship-driven environment
Certificate of Internship/Experience & Letter of Recommendation (for interns)
Opportunities to lead research-driven AI initiatives at scale
Flexible work hours and performance-based growth opportunities
End-to-end exposure — from data collection to model deployment in production
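The "model evaluation techniques and metrics" requirement above can be made concrete with a small, dependency-free sketch of two common classification metrics. The function names are our own for illustration; in real projects one would typically use scikit-learn's `accuracy_score` and `precision_score`:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    """Of everything predicted positive, the fraction that really is."""
    predicted_pos = [t for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0  # convention: no positive predictions -> precision 0
    return sum(t == positive for t in predicted_pos) / len(predicted_pos)

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 1, 1, 0, 0]
print(accuracy(y_true, y_pred))   # -> 0.6 (3 of 5 predictions correct)
print(precision(y_true, y_pred))  # 2 of 3 predicted positives are correct
```

Which metric matters depends on the problem: precision is the right lens when false positives are costly, which is why "appropriate metrics" is called out rather than accuracy alone.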

Posted 2 weeks ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum
Role description
Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.
Outcomes:
Implement either data extract and transformation, a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures, data warehouse), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP)
Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards
Design information structure and work- and dataflow navigation. Define backup, recovery, and security specifications
Enforce and maintain naming standards and data dictionary for data models
Provide or guide the team to perform estimates
Help the team to develop proofs of concept (POC) and solutions relevant to customer problems.
Able to troubleshoot problems while developing POCs
Architect/Big Data Specialty Certification in AWS/Azure/GCP (or a general certification, for example via Coursera or a similar learning platform, or any ML certification)
Measures of Outcomes:
Percentage of billable time spent in a year on developing and implementing data transformation or data storage
Number of best practices documented for any new tool and technology emerging in the market
Number of associates trained on the data service practice
Outputs Expected:
Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud
Operational Management: Help architects to establish governance, stewardship, and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility, and multi-platform integration
Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics
Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management. Conduct and facilitate knowledge-sharing and learning sessions across the team. Gain industry-standard certifications on technology or area of expertise. Support technical
skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and optimized solution to the customer (delivery)
Requirement Gathering and Analysis: Work with customer business owners and other teams to collect, analyze, and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that define the programs
People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them
Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the Architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and relevance to the program
Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscapes, processes, and tools to arrive at the architecture options best fit for the client program. Analyze cost vs. benefits of solution options. Support Architects II and III to create a technology/architecture roadmap for the client. Define architecture strategy for the program
Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices
Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies
Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers &
RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels
New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POC as applicable. Develop collaterals and guides for GTM
Skill Examples:
Use data services knowledge to create a POC that meets a business requirement; contextualize the solution to the industry under the guidance of Architects
Use technology knowledge to create Proofs of Concept (POC)/(reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide, and defend the technology choices made; review solutions under guidance
Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST
Use independent knowledge of design patterns, tools, and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by Specialists for efficiency (consumption of hardware, memory, memory leaks, etc.)
Use knowledge of software development processes, tools & techniques to identify and assess incremental improvements to the software development process, methodology, and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact.
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas
Use knowledge of project management & agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies
Use knowledge of project metrics to understand their relevance in the project. Collect and collate project metrics and share them with the relevant stakeholders
Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place
Strong proficiency in understanding data workflows and dataflow
Attention to detail
High analytical capabilities
Knowledge Examples:
Data visualization
Data migration
RDBMSs (relational database management systems), SQL
Hadoop technologies like MapReduce, Hive, and Pig
Programming languages, especially Python and Java
Operating systems like UNIX and MS Windows
Backup/archival software
Additional Comments:
AI Architect
Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities.
Key Responsibilities:
• Design and architect AI-based solutions including multi-agent GenAI systems using LLMs and RAG pipelines.
• Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
• Leverage GenAI (LLMs) and Time Series models to drive intelligent observability and performance management.
• Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
• Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
• Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.
Key Skills & Technology Areas:
• AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
• LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
• Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
• Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation.
• Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
• Cloud & Tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI – optional)
• Observability Integration: Splunk, ELK Stack, Prometheus.
• DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning.
• Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design.
Other Requirements:
• Proven ability to work independently and collaboratively in agile, innovation-driven teams.
• Strong problem-solving mindset and product-oriented thinking.
• Excellent communication and technical storytelling skills.
• Flexibility to work across multiple opportunities based on business priorities.
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
Skills: Python, Pandas, AI/ML, GenAI
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
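The RAG pipelines mentioned in this role reduce, at their core, to "embed the query, retrieve the nearest documents, then prompt the LLM with them." Below is a dependency-free sketch of just the retrieval step, using toy 3-dimensional embeddings; a real system would use an embedding model and a vector store such as the ChromaDB, Pinecone, or FAISS options listed above, and all document contents here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy knowledge base with hand-made embeddings (illustrative only).
corpus = [
    {"text": "reset a password", "vec": [1.0, 0.1, 0.0]},
    {"text": "escalate an incident", "vec": [0.0, 1.0, 0.2]},
    {"text": "rotate API keys", "vec": [0.9, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], corpus))  # the two account-related docs rank first
```

The retrieved texts would then be concatenated into the LLM prompt as grounding context; frameworks like LangChain or LangGraph wrap exactly this loop with production concerns (chunking, caching, agent orchestration) added on top.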

Posted 2 weeks ago

Apply

5.0 years

5 - 7 Lacs

Cochin

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Cognitive Computing Engineer
Required Skills
5+ years of experience in AWS Cognitive Services (relevant)
Experience in AWS apps development and deployments.
Ability to evaluate cloud application requirements and make architectural recommendations for implementation, deployment, and provisioning of applications on AWS
Familiarity with the AWS CLI, AWS APIs, AWS CloudFormation templates, the AWS Billing Console, and the AWS Management Console
AWS SageMaker: End-to-end ML lifecycle management.
AWS Lambda & Step Functions: For serverless cognitive workflows.
Amazon Comprehend, Rekognition, and Transcribe: For text, image, and speech analysis.
IAM & Security: Deep knowledge of AWS Identity and Access Management, encryption (KMS), and compliance.
Strong understanding of Bedrock and integrating with models
Good experience in React-based front-end development and integrations.
JSX & Components: Writing reusable functional and class components.
Hooks: Deep understanding of useState, useEffect, useContext, useReducer, and custom hooks.
State Management: Using Context API, Redux Toolkit, Zustand, or Recoil.
Soft Skills
Excellent communication skills
Team player
Self-starter and highly motivated
Ability to handle high-pressure and fast-paced situations
Excellent presentation skills
Ability to work with globally distributed teams
Roles and Responsibilities:
Understand existing application architecture and solution design
Design individual components and develop the components
Work with other architects, leads, and team members in an agile scrum environment
Hands-on development
Design and develop applications that can be hosted on Azure cloud
Design and develop framework and core functionality
Identify the gaps and come up with working solutions
Understand enterprise application design framework and processes
Lead or mentor junior and/or mid-level developers
Review code and establish best practices
Look out for the latest technologies, match them with EY use cases, and solve business problems efficiently
Ability to look at the big picture
Proven experience in designing highly secured and scalable web applications on Azure cloud
Keep management up to date with progress
Work under an Agile design and development framework
Good hands-on development experience required
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

4 - 7 Lacs

Raipur

On-site

Internship | Raipur, Bengaluru | Posted 2 months ago | SUBHAG HEALTHTECH PVT LTD
SUBHAG® HealthTech has developed a cutting-edge medical device that enables couples to perform Intra Uterine Insemination (IUI) discreetly in a home environment. This innovative solution aims to address male infertility issues without the need to visit a doctor, expanding infertility treatment through home-based IUI and disrupting the IVF market. With a focus on revolutionizing infertility treatment in India, SUBHAG® HealthTech is dedicated to providing comfortable and effective medical solutions to couples facing fertility challenges.
Job Description
Essential Responsibilities
Create, analyze, and maintain explanatory/predictive data science models using healthcare data in the infertility segment
Work with real-world case studies in data science, with a chance to implement various modeling techniques
Get real-life experience of working with big data in the digital marketing sphere
Opportunity to independently execute and lead analytical projects and assignments
Help solve challenging healthcare-related digital marketing problems globally
Transform business questions into data requirements; collect and merge the data; analyse the data, link it to the business reality, and present the results
Develop predictive models and machine learning algorithms to study the change in physician prescribing behaviour, as well as Subhag Healthtech integrated campaign response behaviour
Build analyses to understand user engagement and behaviours across various Subhag Healthtech products
Build expertise in data preparation, data visualisation, and transformations through SAS, R, Tableau, and other analytical tools
Required Qualifications
Must have: B.Tech/B.E./MSc in Computer Science/Statistics/other relevant specialization from premier institutes
5–7 years’ experience explicitly in Data Analytics – design, architecture, and the software development lifecycle.
Expertise in AWS compute, storage & big data services: EMR, Spark, Redshift, S3, Glue, Step Functions, Lambda serverless, EKS, SageMaker, DynamoDB, etc.
Experience in one or more ETL tools like Informatica
Experience in data warehousing, data & dimensional modeling, and related concepts
Experience in one or more data visualization tools like Tableau, Spotfire, etc.
Good understanding of enterprise source systems (ERP, CRM, etc.)
Experience in Python or Core Java, and Java web services development with REST APIs
Demonstrated ability to translate user needs and requirements into design alternatives
Hands-on experience in Natural Language Processing and Deep Learning projects
Experience in the healthcare sector
Requirement
Knowledge of existing and potential clients, ensuring business growth opportunities in relation to the company’s strategic plans
Well connected with pharma/surgical equipment distributors, hospitals, and doctors in the region
Excellent knowledge of MS Office and using the Internet
Familiarity with CRM practices along with the ability to build productive professional business relationships
Highly motivated and target-driven with a proven track record in marketing
Excellent marketing and communication skills
Prioritizing, time management, and organizational skills
Relationship management skills and openness to feedback
To apply for this job, email your details to hr@subhag.in

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

On-site

Job Title: AI/ML Engineer – Predictive Maintenance & Industrial Data Analytics
Company: Crest Climbers Software Solutions Pvt. Ltd.
Location: Kodambakkam, Chennai
Job Objective: We are looking for a highly skilled AI/ML Engineer with experience in industrial automation to build and optimize predictive maintenance solutions using time-series data from sensors, SCADA, DCS, and process historians. You will design models that can forecast asset failures and recommend corrective actions using AI/ML techniques.
Key Responsibilities:
Design and develop predictive models using sensor, SCADA/DCS, and historian data.
Build machine learning pipelines for anomaly detection, failure prediction, and root cause analysis.
Integrate with OPC-UA, MQTT, Modbus, and other industrial protocols to collect and process real-time data.
Develop dashboards to visualize equipment health, early warning signals, and prescriptive actions.
Leverage libraries of failure modes for rotating/static equipment (pumps, compressors, fans, turbines, etc.).
Work with domain experts and mechanical engineers to fine-tune models for practical deployment.
Conduct POCs and deploy scalable AI/ML systems on edge, cloud, or hybrid infrastructure.
Required Skills & Experience:
3–6 years of relevant experience in AI/ML with industrial or manufacturing data.
Strong in Python, Scikit-learn, TensorFlow/PyTorch, Pandas, NumPy
Hands-on experience with time-series analysis, anomaly detection, and classification and regression models.
Exposure to Edge AI, real-time data pipelines, or IIoT platforms (AWS IoT, Azure IoT, Kepware, Ignition, etc.)
Familiarity with industrial protocols: OPC-UA, Modbus, MQTT, or historian APIs.
Experience with dashboards – Power BI, Grafana, or custom web dashboards.
Ability to translate mechanical/maintenance concepts into data science problems.
Strong understanding of predictive maintenance KPIs (MTBF, MTTR, RUL, OEE, etc.)
Preferred Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Science, Electrical/Mechanical Engineering, or a related field.
Knowledge of vibration analysis, thermography, or motor current signature analysis is a plus.
Certification in Industrial AI, Azure ML, AWS SageMaker, IBM Maximo, or equivalent tools is advantageous.
Nice to Have:
Experience working with tools like InfluxDB, TimescaleDB, Kafka, or Apache Spark.
Familiarity with CMMS or EAM systems (SAP PM, IBM Maximo, etc.)
Ability to work with cross-functional teams (Operations, Instrumentation, Maintenance)
Contact Us
Email: careers@crestclimbers.com
Phone: +91 94453 30496
Website: www.crestclimbers.com
Office: Kodambakkam, Chennai
Job Types: Full-time, Permanent
Schedule: Day shift
Work Location: In person
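The anomaly-detection work described in this role is often prototyped with a simple rolling z-score over a single sensor channel before any model is trained. A dependency-free sketch, where the window size, threshold, and signal values are all illustrative choices:

```python
import statistics

def rolling_zscore_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading deviates more than `threshold`
    standard deviations from the mean of the preceding window."""
    anomalies = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.pstdev(hist)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady vibration-like signal with one spike at index 7.
signal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 5.0, 1.0]
print(rolling_zscore_anomalies(signal))  # -> [7]
```

In production such a baseline would be replaced or complemented by learned models (isolation forests, autoencoders, forecasting residuals), but a threshold rule like this is a common first early-warning signal fed into the dashboards mentioned above.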

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Chennai

On-site

Job Description:
Key Responsibilities
Develop, train, and evaluate machine learning models using Python, scikit-learn, and related libraries.
Design and build robust data pipelines and workflows leveraging Pandas, SQL, and Kedro.
Create clear, reproducible analyses and reports in Jupyter Notebooks.
Integrate machine learning models and data pipelines into production environments on AWS.
Work with LangChain to build applications leveraging large language models and natural language processing workflows.
Collaborate closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions.
Optimize and monitor model performance in production and drive continuous improvement.
Follow best practices for code quality, version control, and documentation.
Required Skills and Experience
7+ years of professional experience in Data Science, Machine Learning, or a related field.
Strong proficiency in Python and machine learning frameworks, especially scikit-learn.
Deep experience working with data manipulation and analysis tools such as Pandas and SQL.
Hands-on experience creating and sharing analyses in Jupyter Notebooks.
Solid understanding of cloud services, particularly AWS (S3, EC2, Lambda, SageMaker, etc.).
Experience with Kedro for pipeline development and reproducibility.
Familiarity with LangChain and building applications leveraging LLMs is a strong plus.
Ability to communicate complex technical concepts clearly to non-technical audiences.
Strong problem-solving skills and a collaborative mindset.
Nice to Have
Experience with MLOps tools and practices (model monitoring, CI/CD pipelines for ML).
Exposure to other cloud platforms (GCP, Azure).
Knowledge of data visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
Familiarity with modern LLM ecosystems and prompt engineering.
At DXC Technology, we believe strong connections and community are key to our success.
Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor does it ask a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
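The "optimize and monitor model performance in production" responsibility listed above is frequently implemented as a drift check comparing live feature distributions against the training baseline. A minimal Population Stability Index (PSI) sketch; the bin fractions are invented and the 0.1/0.2 thresholds are common industry rules of thumb, not a requirement of this role:

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 drift."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin fractions of a feature
live_ok = [0.24, 0.26, 0.25, 0.25]   # near-identical production traffic
live_bad = [0.05, 0.10, 0.25, 0.60]  # heavily shifted production traffic
print(psi(baseline, live_ok) < 0.1)   # True: stable
print(psi(baseline, live_bad) > 0.2)  # True: drift alert
```

A check like this typically runs on a schedule over each model input feature, with alerts feeding the retraining loop; MLOps platforms package the same idea behind richer dashboards.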

Posted 2 weeks ago

Apply