7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role We are seeking a highly experienced and strategic AI/ML Architect to lead the design, development, and deployment of scalable artificial intelligence and machine learning solutions. As a core member of our technical leadership team, you will play a pivotal role in building intelligent systems that drive innovation and transform digital healthcare delivery across our AI Driven Telemedicine platform. Key Responsibilities Architect AI/ML systems that support key business goals, from real-time diagnosis and predictive analytics to natural language conversations and recommendation engines. Design and oversee machine learning pipelines, model training, validation, deployment, and performance monitoring. Guide selection of ML frameworks (e.g., TensorFlow, PyTorch) and ensure proper MLOps practices (CI/CD, model versioning, reproducibility, drift detection). Collaborate with cross-functional teams (data engineers, product managers, UI/UX, backend developers) to integrate AI into real-time applications and APIs. Build and maintain scalable AI infrastructure, including data ingestion, storage, and processing layers in cloud environments (AWS, GCP, or Azure). Lead research and experimentation on generative AI, NLP, computer vision, and deep learning techniques relevant to healthcare use cases. Define data strategies, governance, and model explainability/ethics frameworks to ensure compliance with regulatory standards like HIPAA. Mentor and lead a growing team of ML engineers and data scientists. Qualifications Must-Have: Bachelor's or Master’s degree in Computer Science, AI, Data Science, or related field (PhD preferred). 7+ years of experience in AI/ML development, with at least 2 years in a lead or architect role. Proven experience designing and deploying production-grade ML systems at scale. Strong grasp of ML algorithms, deep learning, NLP, computer vision, and generative AI. Expertise in Python, ML libraries (TensorFlow, PyTorch, Scikit-learn), and ML Ops tools (MLflow, Kubeflow, SageMaker, etc.). Familiarity with data engineering pipelines (Airflow, Spark, Kafka) and cloud platforms. Strong communication and collaboration skills. Preferred: Experience with healthcare data standards (FHIR, HL7) and medical ontologies (SNOMED CT, ICD). Familiarity with AI ethics, fairness, and interpretability frameworks. Startup or early-stage product development experience.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
indore, madhya pradesh
On-site
Golden Eagle IT Technologies Pvt. Ltd. is looking for a skilled Data Engineer with 2 to 4 years of experience to join the team in Indore. The ideal candidate should have a solid background in data engineering, big data technologies, and cloud platforms. As a Data Engineer, you will be responsible for designing, building, and maintaining efficient, scalable, and reliable data pipelines. You will be expected to develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop. Additionally, you will design and implement data solutions on AWS, leveraging services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Working with messaging systems like Kafka for managing data streaming and real-time data processing will also be part of your responsibilities. Proficiency in Python and Scala for data processing, transformation, and automation is essential. Ensuring data quality and integrity across multiple sources and formats will be a key aspect of your role. Collaboration with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions is crucial. Optimizing and tuning data systems for performance and scalability, as well as implementing best practices for data security and compliance, are also expected. Preferred skills include experience with infrastructure as code tools like Pulumi, familiarity with GraphQL for API development, and exposure to machine learning and data science workflows, particularly using SageMaker. Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 2-4 years of experience in data engineering or a similar role. Proficiency in AWS cloud services and big data technologies, strong programming skills in Python and Scala, knowledge of data warehousing concepts and tools, as well as excellent problem-solving and communication skills are required.
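For context on the Airflow-based ETL work described above, a minimal sketch of a daily DAG is shown below; the DAG id, task, and schedule are hypothetical placeholders rather than anything specified by the employer.

```python
# Minimal illustrative Airflow DAG; dag_id, task name, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real extract/transform/load logic,
    # e.g. pulling from Kafka or S3 and writing to a warehouse table.
    print("Running ETL step")


with DAG(
    dag_id="daily_sales_etl",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    etl_task = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```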
Posted 3 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Company: Droisys is an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. We leverage deep technical expertise, Agile methodologies, and data-driven intelligence to modernize systems of engagement and simplify human/tech interaction. Amazing things happen when we work in environments where everyone feels a true sense of belonging and when candidates have the requisite skills and opportunities to succeed. At Droisys, we invest in our talent and support career growth, and we are always on the lookout for amazing talent who can contribute to our growth by delivering top results for our clients. Join us to challenge yourself and accomplish work that matters. We are seeking a highly experienced Computer Vision Architect with deep expertise in Python to design and lead the development of cutting-edge vision-based systems. The ideal candidate will architect scalable solutions that leverage advanced image and video processing, deep learning, and real-time inference. You will collaborate with cross-functional teams to deliver high-performance, production-grade computer vision platforms. Key Responsibilities: Architect and design end-to-end computer vision solutions for real-world applications (e.g., object detection, tracking, OCR, facial recognition, scene understanding, etc.) Lead R&D initiatives and prototype development using modern CV frameworks (OpenCV, PyTorch, TensorFlow, etc.) Optimize computer vision models for performance, scalability, and deployment on cloud, edge, or embedded systems Define architecture standards and best practices for Python-based CV pipelines Collaborate with product teams, data scientists, and ML engineers to translate business requirements into technical solutions Stay updated with the latest advancements in computer vision, deep learning, and AI Mentor junior developers and contribute to code reviews, design discussions, and technical documentation Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or related field (PhD is a plus) 8+ years of software development experience, with 5+ years in computer vision and deep learning Proficient in Python and libraries such as OpenCV, NumPy, scikit-image, Pillow Experience with deep learning frameworks like PyTorch, TensorFlow, or Keras Strong understanding of CNNs, object detection (YOLO, SSD, Faster R-CNN), semantic segmentation, and image classification Knowledge of MLOps, model deployment strategies (e.g., ONNX, TensorRT), and containerization (Docker/Kubernetes) Experience working with video analytics, image annotation tools, and large-scale dataset pipelines Familiarity with edge deployment (Jetson, Raspberry Pi, etc.) or cloud AI services (AWS SageMaker, Azure ML, GCP AI) Droisys is an equal opportunity employer. We do not discriminate based on race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. Droisys believes in diversity, inclusion, and belonging, and we are committed to fostering a diverse work environment.
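As a rough illustration of the ONNX deployment path mentioned in the requirements, the sketch below exports a PyTorch vision model to ONNX; the ResNet-18 backbone and file names are assumptions for the example, not details from the posting.

```python
# Illustrative ONNX export of a PyTorch vision model; the ResNet-18 backbone
# and output file name are example choices, not a prescription from the posting.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # untrained example backbone
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW image tensor
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},  # allow variable batch size at inference
)
```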
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Data Scientist with a strong background in enterprise-scale machine learning, deep expertise in LLMs and Generative AI, and a clear understanding of the evolving Agentic AI ecosystem. The ideal candidate has hands-on experience developing predictive models, recommendation systems, and LLM-powered solutions, and is passionate about leveraging cutting-edge AI to solve complex enterprise challenges. This role will involve working closely with product, engineering, and business teams to design, build, and deploy impactful AI solutions that are both technically robust and business-aligned. The core responsibilities for the job include the following. ML and Predictive Systems Development: Design, develop, and deploy enterprise-grade machine learning models for recommendations, predictions, and personalization use cases. Work on problems such as churn prediction, intelligent routing, anomaly detection, and behavior modeling. Leverage techniques in supervised, unsupervised, and reinforcement learning as needed based on business context. LLMs And Generative AI Build and fine-tune LLM-based solutions (e.g., GPT, LLaMA, Claude, or open-source models) for tasks such as summarization, semantic search, document understanding, and copilots. Deliver production-ready GenAI projects, applying techniques like RAG (Retrieval-Augmented Generation), prompt engineering, fine-tuning, and vector search (e.g., FAISS, Pinecone, Weaviate). Collaborate with engineering to embed LLM workflows into enterprise applications, ensuring scalability and performance. Agentic AI And Ecosystem Engagement Contribute thought leadership and experimentation around Agentic AI architectures, task orchestration, memory management, tool integration, and decision autonomy. Stay ahead of trends in the open-source and commercial LLM/AI space, including LangChain, AutoGen, DSPy, and ADK-based systems. Develop internal PoCs or evaluate frameworks to assess viability for enterprise use. Collaboration And Delivery Work with cross-functional teams to identify AI opportunities and define technical roadmaps. Translate business needs into data science problems, define success metrics, and communicate results to stakeholders. Ensure model governance, monitoring, and explainability for AI systems in production. Requirements Master's or PhD in Computer Science, Data Science, Statistics, or related field. 5-8 years of experience in data science and ML, with strong enterprise project delivery experience. Proven success in building and deploying ML models and recommendation systems at scale. 2+ projects delivered involving LLMs and Generative AI, with hands-on experience in one or more of: OpenAI, Hugging Face Transformers, LangChain, Vector DBs, or model fine-tuning. Advanced Python programming skills and experience with ML libraries (e.g., Scikit-learn, XGBoost, PyTorch, TensorFlow). Experience with cloud-based ML/AI platforms (e.g., Vertex AI, AWS SageMaker, Azure ML). Strong understanding of system architecture, APIs, data pipelines, and model integration patterns. Preferred Qualifications Experience with Agentic AI frameworks and orchestration systems (LangChain, AutoGen, ADK, CrewAI). Familiarity with prompt optimization, tool chaining, task planning, and autonomous agents. Working knowledge of MLOps best practices, including model versioning, CI/CD for ML, and model monitoring. Strong communication skills and ability to advocate for AI-driven solutions across technical and non-technical teams.
Regular follower of AI research, open-source trends, and GenAI product developments. This job was posted by Akshay Kumar Arumulla from Softility.
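To make the RAG and vector-search skills above concrete, here is a minimal retrieval sketch using sentence-transformers and FAISS; the documents, query, and embedding model name are placeholders.

```python
# Minimal semantic-retrieval sketch for a RAG pipeline; documents, query,
# and the embedding model name are placeholders.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refund requests are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise customers.",
    "Passwords can be reset from the account settings page.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine on normalized vectors
index.add(doc_vecs)

query_vec = encoder.encode(
    ["how do I reset my password"], normalize_embeddings=True
).astype("float32")
scores, ids = index.search(query_vec, k=2)
print([docs[i] for i in ids[0]])  # top chunks to stuff into the LLM prompt
```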
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join us as a Data Scientist You’ll design and implement data science tools and methods which harness our data in order to drive market-leading, purposeful customer solutions. We’ll look to you to actively participate in the data community to identify and deliver opportunities to support the bank’s strategic direction through better use of data. This is an opportunity to promote data literacy education with business stakeholders, supporting them to foster a data-driven culture and to make a real impact with your work. We're offering this role at associate level What you'll do As a Data Scientist, you’ll bring together statistical, mathematical, machine-learning and software engineering skills to consider multiple solutions, techniques and algorithms to develop and implement ethically sound models end-to-end. We’ll look to you to understand the needs of business stakeholders, form hypotheses and identify suitable data and analytics solutions to meet those needs in order to support the achievement of our business strategy. You’ll Also Be Using data translation skills to work closely with business stakeholders to define detailed business questions, problems or opportunities which can be supported through analytics Applying a software engineering and product development lens to business problems, creating, scaling and deploying software driven products and services Working in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and scrum outcomes Selecting, building, training and testing machine learning models considering model valuation, model risk, governance and ethics, making sure that models are ready to implement and scale Iteratively building and prototyping data analysis pipelines to provide insights that will ultimately lead to production deployment The skills you'll need You’ll need a strong academic background in a STEM discipline such as Mathematics, Physics, Engineering or Computer Science. You’ll have at least four years of experience with statistical modelling and machine learning techniques. We’ll also look for financial services knowledge, and an ability to identify wider business impact, risk or opportunities and make connections across key outputs and processes. You’ll Also Demonstrate The ability to use data to solve business problems from hypotheses through to resolution Experience in data science, analytics, and machine learning with a strong understanding of statistical analysis, machine learning models and concepts, LLMs, and data management principles Proficiency in Python and relevant libraries such as Pandas, NumPy, and Scikit-learn Experience of cloud applications such as AWS SageMaker and data visualisation tools Experience in synthesising, translating and visualising data and insights for key stakeholders Good communication skills with the ability to proactively engage with a wide range of stakeholders
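As a small illustration of the model-building and validation work described above, the sketch below trains and cross-validates a scikit-learn classifier on synthetic data; the model and metric are illustrative choices only.

```python
# Minimal scikit-learn training/validation sketch on synthetic data;
# the model and metric choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC-AUC across folds: {scores.mean():.3f}")
```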
Posted 3 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description As a Staff Machine Learning Engineer, you will drive AI programs, lead engagements, and independently develop innovative solutions that enhance decision-making, automate workflows, and create growth. You will own the end-to-end development of AI-powered applications, from solution design to deployment, leveraging pre-trained machine learning and generative AI models. You will work closely with cross-functional teams, proactively identifying opportunities to integrate AI capabilities into Experian's products and services while optimizing performance and scalability. Qualifications: Experience working in a cloud environment with one of Databricks, Azure, or AWS. 8+ years of experience building data-driven products and solutions. Experience leading AI engagements. Strong experience with AI APIs (OpenAI, Hugging Face, Google Vertex AI, AWS Bedrock) and fine-tuning models for production use. Deep understanding of machine learning, natural language processing (NLP), and generative AI evaluation techniques. Key Responsibilities: Assist in Developing and Deploying Machine Learning Models: Support the development and deployment of machine learning models, including data preprocessing and performance evaluation in Python using sklearn, numpy and other standard libraries. Build and Maintain ML Pipelines: Help build and maintain scalable ML pipelines, and assist in automating model training workflows in Python using MLFlow, Databricks, SageMaker or equivalent. Collaborate with Cross-Functional Teams: Work with product and data teams to align ML solutions with business needs and objectives. Write Clean and Documented Code: Write clean, well-documented code, following best practices for testing and version control. Use Sphinx and other auto documentation solutions to automate document generation. Support Model Monitoring and Debugging: Assist in monitoring and debugging models to improve their reliability and performance. Participate in Technical Discussions and Knowledge Sharing: Engage in technical discussions, code reviews, and knowledge-sharing sessions to learn and grow within the team. Day-to-Day Activities: On a daily basis, you will work closely with senior ML engineers and data scientists to support various stages of the machine learning lifecycle. Your day-to-day activities will include: Data Preprocessing: Cleaning and preparing data for model training, ensuring data quality and consistency. Model Training: Assisting in the training of machine learning models, experimenting with different algorithms and hyperparameters.
Performance Evaluation: Evaluating model performance using appropriate metrics and techniques, and identifying areas for improvement. Pipeline Maintenance: Building and maintaining ML pipelines, ensuring they are scalable and efficient. Code Development: Writing and maintaining clean, well-documented code, following best practices for testing and version control. Model Monitoring: Monitoring deployed models to ensure they are performing as expected, and assisting in debugging any issues that arise. Collaboration: Participating in team meetings, sprint planning, and daily stand-ups to stay aligned with project goals and timelines. Additional Information Our uniqueness is that we truly celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what truly matters; DEI, work/life balance, development, authenticity, engagement, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's strong people first approach is award winning; Great Place To Work™ in 24 countries, FORTUNE Best Companies to work and Glassdoor Best Places to Work (globally 4.4 Stars) to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is a critical part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, color, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together Benefits Experian care for employee's work life balance, health, safety and wellbeing. In support of this endeavor, we offer best-in-class family well-being benefits, enhanced medical benefits and paid time off. Experian Careers - Creating a better tomorrow together Find out what its like to work for Experian by clicking here
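For the MLflow-based pipeline work mentioned in this posting, a minimal experiment-tracking sketch is shown below; the experiment name, model, and logged values are assumptions used only to show the pattern.

```python
# Illustrative MLflow tracking run; the experiment name, parameters, and model
# are hypothetical and only show the logging pattern mentioned in the posting.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(clf, "model")
```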
Posted 3 days ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. ZS's Platform Development team designs, implements, tests and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact and contribute to bringing better health outcomes. What you'll do: As part of our full-stack product engineering team, you will build multi-tenant cloud-based software products/platforms and internal assets that will leverage cutting-edge technologies based on the Amazon AWS cloud platform. Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest quality multi-tenant software that can be productized. Work with junior developers to implement large features that are on the cutting edge of Big Data Be a technical leader to your team, and help them improve their technical skills Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design Work with product managers and architects to design product architecture and to work on POCs Take immediate responsibility for project deliverables Understand client business issues and design features that meet client needs Undergo on-the-job and formal trainings and certifications, and will constantly advance your knowledge and problem solving skills What you'll bring: 1-3 years of experience in developing software, ideally building SaaS products and services Bachelor's Degree in CS, IT, or related discipline Strong analytic, problem solving, and programming ability Good hands-on experience working with AWS services (EC2, EMR, S3, Serverless stack, RDS, Sagemaker, IAM, EKS etc.) Experience in coding in an object-oriented language such as Python, Java, C# etc. Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies Experience with development on the AWS (Amazon Web Services) platform is preferable Experience in Linux shell or PowerShell scripting is preferable Experience in HTML5, JavaScript, and JavaScript libraries is preferable Good to have Pharma domain understanding Initiative and drive to contribute Excellent organizational and task management skills Strong communication skills Ability to work in global cross-office teams ZS is a global firm; fluency in English is required
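As a brief illustration of the Spark-based big data work listed above, here is a minimal PySpark aggregation sketch; the S3 paths and column names are hypothetical.

```python
# Minimal PySpark sketch; the S3 paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-rollup").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/orders/")  # placeholder path
daily = (
    orders.groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)
daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily/")
```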
Posted 3 days ago
4.0 - 5.0 years
6 - 8 Lacs
Gurgaon
On-site
Project description We are looking for a skilled Document AI / NLP Engineer to develop intelligent systems that extract meaningful data from documents such as PDFs, scanned images, and forms. In this role, you will build document processing pipelines using OCR and NLP technologies, fine-tune ML models for tasks like entity extraction and classification, and integrate those solutions into scalable cloud-based applications. You will collaborate with cross-functional teams to deliver high-performance, production-ready pipelines and stay up to date with advancements in the document understanding and machine learning space. Responsibilities Design, build, and optimize document parsing pipelines using tools like Amazon Textract, Azure Form Recognizer, or Google Document AI. Perform data preprocessing, labeling, and annotation for training machine learning and NLP models. Fine-tune or train models for tasks such as Named Entity Recognition (NER), text classification, and layout understanding using PyTorch, TensorFlow, or HuggingFace Transformers. Integrate document intelligence capabilities into larger workflows and applications using REST APIs, microservices, and cloud components (e.g., AWS Lambda, S3, SageMaker). Evaluate model and OCR accuracy, applying post-processing techniques or heuristics to improve precision and recall. Collaborate with data engineers, DevOps, and product teams to ensure solutions are robust, scalable, and meet business KPIs. Monitor, debug, and continuously enhance deployed document AI solutions. Maintain up-to-date knowledge of industry trends in OCR, Document AI, NLP, and machine learning. Skills Must have 4-5 years of hands-on experience in machine learning, document AI, or NLP-focused roles. Strong expertise in OCR tools and frameworks, especially Amazon Textract, Azure Form Recognizer, Google Document AI, or open-source tools like Tesseract, LayoutLM, or PaddleOCR. Solid programming skills in Python and familiarity with ML/NLP libraries: scikit-learn, spaCy, transformers, PyTorch, TensorFlow, etc. Experience working with structured and unstructured data formats, including PDF, images, JSON, and XML. Hands-on experience with REST APIs, microservices, and integrating ML models into production pipelines. Working knowledge of cloud platforms, especially AWS (S3, Lambda, SageMaker) or their equivalents. Understanding of NLP techniques such as NER, text classification, and language modeling. Strong debugging, problem-solving, and analytical skills. Clear verbal and written communication skills for technical and cross-functional collaboration. Nice to have N/A Other Languages English: B2 Upper Intermediate Seniority Senior Gurugram, India Req. VR-116250 AI/ML BCM Industry 29/07/2025 Req. VR-116250
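To ground the Textract-based parsing described above, a minimal boto3 sketch for basic OCR is shown below; the file name and region are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
# Illustrative Amazon Textract call for basic OCR on a local file;
# the file name and region are placeholders, and credentials are assumed
# to be available in the environment.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

with open("invoice.png", "rb") as f:
    response = textract.detect_document_text(Document={"Bytes": f.read()})

lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
print("\n".join(lines))
```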
Posted 3 days ago
2.0 years
0 Lacs
Haryana
On-site
Provectus helps companies adopt ML/AI to transform the ways they operate, compete, and drive value. The focus of the company is on building ML Infrastructure to drive end-to-end AI transformations, assisting businesses in adopting the right AI use cases, and scaling their AI initiatives organization-wide in such industries as Healthcare & Life Sciences, Retail & CPG, Media & Entertainment, Manufacturing, and Internet businesses. We are seeking a highly skilled Machine Learning (ML) Tech Lead with a strong background in Large Language Models (LLMs) and AWS Cloud services. The ideal candidate will oversee the development and deployment of cutting-edge AI solutions while managing a team of 5-10 engineers. This leadership role demands hands-on technical expertise, strategic planning, and team management capabilities to deliver innovative products at scale. Responsibilities: Leadership & Management Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment; Drive the roadmap for machine learning projects aligned with business goals; Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery. Machine Learning & LLM Expertise Design, develop, and fine-tune LLMs and other machine learning models to solve business problems; Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction; Stay ahead of advancements in LLMs and apply emerging technologies; Expertise in multiple main fields of ML: NLP, Computer Vision, RL, deep learning and classical ML. AWS Cloud Expertise Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.); Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS; Ensure best practices in security, monitoring, and compliance within the cloud infrastructure. Technical Execution Oversee the entire ML lifecycle, from research and experimentation to production and maintenance; Implement MLOps and LLMOps practices to streamline model deployment and CI/CD workflows; Debug, troubleshoot, and optimize production ML models for performance. Team Development & Communication Conduct regular code reviews and ensure engineering standards are upheld; Facilitate professional growth and learning for the team through continuous feedback and guidance; Communicate progress, challenges, and solutions to stakeholders and senior leadership. Qualifications: Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models); Strong expertise in AWS Cloud Services; Strong experience in ML/AI, including at least 2 years in a leadership role; Hands-on experience with Python, TensorFlow/PyTorch, and model optimization; Familiarity with MLOps tools and best practices; Excellent problem-solving and decision-making abilities; Strong communication skills and the ability to lead cross-functional teams; Passion for mentoring and developing engineers; Familiarity with Amazon Bedrock would be considered a significant plus.
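As a small example of the NLP tasks mentioned above (e.g., summarization), the sketch below uses a Hugging Face pipeline; the checkpoint name and input text are illustrative assumptions.

```python
# Illustrative Hugging Face summarization pipeline; the checkpoint name is an
# example and would be swapped for whichever model the team standardizes on.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "The patient presented with mild symptoms and was advised rest and hydration. "
    "A follow-up consultation was scheduled for the next week to review progress."
)
print(summarizer(report, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```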
Posted 3 days ago
3.0 years
2 - 6 Lacs
Gurgaon
On-site
We are seeking a highly skilled AI Engineer with a strong foundation in machine learning, deep learning, cloud platforms, and computer vision to join our innovative tech team. You’ll design and implement scalable AI/ML pipelines, automate workflows, train and optimize models, and deploy solutions on cloud infrastructure. This is an opportunity to shape the future of intelligent systems across industries. Key Responsibilities: Design, develop, and deploy ML/DL models for various applications, including computer vision and predictive analytics. Build data pipelines and model training workflows on cloud platforms such as AWS, Azure, or GCP. Automate model retraining, evaluation, and deployment processes using MLOps best practices. Collaborate with cross-functional teams (data engineers, product managers, developers) to define project requirements and deliver AI-powered features. Develop and fine-tune custom algorithms tailored to specific domain problems. Integrate AI solutions into existing systems using APIs, containers, and cloud-native tools. Conduct data preprocessing, exploratory data analysis, and feature engineering. Research and evaluate the latest AI trends, tools, and frameworks to recommend enhancements. Write clear, maintainable, and efficient code with documentation for reproducibility and scaling. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related fields. 3+ years of hands-on experience in building and deploying machine learning/deep learning models. Strong programming skills in Python and frameworks like TensorFlow, PyTorch, OpenCV, Scikit-learn. Experience with computer vision libraries (OpenCV, YOLO, Detectron2, etc.). Proficiency in cloud platforms (AWS SageMaker, GCP Vertex AI, or Azure ML Studio). Experience with Docker, Kubernetes, or other orchestration tools. Familiarity with MLOps tools like MLflow, DVC, Kubeflow, or Airflow. Solid understanding of algorithms, data structures, and model optimization techniques. Exposure to RESTful APIs and real-time inference systems. Strong analytical, problem-solving, and communication skills. Nice to Have: Experience with NLP models and transformers (e.g., Hugging Face). Experience deploying models at scale in production environments. Knowledge of CI/CD pipelines for AI applications. Publications or contributions to open-source AI projects. Job Type: Permanent Pay: ₹20,000.00 - ₹50,000.00 per month Work Location: In person
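For the computer-vision preprocessing implied above, a minimal OpenCV sketch is shown below; the input file and target size are placeholders chosen for the example.

```python
# Minimal OpenCV preprocessing sketch of the kind used before model inference;
# the input file name and target size are placeholders.
import cv2
import numpy as np

image = cv2.imread("sample.jpg")                     # BGR image from disk
if image is None:
    raise FileNotFoundError("sample.jpg not found")

resized = cv2.resize(image, (224, 224))              # match the model's input size
rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)       # most frameworks expect RGB
tensor = rgb.astype(np.float32) / 255.0              # scale pixels to [0, 1]
batch = np.expand_dims(tensor, axis=0)               # add a batch dimension
print(batch.shape)  # (1, 224, 224, 3)
```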
Posted 3 days ago
0 years
0 Lacs
Noida
On-site
Role Title: AI Platform Engineer Location: Bangalore (In Person in office when required) Part of the GenAI COE Team Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and Langgraph. Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: ML Flow, Langchain, Langraph, LangFlow, Flowise, LLamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Datawarehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostGreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP Dev Ops (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, Javascript Job Type: Full-time Work Location: In person
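To illustrate the LLM-serving paradigm referenced above, here is a minimal vLLM offline-inference sketch; the model checkpoint is an example, and a CUDA-capable GPU with the vllm package installed is assumed.

```python
# Illustrative vLLM offline-inference sketch; the model name is an example and
# a CUDA-capable GPU plus the vllm package are assumed to be available.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # example checkpoint
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

prompts = ["Explain what a feature store is in one sentence."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```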
Posted 3 days ago
4.0 - 7.0 years
5 - 8 Lacs
Noida
On-site
Expertise in AWS services like EC2, CloudFormation, S3, IAM, ECS/EKS, EMR, QuickSight, SageMaker, Athena, Glue, etc. Expertise in Hadoop platform administration and good debugging skills to resolve Hive- and Spark-related issues. Experience in designing, developing, configuring, testing and deploying cloud automation, preferably in AWS. Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc. Experience in Python and Spark. Working knowledge of CI/CD tools and containers. Key Responsibilities: Interpreting and analyzing business requirements and converting them into high- and low-level designs. Designing, developing, configuring, testing and deploying cloud automation for the Finance business unit using tools such as CloudFormation, Terraform, Ansible etc. while following the capability domain’s Engineering standards in an Agile environment. End-to-end ownership of developing, configuring, unit testing and deploying developed code with quality and minimal supervision. Work closely with customers, business analysts and technology & project teams to understand business requirements, drive the analysis and design of quality technical solutions that are aligned with business and technology strategies and comply with the organization's architectural standards. Understand and follow through change management procedures to implement project deliverables. Coordinate with support groups such as Enterprise Cloud Engineering teams, DevSecOps, and Monitoring to get issues resolved with a quick turnaround time. Work with the data science user community to address issues in the ML (machine learning) development life cycle. Required Qualifications: Bachelor’s or Master’s degree in Computer Science or similar field. 4 to 7 years of experience in automation on a major cloud (AWS, Azure or GCP). Experience in infrastructure provisioning using Ansible, AWS CloudFormation or Terraform, Python or PowerShell. Working knowledge of AWS services such as EC2, CloudFormation, IAM, S3, EMR, ECS/EKS etc. Working knowledge of CI/CD tools and containers. Experience in Hadoop administration and resolving Hive/Spark-related issues. Proven understanding of common development tools, patterns and practices for the cloud. Experience writing automated unit tests in a major programming language. Proven ability to write quality code by following best practices and guidelines. Strong problem-solving, multi-tasking and organizational skills. Good written and verbal communication skills. Demonstrable experience of working on a team that is geographically dispersed. Preferred Qualifications: Experience with managing the Hadoop platform and debugging Hive/Spark-related issues. Cloud certification (AWS, Azure or GCP). Knowledge of UNIX/LINUX shell scripting. About Our Company: Ameriprise India LLP has been providing client based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S. based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community.
So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law. Full-Time/Part-Time Full time Timings (2:00p-10:30p) India Business Unit AWMPO AWMP&S President's Office Job Family Group Technology
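As a rough sketch of the cloud-automation work described in this posting, the example below launches a CloudFormation stack with boto3; the stack name, template file, and parameter are hypothetical, and the template itself would normally live in version control.

```python
# Illustrative cloud-automation sketch: launching a CloudFormation stack with boto3.
# The stack name, template file, and parameter are hypothetical.
import boto3

cf = boto3.client("cloudformation", region_name="us-east-1")

with open("s3_bucket.yaml") as f:   # hypothetical template defining an S3 bucket
    template_body = f.read()

cf.create_stack(
    StackName="finance-data-bucket",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
)
waiter = cf.get_waiter("stack_create_complete")
waiter.wait(StackName="finance-data-bucket")
print("Stack created")
```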
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Noida
On-site
For 3–5 Years Experience: Please share your updated resume via anjali.sharma@genicminds.com. AI/ML Engineer / Data Scientist. 1. Core Technical Skills: Python (NumPy, Pandas, Scikit-learn, Matplotlib/Seaborn, beautifulsoup, selenium), SQL (MySQL/PostgreSQL), MongoDB, NLP, Computer Vision, RAG, Vector DBs, LLMs, Agentic AI, Neural Networks, RNN, CNN, LSTMs, web scraping. ML algorithms: regression, decision trees, random forests, XGBoost, SVM, KMeans, DBSCAN, etc. Model evaluation: cross-validation, precision/recall, ROC-AUC, confusion matrix. 2. AI/ML Frameworks: Scikit-learn, TensorFlow or PyTorch, Keras. Familiar with pre-trained models (e.g., BERT, ResNet, Stable Diffusion) and transfer learning. 3. Practical Experience: Building and deploying end-to-end ML pipelines. Experience with REST APIs or Flask/FastAPI-based model deployment. Version control with Git. 4. Data Handling: Data preprocessing, feature engineering. Experience with structured, semi-structured, and unstructured data. Data visualization (Power BI, Tableau, or Python libs). 5. Cloud & DevOps Basics: Working knowledge of AWS/GCP/Azure (S3, Lambda, SageMaker or equivalent). Docker. CI/CD understanding. 6. Soft Skills: Good documentation practices. Able to explain ML models to non-tech stakeholders. Cross-functional collaboration experience. Job Types: Full-time, Permanent Schedule: Day shift Work Location: In person
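For the Flask/FastAPI-based model deployment mentioned above, a minimal FastAPI serving sketch follows; the model file and feature schema are hypothetical, and a scikit-learn-compatible estimator is assumed.

```python
# Minimal FastAPI model-serving sketch; the model file and feature schema are
# hypothetical, and the loaded estimator is assumed to be scikit-learn compatible.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained estimator


class Features(BaseModel):
    values: list[float]  # flat feature vector for a single example


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload
```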
Posted 3 days ago
3.0 - 5.0 years
4 - 6 Lacs
Noida
On-site
For 3–5 Years Experience – Mid-Level AI/ML Engineer / Data Scientist 1. Core Technical Skills Python (NumPy, Pandas, Scikit-learn, Matplotlib/Seaborn, beautifulsoup, selenium) SQL(MySQL/PostgreSQL), Mongo DB, NLP, Computer Vision, RAG, Vector DBs, LLMs, Agentic AI, Neural Network, RNN, CNN, LSTMs, web scraping. ML algorithms: regression, decision trees, random forests, XGBoost, SVM, KMeans, DBSCAN, etc. Model evaluation: cross-validation, precision/recall, ROC-AUC, confusion matrix 2. AI/ML Frameworks Scikit-learn, TensorFlow or PyTorch Keras Familiar with pre-trained models (e.g., BERT, ResNet, Stable Diffusion) and transfer learning 3. Practical Experience Building and deploying end-to-end ML pipelines Experience with REST APIs or Flask/FastAPI-based model deployment Version control with Git 4. Data Handling Data preprocessing, feature engineering Experience with structured, semi-structured data and unstructured data Data visualization (Power BI, Tableau, or Python libs) 5. Cloud & DevOps Basics Working knowledge of AWS/GCP/Azure (S3, Lambda, SageMaker or equivalent) Docker CI/CD understanding 6. Soft Skills Good documentation practices Able to explain ML models to non-tech stakeholders Cross-functional collaboration experience Job Types: Full-time, Permanent Pay: ₹450,000.00 - ₹650,000.00 per year Work Location: In person
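As a small illustration of the model-evaluation skills listed above, the sketch below computes a confusion matrix, precision/recall, and ROC-AUC on synthetic data; all values are illustrative.

```python
# Minimal model-evaluation sketch on synthetic data showing the metrics named
# above (confusion matrix, precision/recall, ROC-AUC); all values are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = LogisticRegression(max_iter=500).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))           # precision / recall / F1
print("ROC-AUC:", round(roc_auc_score(y_test, y_prob), 3))
```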
Posted 3 days ago
3.0 - 8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
About the job What makes Techjays an inspiring place to work? At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI. We are looking for a detail-oriented and curious AI QA Engineer to join our growing QA team. You will play a critical role in ensuring the quality, safety, and reliability of our AI-powered products and features. If you're passionate about AI, testing complex systems, and driving high standards of quality—this role is for you! Primary Skills: QA Automation, Python, API Testing, AI/ML Testing, Prompt Evaluation, Adversarial Testing, Risk-Based Testing, LLM-as-a-Judge, Model Metrics Validation, Test Strategy. Secondary Skills: CI/CD Integration, Git, Cloud Platforms (AWS/GCP/Azure ML), MLFlow, Postman, Testim, Applitools, Collaboration Tools (Jira, Confluence), Synthetic Data Generation, AI Ethics & Bias Awareness. Experience: 3 - 8 Years Work Location: Coimbatore/ Remote Must-Have Skills: Foundational QA Skills Strong knowledge of test design, defect management, and QA lifecycle. Experience with risk-based testing and QA strategy. AI/ML Knowledge Basic understanding of machine learning workflows, training/inference cycles. Awareness of AI quality challenges: bias, fairness, transparency. Familiarity with AI evaluation metrics: accuracy, precision, recall, F1-score. Hands-on with prompt testing, synthetic data generation, and non-deterministic behavior validation. Technical Capabilities Python programming for test automation and data validation. Hands-on experience with API testing tools (Postman, Swagger, REST clients). Knowledge of test automation tools (e.g., PyTest, Playwright, Selenium). Familiarity with Git and version control best practices. Understanding of CI/CD pipelines and integration testing. Tooling (Preferred) Tools like Diffblue, Testim, Applitools, Kolena, Galileo, MLFlow, Weights & Biases. Basic understanding of cloud-based AI platforms (AWS SageMaker, Azure ML, GCP Vertex AI). Soft Skills Excellent analytical thinking and attention to detail. Strong collaboration and communication skills to work across cross-functional teams. Proactive, pull-mode work ethic: a self-starter who takes ownership. Passion for learning new technologies and contributing to AI quality practices. Roles & Responsibilities: Design, write, and execute test plans and test cases for AI/ML-based applications. Collaborate with data scientists, ML engineers, and developers to understand model behavior and expected outcomes. Perform functional, regression, and exploratory testing on AI components and APIs. Validate model outputs for accuracy, fairness, bias, and explainability.
Implement and run adversarial testing, edge cases, and out-of-distribution data scenarios. Conduct prompt testing and evaluation for LLM (Large Language Model)-based applications. Use LLM-as-a-Judge and AI tools to automate evaluation of AI responses where possible. Validate data pipelines, datasets, and ETL workflows. Track model performance metrics such as precision, recall, F1-score, and flag potential degradation. Document defects, inconsistencies, and raise risks proactively with the team. What we offer: Best in packages Paid holidays and flexible paid time away Casual dress code & flexible working environment Medical Insurance covering self & family up to 4 lakhs per person. Work in an engaging, fast-paced environment with ample opportunities for professional development. Diverse and multicultural work environment Be part of an innovation-driven culture that provides the support and resources needed to succeed.
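To show how the metric-threshold checks described above might look in practice, here is a hedged pytest sketch; the labels, predictions, and 0.80 cutoff are hypothetical.

```python
# Illustrative pytest check that a model's F1-score stays above a release
# threshold; the labels, predictions, and 0.80 cutoff are hypothetical.
from sklearn.metrics import f1_score

# In a real suite these would come from a fixture that runs the model
# against a frozen evaluation set.
Y_TRUE = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
Y_PRED = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]


def test_f1_meets_release_threshold():
    score = f1_score(Y_TRUE, Y_PRED)
    assert score >= 0.80, f"F1 dropped to {score:.2f}, below the agreed threshold"
```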
Posted 3 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Have hands-on experience with real-time ML models/projects. Coding in Python, machine learning, basic SQL, Git, MS Excel. Experience in using IDEs like Jupyter Notebook, Spyder, and PyCharm. Hands-on with AWS services like S3, EC2, SageMaker, and Step Functions. Engage with clients/consultants to understand requirements. Take ownership of delivering ML models with high-precision outcomes. Accountable for high-quality and timely completion of specified work deliverables. Write code that is well structured, well documented, and compute-efficient.
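As a minimal illustration of the AWS S3 hands-on work mentioned above, the boto3 sketch below uploads and downloads objects; the bucket and key names are hypothetical, and credentials are assumed to be configured.

```python
# Minimal boto3 sketch for the S3 interaction mentioned above; the bucket and
# key names are hypothetical and credentials are assumed to be configured.
import boto3

s3 = boto3.client("s3")

s3.upload_file("train.csv", "example-ml-bucket", "datasets/train.csv")        # push data
s3.download_file("example-ml-bucket", "models/model.tar.gz", "model.tar.gz")  # pull artifacts
```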
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Have hands-on experience with real-time ML models/projects. Coding in Python, machine learning, basic SQL, Git, MS Excel. Experience in using IDEs like Jupyter Notebook, Spyder, and PyCharm. Hands-on with AWS services like S3, EC2, SageMaker, and Step Functions. Engage with clients/consultants to understand requirements. Take ownership of delivering ML models with high-precision outcomes. Accountable for high-quality and timely completion of specified work deliverables. Write code that is well structured, well documented, and compute-efficient.
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Have hands-on experience with real-time ML models/projects. Coding in Python, machine learning, basic SQL, Git, MS Excel. Experience in using IDEs like Jupyter Notebook, Spyder, and PyCharm. Hands-on with AWS services like S3, EC2, SageMaker, and Step Functions. Engage with clients/consultants to understand requirements. Take ownership of delivering ML models with high-precision outcomes. Accountable for high-quality and timely completion of specified work deliverables. Write code that is well structured, well documented, and compute-efficient.
Posted 3 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Overview We are looking for a highly skilled Generative AI Engineer with 4 to 5 years of experience to design and deploy enterprise-grade GenAI systems. This role blends platform architecture, LLM integration, and operationalization, making it ideal for engineers with strong hands-on experience in large language models, RAG pipelines, and AI orchestration. Responsibilities Platform Leadership: Architect GenAI platforms powering copilots, document AI, multi-agent systems, and RAG pipelines. LLM Expertise: Build/fine-tune GPT, Claude, Gemini, LLaMA 2/3, Mistral; deep expertise in RLHF, transformer internals, and multi-modal integration. RAG Systems: Develop scalable pipelines with embeddings, hybrid retrieval, prompt orchestration, and vector DBs (Pinecone, FAISS, pgvector). Orchestration & Hosting: Lead LLM hosting, LangChain/LangGraph/AutoGen orchestration, AWS SageMaker/Bedrock integration. Responsible AI: Implement guardrails for PII redaction, moderation, lineage, and access aligned with enterprise security standards. LLMOps/MLOps: Deploy CI/CD pipelines, automate tuning/rollout, handle drift, rollback, and incidents with KPI dashboards. Cost Optimization: Reduce TCO via dynamic routing, GPU autoscaling, context compression, and chargeback tooling. Agentic AI: Build autonomous, critic-supervised agents using MCP, A2A, LGPL patterns. Evaluation: Use LangSmith, BLEU, ROUGE, BERTScore, HIL to track hallucination, toxicity, latency, and sustainability. Skills Required 4–5 years in AI/ML (2+ in GenAI) Strong Python, PySpark, Scala; APIs via FastAPI, GraphQL, gRPC Proficiency with MLflow, Kubeflow, Airflow, Prompt flow Experience with LLMs, vector DBs, prompt engineering, MLOps Solid foundation in applied mathematics & statistics Nice to Have Open-source contributions, AI publications Hands-on with cloud-native GenAI deployment Deep interest in ethical AI and AI safety 2 Days WFO Mandatory Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
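To make the RAG prompt-orchestration responsibilities above concrete, here is a minimal answer-generation sketch using the OpenAI Python SDK; the model name and retrieved chunks are placeholders, and OPENAI_API_KEY is assumed to be set in the environment.

```python
# Illustrative RAG answer step using the OpenAI Python SDK; the model name and
# retrieved chunks are placeholders, and OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

retrieved_chunks = [  # normally returned by the vector-DB query
    "Policy 12.3: Claims above $5,000 require a second reviewer.",
    "Policy 14.1: Reviewers must respond within two business days.",
]
question = "Who has to approve a $7,000 claim?"

prompt = (
    "Answer using only the context below. If the answer is not in the context, say so.\n\n"
    "Context:\n" + "\n".join(retrieved_chunks) + f"\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```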
Posted 3 days ago
10.0 years
15 - 20 Lacs
Jaipur, Rajasthan, India
On-site
We are seeking a cross-functional expert at the intersection of Product, Engineering, and Machine Learning to lead and build cutting-edge AI systems. This role combines the strategic vision of a Product Manager with the technical expertise of a Machine Learning Engineer and the innovation mindset of a Generative AI and LLM expert. You will help define, design, and deploy AI-powered features , train and fine-tune models (including LLMs), and architect intelligent AI agents that solve real-world problems at scale. 🎯 Key Responsibilities 🧩 Product Management: Define product vision, roadmap, and AI use cases aligned with business goals. Collaborate with cross-functional teams (engineering, research, design, business) to deliver AI-driven features. Translate ambiguous problem statements into clear, prioritized product requirements. ⚙️ AI/ML Engineering & Model Development Develop, fine-tune, and optimize ML models, including LLMs (GPT, Claude, Mistral, etc.). Build pipelines for data preprocessing, model training, evaluation, and deployment. Implement scalable ML solutions using frameworks like PyTorch , TensorFlow , Hugging Face , LangChain , etc. Contribute to R&D for cutting-edge models in GenAI (text, vision, code, multimodal). 🤖 AI Agents & LLM Tooling Design and implement autonomous or semi-autonomous AI Agents using tools like AutoGen , LangGraph , CrewAI , etc. Integrate external APIs, vector databases (e.g., Pinecone, Weaviate, ChromaDB), and retrieval-augmented generation (RAG). Continuously monitor, test, and improve LLM behavior, safety, and output quality. 📊 Data Science & Analytics Explore and analyze large datasets to generate insights and inform model development. Conduct A/B testing, model evaluation (e.g., F1, BLEU, perplexity), and error analysis. Work with structured, unstructured, and multimodal data (text, audio, image, etc.). 🧰 Preferred Tech Stack / Tools Languages: Python, SQL, optionally Rust or TypeScript Frameworks: PyTorch, Hugging Face Transformers, LangChain, Ray, FastAPI Platforms: AWS, Azure, GCP, Vertex AI, Sagemaker ML Ops: MLflow, Weights & Biases, DVC, Kubeflow Data: Pandas, NumPy, Spark, Airflow, Databricks Vector DBs: Pinecone, Weaviate, FAISS Model APIs: OpenAI, Anthropic, Google Gemini, Cohere, Mistral Tools: Git, Docker, Kubernetes, REST, GraphQL 🧑💼 Qualifications Bachelor’s, Master’s, or PhD in Computer Science, Data Science, Machine Learning, or a related field. 10+ years of experience in core ML, AI, or Data Science roles. Proven experience building and shipping AI/ML products. Deep understanding of LLM architectures, transformers, embeddings, prompt engineering, and evaluation. Strong product thinking and ability to work closely with both technical and non-technical stakeholders. Familiarity with GenAI safety, explainability, hallucination reduction, and prompt testing, computer vision 🌟 Bonus Skills Experience with autonomous agents and multi-agent orchestration. Open-source contributions to ML/AI projects. Prior Startup Or High-growth Tech Company Experience. Knowledge of reinforcement learning, diffusion models, or multimodal AI. 
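As a deliberately simplified illustration of the agent architectures mentioned in this posting, the framework-free sketch below routes a task to one of two stub tools; a real system would let an LLM choose the tool and arguments.

```python
# Deliberately simplified, framework-free agent loop to illustrate tool routing;
# real systems would let an LLM choose the tool and arguments instead of the
# keyword check used here.
from typing import Callable


def calculator(expression: str) -> str:
    return str(eval(expression, {"__builtins__": {}}))  # toy only; never eval untrusted input


def knowledge_lookup(query: str) -> str:
    return f"(stubbed answer for: {query})"


TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": calculator,
    "lookup": knowledge_lookup,
}


def run_agent(task: str) -> str:
    tool = "calculator" if any(ch.isdigit() for ch in task) else "lookup"
    return f"[{tool}] {TOOLS[tool](task)}"


print(run_agent("17 * 23"))
print(run_agent("What is retrieval-augmented generation?"))
```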
Posted 3 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Associate Project Manager – AI/ML , Java, Dot Net Experience: 8+ years (including 3+ years in project management) Notice Period: Immediate to 15 days Location: Coimbatore / Chennai 🔍 Job Summary We are seeking experienced Associate Project Managers with a strong foundation in AI/ML project delivery. The ideal candidate will have a proven track record of managing cross-functional teams, delivering complex software projects, and driving AI/ML initiatives from conception to deployment. This role requires a blend of project management expertise and technical understanding of machine learning systems, data pipelines, and model lifecycle management. ✅ Required Experience & Skills 📌 Project Management Minimum 3+ years of project management experience, including planning, tracking, and delivering software projects. Strong experience in Agile, Scrum, and SDLC/Waterfall methodologies. Proven ability to manage multiple projects and stakeholders across business and technical teams. Experience in budgeting, vendor negotiation, and resource planning. Proficiency in tools like MS Project, Excel, PowerPoint, ServiceNow, SmartSheet, and Lucidchart. 🤖 AI/ML Technical Exposure (Must-Have) Exposure to AI/ML project lifecycle: data collection, model development, training, validation, deployment, and monitoring. Understanding of ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and data platforms (e.g., Azure ML, AWS SageMaker, Databricks). Familiarity with MLOps practices, model versioning, and CI/CD pipelines for ML. Experience working with data scientists, ML engineers, and DevOps teams to deliver AI/ML solutions. Ability to translate business problems into AI/ML use cases and manage delivery timelines. 🧩 Leadership & Communication Strong leadership, decision-making, and organizational skills. Excellent communication and stakeholder management abilities. Ability to influence and gain buy-in from executive sponsors and cross-functional teams. Experience in building and maintaining relationships with business leaders and technical teams. 🎯 Roles & Responsibilities Lead AI/ML and software development projects from initiation through delivery. Collaborate with data science and engineering teams to define project scope, milestones, and deliverables. Develop and maintain detailed project plans aligned with business goals and technical feasibility. Monitor progress, manage risks, and ensure timely delivery of AI/ML models and software components. Coordinate cross-functional teams and ensure alignment between business, data, and engineering stakeholders. Track project metrics, ROI, and model performance post-deployment. Ensure compliance with data governance, security, and ethical AI standards. Drive continuous improvement in project execution and delivery frameworks. Stay updated on AI/ML trends and contribute to strategic planning for future initiatives.
Posted 3 days ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Over 12 years of extensive experience in AI/ML, with a proven track record of architecting and delivering enterprise-scale machine learning solutions across the Retail and FMCG domains. Demonstrated ability to align AI strategy with business outcomes in areas such as customer experience, dynamic pricing, demand forecasting, assortment planning, and inventory optimization.
Deep expertise in Large Language Models (LLMs) and Generative AI, including OpenAI's GPT family, ChatGPT, and emerging models like DeepSeek. Adept at designing domain-specific use cases such as intelligent product search, contextual recommendation engines, conversational commerce assistants, and automated customer engagement using Retrieval-Augmented Generation (RAG) pipelines.
Strong hands-on experience developing and deploying advanced ML models using modern data science stacks, including:
Python (advanced programming with a focus on clean, scalable codebases)
TensorFlow and Scikit-learn (for deep learning and classical ML models)
NumPy and Pandas (for data wrangling, transformation, and statistical analysis)
SQL (for structured data querying, feature engineering, and pipeline optimization)
Expert-level understanding of deep learning architectures (CNNs, RNNs, Transformers, BERT/GPT) and Natural Language Processing (NLP) techniques such as entity recognition, text summarization, semantic search, and topic modeling, with practical application in retail-focused scenarios like product catalog enrichment, personalized marketing, and voice/text-based customer interactions.
Strong data engineering proficiency, with experience designing robust data pipelines, building scalable ETL workflows, and integrating structured and unstructured data from ERP, CRM, POS, and social media platforms. Proven ability to operationalize ML workflows through automated retraining, version control, and model monitoring.
Significant experience deploying AI/ML solutions at scale on cloud platforms such as AWS (SageMaker, Bedrock), Google Cloud Platform (Vertex AI), and Azure Machine Learning. Skilled in designing cloud-native architectures for low-latency inference, high-volume batch scoring, and streaming analytics. Familiar with containerization (Docker), orchestration (Kubernetes), and CI/CD for ML (MLOps).
Ability to lead cross-functional teams, translating technical concepts into business impact and collaborating with marketing, supply chain, merchandising, and IT stakeholders. Comfortable engaging with executive leadership to influence digital and AI strategies at an enterprise level.
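For readers unfamiliar with the Retrieval-Augmented Generation (RAG) pattern referenced in the posting above, the sketch below shows the basic retrieve-then-prompt flow in Python. It is only illustrative: the embed() placeholder, the toy catalog, and the prompt format are assumptions made for the example, not part of the posting or of any specific vendor API.

```python
# Minimal, illustrative RAG sketch: embed documents, retrieve the closest ones,
# and assemble a grounded prompt. embed() stands in for whatever embedding model
# a real system would use; the catalog entries are invented sample data.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hashed bag-of-words, just to keep the sketch runnable.
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    q = embed(query)
    scored = sorted(docs, key=lambda d: float(np.dot(embed(d), q)), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model's answer in the retrieved product context.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

catalog = [
    "SKU 101: organic green tea, 100g, moderate caffeine content",
    "SKU 202: decaf black tea, 250g, best seller in winter",
    "SKU 303: cold brew coffee concentrate, 1L",
]
print(build_prompt("Which teas are low in caffeine?", retrieve("low caffeine tea", catalog)))
```

In a production pipeline the placeholder embedding and ranking would typically be replaced by a vector store, and the assembled prompt would be sent to an LLM endpoint rather than printed.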
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Remote (India), Bengaluru, Chennai
JobType: full-time
We are seeking a skilled ML (Data) Platform Engineer to help scale a next-generation AutoML platform. This role sits at the critical intersection of machine learning, data infrastructure, and platform engineering. You will work on systems central to feature engineering, data management, and time series forecasting at scale. This is not your typical ETL role: the position involves building powerful data platforms that support automated model development, experimentation workflows, and high-reliability data lineage systems. If you're passionate about building scalable systems for both ML and analytics use cases, this is a high-impact opportunity.
Requirements
Key Responsibilities:
Design, build, and scale robust data management systems that power AutoML and forecasting platforms.
Own and enhance feature stores and associated engineering workflows.
Establish and enforce strong data SLAs and build lineage systems for time series pipelines.
Collaborate closely with ML engineers, infrastructure, and product teams to ensure platform scalability and usability.
Drive key architectural decisions related to data versioning, distribution, and system composability.
Contribute to designing reusable platforms that address diverse supply chain challenges.
Must-Have Qualifications:
Strong experience with large-scale and distributed data systems.
Hands-on expertise in ETL workflows, data lineage, and reliability tooling.
Solid understanding of ML feature engineering and experience building or maintaining feature stores.
Exposure to time series forecasting systems or AutoML platforms.
Strong analytical and problem-solving skills, with the ability to deconstruct complex platform requirements.
Good-to-Have Qualifications:
Familiarity with modern data infrastructure tools such as Apache Iceberg, ClickHouse, or data lakes.
Product-oriented mindset with an ability to anticipate user needs and build intuitive systems.
Experience building composable, extensible platform components.
Previous exposure to AutoML frameworks such as SageMaker, Vertex AI, or equivalent internal ML platforms.
Skills: MLOps, Data Engineering, Big Data, ETL, Feature Store, Feature Engineering, AutoML, Forecasting Pipelines, Data Management
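To make the feature-engineering responsibility above concrete, here is a minimal sketch of the kind of time series features (per-SKU lags and rolling means) a forecasting-oriented feature store might compute and serve. The column names (sku, date, units_sold) and the pandas-based approach are illustrative assumptions for the example, not requirements from the posting.

```python
# Minimal, illustrative time series feature engineering sketch: lag and
# rolling-window features per SKU, of the kind a forecasting feature store serves.
import pandas as pd

def build_features(sales: pd.DataFrame) -> pd.DataFrame:
    """Expects columns: sku, date, units_sold. Returns a feature frame keyed by (sku, date)."""
    sales = sales.sort_values(["sku", "date"]).copy()
    grouped = sales.groupby("sku")["units_sold"]
    sales["lag_1"] = grouped.shift(1)   # demand one period ago
    sales["lag_7"] = grouped.shift(7)   # demand one week ago
    # Rolling mean of the previous 7 periods, excluding the current one to avoid leakage.
    sales["rolling_mean_7"] = grouped.transform(lambda s: s.shift(1).rolling(7).mean())
    return sales

if __name__ == "__main__":
    demo = pd.DataFrame({
        "sku": ["A"] * 10,
        "date": pd.date_range("2025-01-01", periods=10, freq="D"),
        "units_sold": [5, 7, 6, 8, 9, 4, 6, 7, 8, 10],
    })
    print(build_features(demo).tail(3))
```

At platform scale, transformations like these would usually run in a distributed engine and be registered in the feature store with versioning and lineage metadata, rather than computed ad hoc as in this toy example.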
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Remote (India), Bengaluru, Chennai
JobType: full-time
We are seeking a skilled ML (Data) Platform Engineer to help scale a next-generation AutoML platform. This role sits at the critical intersection of machine learning, data infrastructure, and platform engineering. You will work on systems central to feature engineering, data management, and time series forecasting at scale. This is not your typical ETL role: the position involves building powerful data platforms that support automated model development, experimentation workflows, and high-reliability data lineage systems. If you're passionate about building scalable systems for both ML and analytics use cases, this is a high-impact opportunity.
Requirements
Key Responsibilities:
Design, build, and scale robust data management systems that power AutoML and forecasting platforms.
Own and enhance feature stores and associated engineering workflows.
Establish and enforce strong data SLAs and build lineage systems for time series pipelines.
Collaborate closely with ML engineers, infrastructure, and product teams to ensure platform scalability and usability.
Drive key architectural decisions related to data versioning, distribution, and system composability.
Contribute to designing reusable platforms that address diverse supply chain challenges.
Must-Have Qualifications:
Strong experience with large-scale and distributed data systems.
Hands-on expertise in ETL workflows, data lineage, and reliability tooling.
Solid understanding of ML feature engineering and experience building or maintaining feature stores.
Exposure to time series forecasting systems or AutoML platforms.
Strong analytical and problem-solving skills, with the ability to deconstruct complex platform requirements.
Good-to-Have Qualifications:
Familiarity with modern data infrastructure tools such as Apache Iceberg, ClickHouse, or data lakes.
Product-oriented mindset with an ability to anticipate user needs and build intuitive systems.
Experience building composable, extensible platform components.
Previous exposure to AutoML frameworks such as SageMaker, Vertex AI, or equivalent internal ML platforms.
Skills: MLOps, Data Engineering, Big Data, ETL, Feature Store, Feature Engineering, AutoML, Forecasting Pipelines, Data Management
Posted 3 days ago
0.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore North, India | Posted on 07/29/2025
Job Information
Job Type: Full time
Date Opened: 07/29/2025
Project Code: PRJ000
Industry: IT Services
Work Experience: 5-10 years
City: Bangalore North
State/Province: Karnataka
Country: India
Zip/Postal Code: 560001
Posted 3 days ago