
1845 MLflow Jobs - Page 23

JobPe aggregates results for easy application access, but you apply directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

At PwC, our data and analytics team focuses on utilizing data to drive insights and support informed business decisions. We leverage advanced analytics techniques to assist clients in optimizing their operations and achieving strategic goals. As a data analysis professional at PwC, your role will involve utilizing advanced analytical methods to extract insights from large datasets, enabling data-driven decision-making. Your expertise in data manipulation, visualization, and statistical modeling will be pivotal in helping clients solve complex business challenges.

PwC US - Acceleration Center is currently seeking a highly skilled MLOps/LLMOps Engineer to play a critical role in deploying, scaling, and maintaining Generative AI models. This position requires close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure the seamless integration and operation of GenAI models within production environments at PwC and for our clients. The ideal candidate will possess a strong background in MLOps practices and a keen interest in Generative AI technologies. With a preference for candidates with 4+ years of hands-on experience, core qualifications for this role include:
- 3+ years of experience developing and deploying AI models in production environments, alongside 1 year of working on proofs of concept and prototypes.
- Proficiency in software development, including building and maintaining scalable, distributed systems.
- Strong programming skills in languages such as Python and familiarity with ML frameworks like TensorFlow and PyTorch.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Understanding of cloud platforms such as AWS, GCP, and Azure, including their ML/AI service offerings.
- Experience with continuous integration and delivery tools like Jenkins, GitLab CI/CD, or CircleCI.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
Key Responsibilities:
- Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability.
- Design and manage CI/CD pipelines specialized for ML workflows, including deploying generative models like GANs, VAEs, and Transformers.
- Monitor and optimize AI model performance in production, utilizing tools for continuous validation, retraining, and A/B testing.
- Collaborate with data scientists and ML researchers to translate model requirements into scalable operational frameworks.
- Implement best practices for version control, containerization, and orchestration using industry-standard tools.
- Ensure compliance with data privacy regulations and company policies during model deployment.
- Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance.
- Stay updated with the latest MLOps and Generative AI developments to enhance AI capabilities.

Project Delivery:
- Design and implement scalable deployment pipelines for ML/GenAI models to transition them from development to production environments.
- Oversee the setup of cloud infrastructure and automated data ingestion pipelines to meet GenAI workload requirements.
- Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures.

Client Engagement:
- Collaborate with clients to understand their business needs and design ML/LLMOps solutions.
- Present technical approaches and results to technical and non-technical stakeholders.
- Conduct training sessions and workshops for client teams.
- Create comprehensive documentation and user guides for clients.

Innovation and Knowledge Sharing:
- Stay updated with the latest trends in MLOps/LLMOps and Generative AI.
- Develop internal tools and frameworks to accelerate model development and deployment.
- Mentor junior team members and contribute to technical publications.
Professional and Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA.
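The continuous-validation and CI/CD responsibilities above can be illustrated with a tiny promotion gate: the kind of check a deployment pipeline might run before a candidate model replaces the production one. This is a hypothetical sketch, not PwC's actual tooling; the metric names and thresholds are invented.

```python
# Illustrative promotion gate a CI/CD workflow might run before swapping
# the production model. All names and thresholds here are hypothetical.

def should_promote(candidate_metrics, production_metrics,
                   min_accuracy=0.80, max_regression=0.01):
    """Return True if the candidate model may replace production."""
    if candidate_metrics["accuracy"] < min_accuracy:
        return False  # absolute quality floor
    drop = production_metrics["accuracy"] - candidate_metrics["accuracy"]
    return drop <= max_regression  # tolerate only a tiny regression

print(should_promote({"accuracy": 0.86}, {"accuracy": 0.85}))  # candidate wins
```

In practice the same gate would sit behind a model registry (e.g., MLflow's stage transitions) and cover several metrics, but the shape of the check is the same.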

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Python Engineer with 2-4 years of experience, you will be responsible for building, deploying, and scaling Python applications along with AI/ML solutions. Your strong programming skills will be put to use in developing intelligent solutions and collaborating closely with clients and software engineers to implement machine learning models.

You should be an expert in Python, with advanced knowledge of Flask/FastAPI and server programming to implement complex business logic. Understanding fundamental design principles behind scalable applications is crucial. Independently designing, developing, and deploying machine learning models and AI algorithms tailored to business requirements will be a key aspect of your role.

Your responsibilities will include solving complex technical challenges through innovative AI/ML solutions, building and maintaining integrations (e.g., APIs) for machine learning models, conducting data preprocessing and feature engineering, and optimizing datasets for model training and inference. Monitoring and continuously improving model performance in production environments, with a focus on scalability and efficiency, will also be part of your tasks. Managing model deployment, monitoring, and scaling using tools like Docker, Kubernetes, and cloud services will be essential. You will need to develop integration strategies for smooth communication between APIs and troubleshoot integration issues. Creating and maintaining comprehensive documentation for AI/ML projects will be necessary, along with staying updated on emerging trends and technologies in AI/ML.
Key Skills Required:
- Proficiency in Python, R, or similar languages commonly used in ML/AI development
- Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML libraries
- Strong knowledge of data preprocessing, data cleaning, and feature engineering
- Familiarity with model deployment using Docker, Kubernetes, or cloud platforms
- Understanding of statistical methods, probability, and data-driven decision-making processes
- Proficiency in querying databases for ML projects
- Experience with ML lifecycle management tools like MLflow and Kubeflow
- Familiarity with NLP frameworks for language-based AI solutions
- Exposure to computer vision techniques
- Experience with managed ML services like AWS SageMaker, Azure Machine Learning, or Google Cloud AI Platform
- Familiarity with agile workflows and DevOps or CI/CD pipelines

Good to Have Skills:
- Exposure to big data processing tools like Spark and Hadoop
- Experience with agile development methodologies

The job location for this role is Ahmedabad/Pune, and the required educational qualifications include a UG degree in BE/BTech or a PG degree in ME/M-Tech/MCA/MSC-IT/Data Science, AI, Machine Learning, or a related field.
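As a small illustration of the data preprocessing and feature engineering duties listed above, here is a dependency-free sketch of two routine steps: min-max scaling a numeric feature and making a deterministic train/test split. Real projects would reach for scikit-learn or pandas; all names here are illustrative.

```python
# Minimal, dependency-free sketch of two common preprocessing steps.
import random

def min_max_scale(values):
    """Scale a list of numbers into [0, 1]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # avoid division by zero on constant columns
    return [(v - lo) / span for v in values]

def train_test_split(rows, test_ratio=0.25, seed=42):
    """Shuffle deterministically, then split rows into train/test sets."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

print(min_max_scale([10, 20, 30, 40]))  # endpoints map to 0.0 and 1.0
```

The fixed seed keeps the split reproducible between training runs, which matters once model performance is being compared across experiments.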

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Kochi, Kerala

On-site

As a highly skilled Senior Machine Learning Engineer, you will leverage your expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. Your responsibilities will include developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments.

You will be responsible for designing, training, fine-tuning, and optimizing deep learning models (CNNs, RNNs, Transformers) for various applications such as NLP, computer vision, or multimodal tasks. Additionally, you will fine-tune and adapt LLMs for domain-specific tasks like text generation, summarization, and semantic similarity. Experimenting with RLHF (Reinforcement Learning from Human Feedback) and alignment techniques will also be part of your role.

In the realm of Deployment & Scalability (MLOps/LLMOps), you will build and maintain end-to-end ML pipelines for training, evaluation, and deployment. Deploying LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT is crucial. You will optimize models for low-latency, high-throughput inference and implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow.

Monitoring & Optimization will involve setting up logging, monitoring, and alerting for model performance metrics such as drift, latency, and accuracy. Collaborating with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models will also be part of your responsibilities.

The ideal candidate will possess 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs. Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks is essential.
Experience with model deployment tools like Docker, Kubernetes, and FastAPI, along with knowledge of MLOps/LLMOps best practices and familiarity with cloud platforms (AWS, GCP, Azure), are required qualifications. Preferred qualifications include contributions to open-source LLM projects, showcasing your commitment to advancing the field of machine learning.
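The monitoring responsibilities above (logging, alerting on drift, latency, and accuracy) boil down to computing summary statistics over production telemetry. The sketch below uses a simplified nearest-rank percentile and an invented latency budget to show the shape of such a check; it is illustrative, not a production monitoring stack.

```python
# Hypothetical latency check for a model-serving endpoint: compute p95 over
# recent request timings and raise an alert flag when it exceeds a budget.

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list (pct in 0..100)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def latency_alert(latencies_ms, p95_budget_ms=250):
    """Summarize tail latency and flag budget violations."""
    p95 = percentile(latencies_ms, 95)
    return {"p95_ms": p95, "alert": p95 > p95_budget_ms}

print(latency_alert([120, 130, 140, 150, 400]))  # one slow outlier trips p95
```

Real pipelines would emit this kind of metric to a time-series store (Prometheus, CloudWatch) and alert via its rules engine; the percentile-over-a-window logic is the common core.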

Posted 2 weeks ago

Apply

20.0 years

0 Lacs

Sholinganallur, Tamil Nadu, India

On-site

About Us
For over 20 years, Smart Data Solutions has been partnering with leading payer organizations to provide automation and technology solutions enabling data standardization and workflow automation. The company brings a comprehensive set of turn-key services to handle all claims and claims-related information regardless of format (paper, fax, electronic), digitizing and normalizing it for seamless use by payer clients. Solutions include intelligent data capture, conversion and digitization, mailroom management, comprehensive clearinghouse services, and proprietary workflow offerings.

SDS is headquartered just outside St. Paul, MN and leverages dedicated onshore and offshore resources as part of its service delivery model. The company counts over 420 healthcare organizations as clients, including multiple Blue Cross Blue Shield state plans, large regional health plans, and leading independent TPAs, handling over 500 million transactions of varying types annually with a 98%+ customer retention rate. SDS has also invested meaningfully in automation and machine learning capabilities across its tech-enabled processes to drive scalability and greater internal operating efficiency while also improving client results. SDS recently partnered with a leading growth-oriented investment firm, Parthenon Capital, to further accelerate expansion and product innovation.

Location: 6th Floor, Block 4A, Millenia Business Park, Phase II MGR Salai, Kandanchavadi, Perungudi, Chennai 600096, India.

Smart Data Solutions is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, disability, veteran status, age, marital status, pregnancy, genetic information, or other legally protected status. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily.
The requirements listed above are representative of the knowledge, skill, and/or ability required. Reasonable accommodation may be made to enable individuals with disabilities to perform essential job functions. Due to access to Protected Healthcare Information, employees in this role must be free of felony convictions on a background check report.

Responsibilities
Duties and responsibilities include but are not limited to:
- Design and build ML pipelines for OCR extraction, document image processing, and text classification tasks.
- Fine-tune or prompt large language models (LLMs) (e.g., Qwen, GPT, LLaMA, Mistral) for domain-specific use cases.
- Develop systems to extract structured data from scanned or unstructured documents (PDFs, images, TIFs).
- Integrate OCR engines (Tesseract, EasyOCR, AWS Textract, etc.) and improve their accuracy via pre-/post-processing.
- Handle natural language processing (NLP) tasks such as named entity recognition (NER), summarization, classification, and semantic similarity.
- Collaborate with product managers, data engineers, and backend teams to productionize ML models.
- Evaluate models using metrics like precision, recall, F1-score, and confusion matrix, and improve model robustness and generalizability.
- Maintain proper versioning, reproducibility, and monitoring of ML models in production.

The duties set forth above are essential job functions for the role. Reasonable accommodations may be made to enable individuals with disabilities to perform essential job functions.

Skills and Qualifications
- 4–5 years of experience in machine learning, NLP, or AI roles.
- Proficiency with Python and ML libraries such as PyTorch, TensorFlow, scikit-learn, and Hugging Face Transformers.
- Experience with LLMs (open-source or proprietary), including fine-tuning or prompt engineering.
- Solid experience in OCR tools (Tesseract, PaddleOCR, etc.) and document parsing.
- Strong background in text classification, tokenization, and vectorization techniques (TF-IDF, embeddings, etc.).
- Knowledge of handling unstructured data (text, scanned images, forms).
- Familiarity with MLOps tools: MLflow, Docker, Git, and model serving frameworks.
- Ability to write clean, modular, and production-ready code.
- Experience working with medical, legal, or financial document processing.
- Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search.
- Understanding of document layout analysis (e.g., LayoutLM, Donut, DocTR).
- Familiarity with cloud platforms (AWS, GCP, Azure) and deploying models at scale.
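Since the skills list calls out TF-IDF vectorization, here is a compact, dependency-free illustration of the idea on toy claim-processing tokens; production code would use scikit-learn's TfidfVectorizer, and the smoothing conventions here are deliberately simplified.

```python
# Toy TF-IDF: terms that appear in many documents get down-weighted.
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["claim", "form", "ocr"], ["claim", "denied"], ["ocr", "engine"]]
w = tf_idf(docs)
# "claim" appears in 2 of 3 docs, so it scores lower than doc-specific "form"
print(w[0]["form"] > w[0]["claim"])  # True
```

The resulting sparse weight vectors are what classical text classifiers (logistic regression, linear SVM) consume; modern pipelines swap them for learned embeddings but the vectorize-then-classify structure is unchanged.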

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities
- Lead 4-8 data scientists to deliver ML capabilities within a Databricks-Azure platform
- Guide delivery of complex ML systems that align with product and platform goals
- Balance scientific rigor with practical engineering
- Define model lifecycle, tooling, and architectural direction

Requirements
Skills & Experience:
- Advanced ML: Supervised/unsupervised modeling, time-series, interpretability, MLflow, Spark, TensorFlow/PyTorch
- Engineering: Feature pipelines, model serving, CI/CD, production deployment
- Leadership: Mentorship, architectural alignment across subsystems, experimentation strategy
- Communication: Translate ML results into business impact

Benefits
What you get:
- Best-in-class salary: We hire only the best, and we pay accordingly
- Proximity Talks: Meet other designers, engineers, and product geeks, and learn from experts in the field
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary.

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Here's a quick guide to getting to know us better:
- Watch our CEO, Hardik Jagda, tell you all about Proximity
- Read about Proximity's values and meet some of our Proxonauts here
- Explore our website, blog, and the design wing — Studio Proximity
- Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Responsibilities
- Act as both a hands-on tech lead and product manager
- Deliver data/ML platforms and pipelines in a Databricks-Azure environment
- Lead a small delivery team and coordinate with enabling teams for product, architecture, and data science
- Translate business needs into product strategy and technical delivery with a platform-first mindset

Requirements
Skills & Experience:
- Technical: Python, SQL, Databricks, Delta Lake, MLflow, Terraform, medallion architecture, data mesh/fabric, Azure
- Product: Agile delivery, discovery cycles, outcome-focused planning, trunk-based development
- Collaboration: Able to coach engineers, work with cross-functional teams, and drive self-service platforms
- Communication: Clear in articulating decisions, roadmap, and priorities

Benefits
What you get:
- Best-in-class salary: We hire only the best, and we pay accordingly
- Proximity Talks: Meet other designers, engineers, and product geeks, and learn from experts in the field
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day

This is a contract role based in Abu Dhabi. If relocation from India is required, the company will cover travel and accommodation expenses in addition to your salary.

About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media and Entertainment companies in the world! We're headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.

We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company's success will be huge. You'll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Here's a quick guide to getting to know us better:
- Watch our CEO, Hardik Jagda, tell you all about Proximity
- Read about Proximity's values and meet some of our Proxonauts here
- Explore our website, blog, and the design wing — Studio Proximity
- Get behind-the-scenes with us on Instagram! Follow @ProxWrks and @H.Jagda

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Vyva Consulting Inc. is a trusted partner in Sales Performance Management (SPM) and Incentive Compensation Management (ICM), specializing in delivering top-tier software consulting solutions. We help organizations optimize their sales operations, boost revenue, and maximize value. Our seasoned experts work with leading products such as Xactly, Varicent, and SPIFF, offering comprehensive implementation and post-implementation services. We focus on enhancing sales compensation strategies to drive business success.

Role Description
This is a full-time, on-site role for an Artificial Intelligence Intern, located in Hyderabad. We are seeking a motivated AI Engineer Intern to join our team and contribute to cutting-edge AI/ML projects. This internship offers hands-on experience with large language models, generative AI, and modern AI frameworks while working on real-world applications that impact our business objectives.

What You'll Do
Core Responsibilities:
- LLM Integration & Development: Build and prototype LLM-powered features using frameworks like LangChain, OpenAI SDK, or similar tools for content automation and intelligent workflows
- RAG System Implementation: Design and optimize Retrieval-Augmented Generation systems including document ingestion, chunking strategies, embedding generation, and vector database integration
- Data Pipeline Development: Create robust data pipelines for AI/ML workflows, including data collection, cleaning, preprocessing, and annotation of large datasets
- Model Experimentation: Conduct experiments to evaluate, fine-tune, and optimize AI models for accuracy, performance, and scalability across different use cases
- Vector Database Operations: Implement similarity search solutions using vector databases (FAISS, Pinecone, Chroma) for intelligent Q&A, content recommendation, and context-aware responses
- Prompt Engineering: Experiment with advanced prompt engineering techniques to optimize outputs from generative models and ensure content quality
- Research & Innovation: Stay current with the latest AI/ML advancements, research new architectures and techniques, and build proof-of-concept implementations

Technical Implementation:
- Deploy AI microservices and agents using containerization (Docker) and orchestration tools
- Collaborate with cross-functional teams (product, design, engineering) to align AI features with business requirements
- Create comprehensive documentation including system diagrams, API specifications, and implementation guides
- Analyze model performance metrics, document findings, and propose data-driven improvements
- Participate in code reviews and contribute to best practices for AI/ML development

Required Qualifications
Education & Experience:
- Currently pursuing or recently completed a Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field
- 6+ months of hands-on experience with AI/ML projects (academic, personal, or professional)
- Demonstrable portfolio of AI/ML projects via GitHub repositories, Jupyter notebooks, or deployed applications

Technical Skills:
- Programming: Strong Python proficiency with experience in AI/ML libraries (NumPy, Pandas, Scikit-learn)
- LLM Experience: Practical experience with large language models (OpenAI GPT, Claude, open-source models) including API integration and fine-tuning
- AI Frameworks: Familiarity with at least one of LangChain, OpenAI Agents SDK, AutoGen, or similar agentic AI frameworks
- RAG Architecture: Understanding of RAG system components and prior implementation experience (even in academic projects)
- Vector Databases: Experience with vector similarity search using FAISS, Chroma, Pinecone, or similar tools
- Deep Learning: Familiarity with PyTorch or TensorFlow for model development and fine-tuning

Screening Criteria
To effectively evaluate candidates, we will assess:
- Portfolio Quality: Live demos or well-documented projects showing AI/ML implementation
- Technical Depth: Ability to explain RAG architecture, vector embeddings, and LLM fine-tuning concepts
- Problem-Solving: Approach to handling real-world AI challenges like hallucination, context management, and model evaluation
- Code Quality: Clean, documented Python code with proper version control practices

Preferred Qualifications
Additional Technical Skills:
- Full-Stack Development: Experience building web applications with AI/ML backends
- Data Analytics: Proficiency in data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn), and statistical analysis
- MLOps/DevOps: Experience with Docker, Kubernetes, MLflow, or CI/CD pipelines for ML models
- Cloud Platforms: Familiarity with AWS, Azure, or GCP AI/ML services
- Databases: Experience with both SQL (PostgreSQL) and NoSQL (Elasticsearch, MongoDB) databases

Soft Skills & Attributes:
- Analytical Mindset: Strong problem-solving skills with attention to detail in model outputs and data quality
- Communication: Ability to explain complex AI concepts clearly to both technical and non-technical stakeholders
- Collaboration: Proven ability to work effectively in cross-functional teams
- Learning Agility: Demonstrated ability to quickly adapt to new technologies and frameworks
- Initiative: Self-motivated with the ability to work independently and drive projects forward

What We Offer
Professional Growth:
- Mentorship: Work directly with senior AI engineers and receive structured guidance
- Real Impact: Contribute to production AI systems used by real customers
- Learning Opportunities: Access to the latest AI tools, frameworks, and industry conferences
- Full-Time Conversion: Potential for a full-time offer based on performance and business needs

Work Environment:
- Employee-First Culture: Flexible work arrangements with an emphasis on results
- Innovation Focus: Opportunity to work on cutting-edge AI applications
- Collaborative Team: Supportive environment that values diverse perspectives and ideas
- Competitive Compensation: Market-competitive internship stipend

Application Requirements
Portfolio Submission
Please include the following in your application:
- GitHub Repository: Link to your best AI/ML projects with detailed README files
- Project Demo: Video walkthrough or live demo of your most impressive AI application
- Technical Blog/Documentation: Any technical writing about AI/ML concepts or implementations
- Resume: Highlighting relevant coursework, projects, and any AI/ML experience

Technical Assessment
Qualified candidates will complete a technical assessment covering:
- Python programming and AI/ML libraries
- LLM integration and prompt engineering
- RAG system design and implementation
- Vector database operations and similarity search
- Model evaluation and optimization techniques

Ready to shape the future of AI? Apply now and join our team of innovative engineers building next-generation AI solutions.
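The RAG retrieval step this role describes (embed a query, then rank stored chunks by vector similarity) can be shown with toy numbers. The 3-dimensional vectors below are made up purely to show the mechanics; a real system would use learned embeddings and a vector database such as FAISS, Pinecone, or Chroma.

```python
# Toy RAG retrieval: rank document chunks by cosine similarity to a query.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical chunk embeddings (values invented for illustration).
chunks = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.3],
    "api rate limits": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do refunds work?"

best = max(chunks, key=lambda name: cosine(query, chunks[name]))
print(best)  # refund policy
```

The retrieved chunk would then be stuffed into the LLM prompt as context; chunking strategy and embedding quality, both named in the responsibilities above, determine whether this top-1 lookup actually surfaces the right passage.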

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Company Name: Bilight Solutions
Location: Chennai, Tamil Nadu (Nungambakkam)
Job Type: Full-time
Experience Level: Mid-level (2-5 years)

About the Role
We are looking for a skilled and motivated Data Scientist to join our team in Chennai. This is a work-from-office position located in the Nungambakkam area. You will be responsible for leveraging data to solve complex business challenges, driving strategic decisions, and creating innovative solutions. You will work on the full lifecycle of data science projects, from problem formulation and data collection to model development, deployment, and monitoring. We are looking for candidates who can join immediately and have a strong ability to work collaboratively and lead.

Key Responsibilities
- Team Leadership: Take on a leadership role within the data science team, guiding junior members and ensuring project success.
- Problem Solving: Collaborate with stakeholders to understand business problems and formulate data-driven solutions.
- Data Analysis: Collect, clean, and analyze large and complex datasets to identify trends, patterns, and insights.
- Model Development: Design, develop, and implement statistical models, machine learning algorithms, and predictive systems.
- ETL & Data Pipelines: Be well-versed in the ETL process to build and maintain data pipelines for efficient data flow.
- Data Visualization: Utilize Tableau Prep, Desktop, and Server to create compelling data visualizations and interactive dashboards.
- Communication: Present findings and recommendations to both technical and non-technical audiences, telling a clear and compelling story with data.
- Collaboration: Work closely with data engineers, software developers, product managers, and business analysts.
- Continuous Learning: Stay up-to-date with the latest advancements in data science, machine learning, and AI, and apply them to business problems.

Required Qualifications
- Education: Bachelor's, Master's, or Ph.D. in a quantitative field such as Computer Science, Statistics, Mathematics, Physics, Engineering, or a related discipline.
- Experience: 2-5 years of professional experience as a Data Scientist or in a similar role, with a proven track record of team leadership.
- Technical Skills: Strong proficiency in SQL for data querying and manipulation. Strong proficiency in Python and its data science libraries (e.g., Pandas, NumPy, Scikit-learn). Proven experience with the ETL process and building data pipelines.
- Data Visualization: In-depth knowledge of and hands-on experience with the Tableau suite, including Tableau Prep, Desktop, and Server.
- Machine Learning: Deep knowledge of machine learning concepts and algorithms (e.g., regression, classification, clustering, time series analysis).
- Statistical Analysis: Strong foundation in statistical concepts, including hypothesis testing, experimental design, and predictive modeling.

Preferred Skills & Experience:
- Experience with big data technologies (e.g., Spark, Hadoop).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data science services.
- Knowledge of MLOps tools and practices (e.g., Docker, Kubernetes, MLflow, Airflow).
- Experience with Large Language Models (LLM) and Natural Language Processing (NLP).
- Proven experience in a team leadership or mentorship role.

Job Types: Full-time, Permanent
Pay: From ₹450,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
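As a nod to the regression and statistical modeling skills this posting asks for, here is a dependency-free sketch of ordinary least squares for a single feature, using the closed-form formulas. Real work would use scikit-learn or statsmodels; the toy data is invented.

```python
# Simple linear regression y = a + b*x via the closed-form OLS estimates.

def fit_line(xs, ys):
    """Return intercept a and slope b minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)  # covariance over variance
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lies exactly on y = 1 + 2x
print(a, b)  # 1.0 2.0
```

The same slope/intercept fall out of `LinearRegression().fit(...)` in scikit-learn; writing the formulas once makes the library's output easier to sanity-check in interviews and in practice.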

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Company Overview
Founded in 2010, we've been recognized as a "Best Place to Work" and have offices in the US (Boulder), UK (London), and India (Chennai). However, we are a remote-first company with employees across the globe! Today, we are a leading B2B marketing provider that offers two distinct solutions:
- Integrate: a lead management and data governance SaaS platform for marketing operations and demand marketers. The Integrate platform makes every lead clean, compliant, and actionable, freeing enterprise B2B marketers from bad data and operational headaches so they can focus on what matters: generating revenue.
- Pipeline360: media solutions that combine three powerful demand generation tools: targeted display, content syndication, and a comprehensive marketplace model. Pipeline360 ensures that marketers achieve 100% compliant and marketable leads by effectively engaging with audiences much earlier in the buying cycle, connecting with buyers at every stage of the process, and optimizing programs to drive performance.

Our Mission
Integrate exists to make your lead data marketable so you can drive pipeline. Pipeline360 exists to make the unpredictable predictable.

Why us? We are an organization of integrity, talent, passion, and vision with a long track record of growth, customer success, and a commitment to driving leading innovation and delivering world-class customer experience.

The Role: Integrate's data is treated as a critical corporate asset and is seen as a competitive advantage in our business. As a Lead Data Engineer you will be working in one of the world's largest cloud-based data lakes. You should be skilled in the architecture of data warehouse solutions for the enterprise using multiple platforms (EMR, RDBMS, columnar, cloud, Snowflake). You should have extensive experience in the design, creation, management, and business use of extremely large datasets.
You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. Above all you should be passionate about working with huge data sets and someone who loves to bring datasets together to answer business questions and drive change. Responsibilities: Design and develop workflows, programs, and ETL to support data ingestion, curation, and provisioning of fragmented data for Data Analytics, Product Analytics and AI. Work closely with Data Scientists, Software Engineers, Product Managers, Product Analysts and other key stakeholders to gather and define requirements for Integrate's data needs. Use Scala, SQL Snowflake, and BI tools to deliver data to customers. Understand MongoDB/PostgreSQL and transactional data workflows. Design data models and build data architecture that enables reporting, analytics, advanced AI/ML and Generative AI solutions. Develop an understanding of the data and build business acumen. Develop and maintain Datawarehouse and Datamart in the cloud using Snowflake. Create reporting dashboards for internal and client stakeholders. Understand the business use cases and customer value behind large sets of data and develop meaningful analytic solutions. Basic Qualifications: Advanced degree in Statistics, Computer Science or related technical/scientific field. 9+ years experience in a Data Engineer development role. Advanced knowledge of SQL, Python, and data processing workflow. Nice to have Spark/Scala, MLFlow, and AWS experience. Strong experience and advanced technical skills writing APIs. Extensive knowledge of Data Warehousing, ETL and BI architectures, concepts, and frameworks. And also strong in metadata definition, data migration and integration with emphasis on both high end OLTP and business Intelligence solutions. 
Develop complex stored procedures and queries to support applications and reporting solutions. Optimize slow-running queries and improve query performance. Create optimized queries and data migration scripts. Leadership skills to mentor and train junior team members and stakeholders. Capable of creating a long-term and short-term data architecture vision, and a tactical roadmap to achieve it starting from the current state. Strong data management abilities (i.e., understanding data reconciliations). Capable of facilitating data discovery sessions involving business subject matter experts. Strong communication/partnership skills to gain the trust of stakeholders. Knowledge of professional software engineering practices and best practices for the full software development lifecycle, including coding standards, code reviews, source control management, build processes, testing, and operations.

Preferred Qualifications: Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets. Experience building data products incrementally and integrating and managing datasets from multiple sources. Query performance tuning skills using Unix profiling tools and SQL. Experience leading large-scale data warehousing and analytics projects, including using AWS technologies: Snowflake, Redshift, S3, EC2, Data Pipeline, and other big data technologies.

Integrate in the News: Best Tech Startups in Arizona (2018-2021); Integrate Acquires Akkroo; Integrate Acquires ListenLoop; Why Four MarTech CEOs Bet Big on Integrate
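The data reconciliation skill called out above can be sketched in a few lines of Python. This is an illustrative helper only; `reconcile` and the sample rows are hypothetical, not part of Integrate's actual stack.

```python
from decimal import Decimal

def reconcile(source_rows, warehouse_rows, key="id", amount="amount"):
    """Compare keys and amounts between a source system and the
    warehouse, returning any rows that disagree."""
    src = {r[key]: Decimal(str(r[amount])) for r in source_rows}
    dwh = {r[key]: Decimal(str(r[amount])) for r in warehouse_rows}
    missing = sorted(set(src) - set(dwh))      # in source, absent downstream
    unexpected = sorted(set(dwh) - set(src))   # downstream only
    mismatched = sorted(k for k in src.keys() & dwh.keys() if src[k] != dwh[k])
    return {
        "source_total": sum(src.values()),
        "warehouse_total": sum(dwh.values()),
        "missing": missing,
        "unexpected": unexpected,
        "mismatched": mismatched,
    }

source = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.50"}, {"id": 3, "amount": "7.25"}]
warehouse = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": "5.49"}]
report = reconcile(source, warehouse)
```

In practice the same totals-and-keys comparison would run as SQL against both systems; the pattern is what matters.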

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you’ll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. 
World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children. Your Impact You will work in multi-disciplinary global Life Science focused environments, harnessing data to provide real-world impact for organizations globally. Our Life Sciences practice focuses on helping clients bring life-saving medicines and medical treatments to patients. This practice is one of the fastest growing practices and is comprised of a tight-knit community of consultants, research, solution, data, and practice operations colleagues across the firm. It is also one of the most globally connected sector practices, offering ample global exposure. The LifeSciences.AI (LS.AI) team is the practice’s assetization arm, focused on creating reusable digital and analytics assets to support our client work. LS.AI builds and operates tools that support senior executives in pharma and device manufacturers, for whom evidence-based decision-making and competitive intelligence are paramount. The team works directly with clients across Research & Development (R&D), Operations, Real World Evidence (RWE), Clinical Trials and Commercial to build and scale digital and analytical approaches to addressing their most persistent priorities. What you’ll learn: How to apply data and machine learning engineering, as well as product development expertise, to address complex client challenges through part-time staffing on client engagements. How to support the manager of data and machine learning engineering in developing a roadmap for data and machine learning engineering assets across cell-level initiatives. How to productionalize AI prototypes and create deployment-ready solutions. How to translate engineering concepts and explain design/architecture trade-offs and decisions to senior stakeholders. 
How to write optimized code to enhance our AI Toolbox and codify methodologies for future deployment. How to collaborate effectively within a multi-disciplinary team. How to leverage new technologies and apply problem-solving skills in a multicultural and creative environment. You will work on the frameworks and libraries that our teams of Data Scientists and Data Engineers use to progress from data to impact. You will guide global companies through analytics solutions to transform their businesses and enhance performance across industries including life sciences, global energy and materials (GEM), and advanced industries (AI) practices. Real-World Impact – We provide unique learning and development opportunities internationally. Fusing Tech & Leadership – We work with the latest technologies and methodologies and offer first class learning programs at all levels. Multidisciplinary Teamwork - Our teams include data scientists, engineers, project managers, UX and visual designers who work collaboratively to enhance performance. Innovative Work Culture – Creativity, insight and passion come from being balanced. We cultivate a modern work environment through an emphasis on wellness, insightful talks and training sessions. Striving for Diversity – With colleagues from over 40 nationalities, we recognize the benefits of working with people from all walks of life. 
Your Qualifications and Skills Bachelor's degree in computer science or a related field; master's degree is a plus. 3+ years of relevant work experience. Experience with at least one of the following technologies: Python, Scala, Java, C++, plus the ability to write production code and object-oriented programs. Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL/NoSQL is expected. Commercial client-facing project experience is helpful, including working in close-knit teams. Additional expertise with Python testing frameworks, data validation and data quality frameworks, feature engineering, chunking, document ingestion, graph data structures (e.g., Neo4j), basic K8s (manifests, debugging, Docker, Argo Workflows), MLflow deployment and usage, generative AI frameworks (LangChain), and GPUs is a plus. Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets. Proven ability to clearly communicate complex solutions; strong attention to detail. Understanding of information security principles to ensure compliant handling and management of client data. Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks (and appropriate Bash/shell scripting). Good to have: experience with CI/CD using GitHub Actions, CircleCI, or any other CI/CD tech stack, and experience in end-to-end pipeline development, including application deployment.
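The chunking and document-ingestion expertise listed above reduces to patterns like the following. This is a minimal character-window sketch with overlap; the `chunk_text` helper is illustrative — real pipelines usually split on tokens or sentence boundaries.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character windows with overlap,
    so content cut at a boundary still appears intact in a neighbor."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    # Drop a trailing fragment wholly contained in the previous chunk.
    if len(chunks) > 1 and len(chunks[-1]) <= overlap:
        chunks.pop()
    return chunks

doc = "".join(chr(97 + i % 26) for i in range(500))
pieces = chunk_text(doc, chunk_size=200, overlap=50)
```

The overlap guarantees that each chunk repeats the tail of its predecessor, which is what retrieval-oriented ingestion relies on.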

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Scientist Location: Bengaluru, Gurugram Experience: 5+ Years About the Role: We are looking for an experienced Data Scientist to develop and deploy AI/ML and GenAI solutions across large datasets using modern frameworks and cloud infrastructure. You’ll work closely with cross-functional teams to translate business requirements into impactful data products. Key Responsibilities: Collaborate with software engineers, stakeholders, and domain experts to define data-driven solutions. Develop, implement, and deploy AI/ML, NLP/NLU, and deep learning models. Preprocess and analyze large datasets and derive actionable insights. Evaluate and optimize models for performance, efficiency, and scalability. Deploy solutions to cloud platforms such as Azure (preferred), AWS, or GCP. Monitor production models and iterate for continuous improvement. Document processes, results, and best practices. Must-Have Skills: Bachelor's/Master’s in Computer Science, Data Science, Engineering, or a related field. Strong programming in Python and SQL. Experience with Scikit-learn, TensorFlow, PyTorch, etc. Knowledge of ETL tools like Azure Data Factory, Databricks, Data Lake. Solid foundation in mathematics, probability, and statistics. Exposure to GenAI, vector databases, and LLMs. Experience working with cloud infrastructure (Azure preferred). Good-to-Have Skills: Experience with Flask, Django, or Streamlit. Knowledge of MLOps tools: MLflow, Kubeflow, CI/CD. Familiarity with Docker and Kubernetes for model/container deployment. #teceze
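The model evaluation and optimization work described above typically starts from metrics like these. A dependency-free sketch; the sample labels are made up.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Classification metrics from scratch: precision (how many flagged
    items were right), recall (how many real positives were found), F1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r, f = precision_recall_f1(y_true, y_pred)
```

Libraries like scikit-learn provide the same metrics; computing them by hand makes the trade-off between false positives and false negatives concrete.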

Posted 2 weeks ago

Apply

0 years

0 Lacs

Sadar, Uttar Pradesh, India

On-site

Summary: We are seeking a talented and motivated AI Engineer to join our team and focus on building cutting-edge Generative AI applications. The ideal candidate will possess a strong background in data science, machine learning, and deep learning, with specific experience in developing and fine-tuning Large Language Models (LLMs) and Small Language Models (SLMs). You should be comfortable managing the full lifecycle of AI projects, from initial design and data handling to deployment and production monitoring. A foundational understanding of software engineering principles is also required to collaborate effectively with engineering teams and ensure robust deployments. Responsibilities: Design, develop, and implement Generative AI solutions, including applications leveraging Retrieval-Augmented Generation (RAG) techniques. Fine-tune existing Large Language Models (LLMs) and potentially develop smaller, specialized language models (SLMs) for specific tasks. Manage the end-to-end lifecycle of AI model development, including data curation, feature extraction, model training, validation, deployment, and monitoring. Research and experiment with state-of-the-art AI/ML/DL techniques to enhance model performance and capabilities. Build and maintain scalable production pipelines for AI models. Collaborate with data engineering and IT teams to define deployment roadmaps and integrate AI solutions into existing systems. Develop AI-powered tools to solve business problems, such as summarization, chatbots, recommendation systems, or code assistance. Stay updated with the latest advancements in Generative AI, machine learning, and deep learning. Qualifications: Proven experience as a Data Scientist, Machine Learning Engineer, or AI Engineer with a focus on LLMs and Generative AI. Strong experience with Generative AI techniques and frameworks (e.g., RAG, Fine-tuning, Langchain, LlamaIndex, PEFT, LoRA). 
Solid foundation in machine learning (e.g., regression, classification, clustering, XGBoost, SVM) and deep learning (e.g., ANN, LSTM, RNN, CNN) concepts and applications. Proficiency in Python and relevant libraries (e.g., Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch). Experience with data science principles, including statistics, hypothesis testing, and A/B testing. Experience deploying and managing models in production environments (e.g., using platforms like AWS, Databricks, MLflow). Familiarity with data handling and processing tools (e.g., SQL, Spark/PySpark). Basic understanding of software engineering practices, including version control (Git) and containerization (Docker). Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related quantitative field. Preferred Skills: Experience building RAG-based chatbots or similar applications. Experience developing custom SLMs. Experience with MLOps principles and tools (e.g., MLflow, Airflow). Experience migrating ML workflows between cloud platforms. Familiarity with vector databases and indexing techniques. Experience with Python web frameworks (e.g., Django, Flask). Experience building and integrating APIs (e.g., RESTful APIs). Basic experience with front-end development or UI building for showcasing AI applications.
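The RAG technique referenced throughout this posting pairs a retrieval step with prompt construction. A toy sketch using word overlap as the relevance score; production systems use embeddings and a vector store, and every name and document here is hypothetical.

```python
def retrieve(query, corpus, k=2):
    """Score each document by word overlap with the query and return
    the top-k passages to use as grounding context."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend the retrieved passages so the LLM answers from them."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Invoices are approved by the finance team within two days.",
    "The cafeteria menu changes every Monday.",
    "Finance team contact hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("When does the finance team approve invoices?", corpus)
```

Swapping the overlap score for cosine similarity over embeddings is what turns this toy into the real pattern.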

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

NOTE: Timings: 4-6 hours overlap with GMT (Arabian Standard Time) Position 1: MLOps Engineer Experience: 5–8 years Contract Duration: 6 months (extendable) Budget: 1 Lakh / Month fixed Location: Remote Must-Have Skills (5–8 Years): Strong experience with MLOps tools and frameworks (e.g., MLflow, Kubeflow, SageMaker) Proficiency in CI/CD pipeline creation for ML workflows Hands-on experience with containerization tools such as Docker and orchestration using Kubernetes Good understanding of cloud platforms (AWS/GCP/Azure) for deploying and managing ML models Expertise in monitoring, logging, and model versioning Familiarity with data pipeline orchestration tools (Airflow, Prefect, etc.) Strong Python programming skills and experience with ML libraries (scikit-learn, TensorFlow, PyTorch, etc.) Knowledge of model performance tuning and retraining strategies Ability to collaborate closely with Data Scientists, DevOps, and Engineering teams
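The drift-monitoring and retraining-strategy skills listed above often center on the population stability index (PSI). A stdlib-only sketch; the bin count and the rule-of-thumb thresholds in the comment are conventional defaults, not prescriptions.

```python
import math

def population_stability_index(expected, actual, bins=5):
    """PSI between a training-time feature distribution and live traffic.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 retrain."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch values at or above the training max

    def frac(values):
        counts = [0] * bins
        for v in values:
            for i in range(bins):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        return [(c or 0.5) / len(values) for c in counts]  # smooth empty bins

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]        # uniform scores at training time
live = [0.8 + i / 1000 for i in range(100)]  # live scores bunched near the top
drift = population_stability_index(train, live)
```

A scheduler (Airflow, Prefect) would run this comparison on each batch of live scores and open an alert or trigger retraining when the index crosses the threshold.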

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

On-site

We are looking for a skilled and driven Machine Learning Engineer with 3+ years of experience to join our core AI development team. The role involves building and optimising machine learning models, developing data pipelines, and contributing to AI-powered features used in financial crime detection, document intelligence, and compliance analytics. This is a high-impact role for someone passionate about applying machine learning in real-world applications and eager to grow in a fast-paced environment. Responsibilities Build, test, and deploy machine learning models for various data-driven use cases Develop scalable data preprocessing and feature engineering pipelines Collaborate with product and engineering teams to integrate AI functionalities into production systems Analyze model performance, run experiments, and fine-tune algorithms for accuracy and speed Contribute to maintaining version-controlled ML workflows and reusable modules Assist in preparing training datasets from structured and unstructured sources Support model monitoring, feedback loop integration, and continuous improvements Technical Skills Required Strong foundation in Python and applied machine learning Experience with one or more ML frameworks such as scikit-learn, TensorFlow, or PyTorch Understanding of supervised, unsupervised, and basic NLP techniques Familiarity with building data pipelines and working with large datasets (Pandas, NumPy, SQL) Exposure to model deployment concepts, APIs, and inference Basic knowledge of Git, Docker, and cloud environments (AWS/GCP/Azure) Preferred (Nice to Have) Experience with document data (PDFs, scanned files) or OCR tools Exposure to transformer-based models or LLMs Familiarity with ML lifecycle tools (MLflow, DVC, Weights & Biases) Prior experience in fintech, risk analysis, or fraud detection systems Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field 3+ years of hands-on 
experience in ML/AI development roles Compensation & Growth Competitive compensation aligned with industry benchmarks ESOPs and performance bonuses for high performers Exposure to real-world AI use cases with domain experts in fintech, compliance, and GenAI Opportunity to grow into senior research or MLOps roles based on performance and interest
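The model monitoring and feedback-loop integration mentioned above can be as simple as a sliding window over live outcomes. A hypothetical sketch; the window size and accuracy threshold are arbitrary choices for illustration.

```python
from collections import deque

class ModelMonitor:
    """Track live accuracy over a sliding window and flag when the
    model has degraded enough to warrant retraining."""
    def __init__(self, window=100, threshold=0.90):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, label):
        self.results.append(prediction == label)

    @property
    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self):
        # Only alert once the window is full, to avoid noisy early reads.
        return len(self.results) == self.results.maxlen and self.accuracy < self.threshold

monitor = ModelMonitor(window=10, threshold=0.9)
for pred, label in [(1, 1)] * 8 + [(1, 0)] * 2:  # 80% correct over the window
    monitor.record(pred, label)
```

In production the labels arrive with delay (chargebacks, analyst review), so the feedback loop joins predictions to ground truth before feeding this monitor.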

Posted 2 weeks ago

Apply

5.0 years

2 - 6 Lacs

Hyderābād

On-site

Overview: As a key member of the team, you will be responsible for building and maintaining the infrastructure, tools, and workflows that enable the efficient, reliable, and secure deployment of LLMs in production environments. You will collaborate closely with data scientists, data engineers, and product teams to ensure seamless integration of AI capabilities into our core systems. Responsibilities: Design and implement scalable model deployment pipelines for LLMs, ensuring high availability and low latency. Build and maintain CI/CD workflows for model training, evaluation, and release. Monitor and optimize model performance, drift, and resource utilization in production. Manage cloud infrastructure (e.g., AWS, GCP, Azure) and container orchestration (e.g., Kubernetes, Docker) for AI workloads. Implement observability tools to track system health, token usage, and user feedback loops. Ensure security, compliance, and governance of AI systems, including access control and audit logging. Collaborate with cross-functional teams to align infrastructure with product goals and user needs. Stay current with the latest in MLOps and GenAI tooling and drive continuous improvement in deployment practices. Define and evolve the architecture for GenAI systems, ensuring alignment with business goals and scalability requirements. Qualifications: Bachelor’s or Master’s degree in Computer Science, Software Engineering, Data Science, or a related technical field. 5 to 7 years of experience in software engineering and DevOps, including 3+ years in machine learning infrastructure roles. Hands-on experience deploying and maintaining machine learning models in production, ideally including LLMs or other deep learning models. Proven experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes). Strong programming skills in Python, with experience in ML libraries (e.g., TensorFlow, PyTorch, Hugging Face).
Proficiency in CI/CD pipelines for ML workflows. Experience with MLOps tools: MLflow, Kubeflow, DVC, Airflow, Weights & Biases. Knowledge of monitoring and observability tools.
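Model versioning and release management, listed among the responsibilities above, follow a registry pattern like this sketch. `ModelRegistry` and the artifact URIs are illustrative, not a real MLflow API.

```python
class ModelRegistry:
    """Minimal registry: register versions, promote one to production,
    and roll back to the previously promoted version on incident."""
    def __init__(self):
        self.versions = {}   # version -> artifact reference
        self.history = []    # promotion order, latest last

    def register(self, version, artifact_uri):
        self.versions[version] = artifact_uri

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(f"unknown version {version}")
        self.history.append(version)

    @property
    def production(self):
        return self.history[-1] if self.history else None

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.history.pop()
        return self.production

registry = ModelRegistry()
registry.register("v1", "s3://models/llm/v1")
registry.register("v2", "s3://models/llm/v2")
registry.promote("v1")
registry.promote("v2")
```

Real registries (MLflow Model Registry, SageMaker Model Registry) add stage labels, approvals, and lineage on top of this same promote/rollback core.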

Posted 2 weeks ago

Apply

0 years

4 - 6 Lacs

Gurgaon

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Assistant Vice President, Databricks Squad Delivery Lead. The Databricks Delivery Lead will oversee the end-to-end delivery of Databricks-based solutions for clients, ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. This role will drive the adoption of Databricks as the preferred platform for data engineering and analytics, while managing a cross-functional team of data engineers and developers. Responsibilities Lead and manage Databricks-based project delivery, ensuring that all solutions are designed, developed, and implemented according to client requirements, best practices, and industry standards. Act as the subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization. Collaborate with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads. Serve as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery. Maintain effective communication with stakeholders, providing regular updates on project status, risks, and achievements. Oversee the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines.
Ensure that Databricks solutions are optimized for cost and performance, utilizing best practices for data storage, processing, and querying. Continuously evaluate the effectiveness of the Databricks platform and processes, suggesting improvements or new features that could enhance delivery efficiency and effectiveness. Drive innovation within the team, introducing new tools, technologies, and best practices to improve delivery quality. Qualifications we seek in you! Minimum Qualifications / Skills Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s or MBA preferred). Relevant years in IT services with experience specifically in Databricks and cloud-based data engineering. Preferred Qualifications / Skills Proven experience in leading end-to-end delivery of data engineering or analytics solutions on Databricks. Strong experience in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools. Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies. Expertise in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing. Preferred Certifications: Databricks Certified Associate or Professional. Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent). Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master). Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com.
Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Assistant Vice President Primary Location India-Gurugram Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 14, 2025, 11:20:58 PM Unposting Date Jan 11, 2026, 3:20:58 AM Master Skills List Digital Job Category Full Time

Posted 2 weeks ago

Apply

4.0 years

4 - 8 Lacs

Gurgaon

On-site

About Us We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Job Title: Senior/Lead Data Scientist Experience Required: 4+ Years About the Role: We are seeking a skilled and innovative Machine Learning Engineer with 4+ years of experience to join our AI/ML team. The ideal candidate will have strong expertise in Computer Vision, Generative AI (GenAI), and Deep Learning, with a proven track record of deploying models in production environments using Python, MLOps best practices, and cloud platforms like Azure ML. Key Responsibilities: Design, develop, and deploy AI/ML models for Computer Vision and GenAI use cases. Build, fine-tune, and evaluate deep learning architectures (CNNs, Transformers, diffusion models, etc.).
Collaborate with product and engineering teams to integrate models into scalable pipelines and applications. Manage the complete ML lifecycle using MLOps practices (versioning, CI/CD, monitoring, retraining). Develop reusable Python modules and maintain high-quality, production-grade ML code. Work with Azure Machine Learning services for training, inference, and model management. Analyze large-scale datasets, extract insights, and prepare them for model training and validation. Document technical designs, experiments, and decision-making processes. Required Skills & Experience: 4–5 years of hands-on experience in Machine Learning and Deep Learning. Strong experience in Computer Vision tasks such as object detection, image segmentation, OCR, etc. Practical knowledge and implementation experience in Generative AI (LLMs, diffusion models, embeddings). Solid programming skills in Python, with experience using frameworks like PyTorch, TensorFlow, OpenCV, Transformers (Hugging Face), etc. Good understanding of MLOps concepts, model deployment, and lifecycle management. Experience with cloud platforms, preferably Azure ML, for scalable model training and deployment. Familiarity with data labeling tools, synthetic data generation, and model interpretability. Strong problem-solving, debugging, and communication skills. Good to Have: Experience with NLP, multimodal learning, or 3D computer vision. Familiarity with containerization tools (Docker, Kubernetes). Experience in building end-to-end ML pipelines using MLflow, DVC, or similar tools. Exposure to CI/CD pipelines for ML projects and working in agile development environments. Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a related field.
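Object-detection evaluation, mentioned among the Computer Vision tasks above, leans on intersection-over-union (IoU). A self-contained sketch; the boxes are sample values.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). Standard metric for matching detections to
    ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A detection commonly counts as a true positive when IoU >= 0.5.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

mAP, the headline detection metric, is built by sweeping confidence thresholds over exactly this matching rule.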

Posted 2 weeks ago

Apply

5.0 years

4 - 9 Lacs

Noida

On-site

Posted On: 14 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description We are looking for a skilled AI/ML Ops Engineer to join our team to bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale.
This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable. Key Responsibilities: Design and implement ML pipelines for model training, validation, testing, and deployment. Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar. Deploy machine learning models to production environments (cloud). Monitor model performance, drift, and data quality in production. Collaborate with data scientists to improve model robustness and deployment readiness. Ensure CI/CD practices for ML models using tools like Jenkins, GitHub Actions, or GitLab CI. Optimize compute resources and manage model versioning, reproducibility, and rollback strategies. Work with cloud platforms AWS and containerization tools like Kubernetes (AKS). Ensure compliance with data privacy and security standards (e.g., GDPR, HIPAA). Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 5+ years of experience in DevOps, Data Engineering, or ML Engineering roles. Strong programming skills in Python; familiarity with R, Scala, or Java is a plus. Experience with automating ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or XGBoost. Experience with ML model monitoring and alerting frameworks (e.g., Evidently, Prometheus, Grafana). Familiarity with data orchestration and ETL/ELT tools (Airflow, dbt, Prefect). Preferred Qualifications: Experience with large-scale data systems (Spark, Hadoop). Knowledge of feature stores (Feast, Tecton). Experience with streaming data (Kafka, Flink). Experience working in regulated environments (finance, healthcare, etc.). Certifications in cloud platforms or ML tools. Soft Skills: Strong problem-solving and debugging skills. Excellent communication and collaboration with cross-functional teams. 
Adaptable and eager to learn new technologies. Mandatory Competencies Data Science and Machine Learning - Data Science and Machine Learning - AI/ML Database - Database Programming - SQL Cloud - AWS - Tensorflow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift Development Tools and Management - Development Tools and Management - CI/CD DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Jenkins Data Science and Machine Learning - Data Science and Machine Learning - Gen AI (LLM, Agentic AI, Gen AI-enabled tools like GitHub Copilot) DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket Programming Language - Other Programming Language - Scala Big Data - Big Data - Hadoop Big Data - Big Data - SPARK Data Science and Machine Learning - Data Science and Machine Learning - Python Beh - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
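Workflow automation with tools like MLflow, Kubeflow, or Airflow, listed in the responsibilities above, rests on dependency-ordered execution. A toy topological-sort runner to illustrate the idea; the task names are invented.

```python
def run_pipeline(tasks, deps):
    """Execute tasks in dependency order (depth-first topological sort),
    the core idea behind DAG orchestrators like Airflow."""
    order, done = [], set()

    def visit(name, path=()):
        if name in done:
            return
        if name in path:
            raise ValueError(f"cycle through {name}")
        for upstream in deps.get(name, []):
            visit(upstream, path + (name,))
        done.add(name)
        order.append(name)
        tasks[name]()  # run only after all upstream tasks have run

    for name in tasks:
        visit(name)
    return order

log = []
tasks = {
    "extract": lambda: log.append("extract"),
    "train": lambda: log.append("train"),
    "validate": lambda: log.append("validate"),
    "deploy": lambda: log.append("deploy"),
}
deps = {"train": ["extract"], "validate": ["train"], "deploy": ["validate"]}
order = run_pipeline(tasks, deps)
```

Real orchestrators add what this omits: scheduling, retries, backfills, and per-task isolation.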

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Senior DevOps (Azure, Terraform, Kubernetes) Engineer
Location: Remote (initial 2–3 months in the Abu Dhabi office, then remote from India)
Type: Full-time | Long-term | Direct Client Hire
Client: Abu Dhabi Government

About The Role:
Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, and Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in Azure cloud and DevOps practices.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps, including deployments to AKS
- Develop and maintain Infrastructure-as-Code using Terraform
- Manage container orchestration environments using Kubernetes
- Ensure cloud infrastructure is optimized, secure, and monitored effectively
- Collaborate with data science teams to support ML model deployment and operationalization
- Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow)
- Build and maintain automated ML pipelines to streamline model lifecycle management

Required Skills:
- 7+ years of experience in DevOps and/or MLOps roles
- Proficiency in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
- Strong expertise in Terraform and cloud-native infrastructure (AWS preferred)
- Hands-on experience with Kubernetes, Docker, and microservices
- Solid understanding of cloud networking, security, and monitoring
- Scripting proficiency in Bash and Python

Preferred Skills:
- Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines
- Knowledge of model performance monitoring and ML system reliability
- Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP

Skills: argo, terraform, kubernetes, azure
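The Terraform-plus-AKS combination named above can be illustrated with a minimal Infrastructure-as-Code sketch. All resource names, the region, and the node sizing below are placeholder assumptions for illustration, not values from the posting:

```terraform
# Minimal AKS cluster via the azurerm provider. Names, region, and
# node sizing are illustrative placeholders.
resource "azurerm_resource_group" "mlops" {
  name     = "rg-mlops-demo"
  location = "UAE North"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-mlops-demo"
  location            = azurerm_resource_group.mlops.location
  resource_group_name = azurerm_resource_group.mlops.name
  dns_prefix          = "aksmlops"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

In a typical workflow this would be validated with `terraform plan` in a CI pipeline (Jenkins, GitHub Actions, or Azure DevOps, as the posting lists) before `terraform apply` runs against the target subscription.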

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

TCS Hiring
Role: Data Scientist
Required Technical Skill Set: Data Science
Experience: 3-8 years
Locations: Kolkata, Hyderabad, Bangalore, Chennai, Pune

Job Description:

Must-Have:
- Proficiency in Python or R for data analysis and modeling.
- Strong understanding of machine learning algorithms (regression, classification, clustering, etc.).
- Experience with SQL and working with relational databases.
- Hands-on experience with data wrangling, feature engineering, and model evaluation techniques.
- Experience with data visualization tools like Tableau, Power BI, or matplotlib/seaborn.
- Strong understanding of statistics and probability.
- Ability to translate business problems into analytical solutions.

Good-to-Have:
- Experience with deep learning frameworks (TensorFlow, Keras, PyTorch).
- Knowledge of big data platforms (Spark, Hadoop, Databricks).
- Experience deploying models using MLflow, Docker, or cloud platforms (AWS, Azure, GCP).
- Familiarity with NLP, computer vision, or time series forecasting.
- Exposure to MLOps practices for model lifecycle management.
- Understanding of data privacy and governance concepts.
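The "model evaluation techniques" requirement above can be made concrete with a small example: computing precision, recall, and F1 from predicted labels in plain Python. The label data is invented for illustration; in practice scikit-learn's `classification_report` covers this:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary classification metrics from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

labels      = [1, 0, 1, 1, 0, 1]
predictions = [1, 0, 0, 1, 1, 1]
print(precision_recall_f1(labels, predictions))  # -> (0.75, 0.75, 0.75)
```

Reporting precision and recall separately matters when classes are imbalanced, which is the usual case in the fraud and risk problems these roles mention.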

Posted 2 weeks ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

Remote

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3–6 Years
Compensation: ₹20–₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company:
A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they're seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview:
As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities:
- Design and develop robust AI product features aligned with user and business needs
- Maintain and enhance existing ML/AI systems
- Build and manage ML pipelines for training, deployment, monitoring, and experimentation
- Deploy scalable inference APIs and conduct A/B testing
- Optimize GPU architectures and fine-tune transformer/LLM models
- Build and deploy LLM applications tailored to real-world use cases
- Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools:
- Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers; PyTorch, TensorFlow, Scikit-learn
- LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse; MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
- Cloud & Infrastructure: AWS, Azure; Kubernetes, Docker
- Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
- Languages: Python, SQL, JavaScript

What You'll Do:
- Collaborate with product, research, and engineering teams to build scalable AI solutions
- Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
- Monitor and optimize model performance and deployment pipelines
- Build efficient, scalable data and feature pipelines
- Stay updated on industry trends and contribute to internal innovation
- Present key insights and ML solutions to technical and business stakeholders

Requirements (Must-Have):
- 3–6 years of experience in Machine Learning and software/data engineering
- Master's degree (or equivalent) in ML, AI, or related technical fields
- Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
- Familiarity with MLOps, model deployment, and production pipelines
- Experience working with LLMs and modern NLP techniques
- Ability to work collaboratively in a fast-paced, product-driven environment
- Strong problem-solving and communication skills

Bonus Certifications:
- AWS Machine Learning Specialty
- AWS Solutions Architect – Professional
- Azure Solutions Architect Expert

Why Apply:
- Work directly with a high-caliber founding team
- Help shape the future of AI in the insurance space
- Gain ownership and visibility in a product-focused engineering role
- Opportunity to innovate with state-of-the-art AI/LLM tech
- Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: postgresql, docker, llms and modern nlp techniques, machine learning, computer vision, tensorflow, scikit-learn, pytorch, llm technologies, python, nlp, aws, ml, ai, sql, ml ops, azure, javascript, software/data engineering, kubernetes, mongodb
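The RAG systems and vector databases mentioned above (Pinecone, ChromaDB) rest on one core operation: ranking stored embeddings by similarity to a query embedding. Here is a toy plain-Python sketch of that retrieval step; the 3-dimensional vectors and document ids are invented stand-ins for real embedding-model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the ids of the k documents most similar to query_vec."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, doc["vec"]),
                    reverse=True)
    return [doc["id"] for doc in ranked[:k]]

# Toy 3-d "embeddings" standing in for a real vector store.
index = [
    {"id": "underwriting-guide", "vec": [0.9, 0.1, 0.0]},
    {"id": "claims-faq",         "vec": [0.1, 0.9, 0.0]},
    {"id": "pricing-model",      "vec": [0.8, 0.2, 0.1]},
]
print(retrieve([1.0, 0.0, 0.0], index))
```

Real systems replace the linear scan with approximate nearest-neighbour search and feed the retrieved passages into the LLM prompt, which is the part frameworks like LangChain and LlamaIndex orchestrate.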

Posted 2 weeks ago

Apply

0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

We are looking for a versatile and highly skilled Data Analyst / AI Engineer to join our innovative team. This unique role combines the strengths of a data scientist with the capabilities of an AI engineer, allowing you to dive deep into data, extract meaningful insights, and then build and deploy cutting-edge Machine Learning, Deep Learning, and Generative AI models. You will play a crucial role in transforming raw data into strategic assets and intelligent applications.

Key Responsibilities:

Data Analysis & Insight Generation:
- Perform in-depth Exploratory Data Analysis (EDA) to identify trends, patterns, and anomalies in complex datasets.
- Clean, transform, and prepare data from various sources for analysis and model development.
- Apply statistical methods and hypothesis testing to validate findings and support data-driven decision-making.
- Create compelling and interactive BI dashboards (e.g., Power BI, Tableau) to visualize data insights and communicate findings to stakeholders.

Machine Learning & Deep Learning Model Development:
- Design, build, train, and evaluate Machine Learning models (e.g., regression, classification, clustering) to solve specific business problems.
- Develop and optimize Deep Learning models, including CNNs for computer vision tasks and Transformers for Natural Language Processing (NLP).
- Implement feature engineering techniques to enhance model performance.

Generative AI Implementation:
- Explore and experiment with Large Language Models (LLMs) and other Generative AI techniques.
- Implement and fine-tune LLMs for specific use cases (e.g., text generation, summarization, Q&A).
- Develop and integrate Retrieval Augmented Generation (RAG) systems using vector databases and embedding models.
- Apply Prompt Engineering best practices to optimize LLM interactions.
- Contribute to the development of Agentic AI systems that leverage multiple tools and models.

Required Skills & Experience:

Data Science & Analytics:
- Strong proficiency in Python and its data science libraries (Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn).
- Proven experience with Exploratory Data Analysis (EDA) and statistical analysis.
- Hands-on experience developing BI dashboards using tools like Power BI or Tableau.
- Understanding of data warehousing and data lake concepts.

Machine Learning:
- Solid understanding of various ML algorithms (e.g., Regression, Classification, Clustering, tree-based models).
- Experience with model evaluation, validation, and hyperparameter tuning.

Deep Learning:
- Proficiency with Deep Learning frameworks such as TensorFlow, Keras, or PyTorch.
- Experience with CNNs (Convolutional Neural Networks) and computer vision concepts (e.g., OpenCV, object detection).
- Familiarity with Transformer architectures for NLP tasks.

Generative AI:
- Practical experience with Large Language Models (LLMs).
- Understanding and application of RAG (Retrieval Augmented Generation) systems.
- Experience with fine-tuning LLMs and Prompt Engineering.
- Familiarity with frameworks like LangChain or LlamaIndex.

Problem-Solving:
- Excellent analytical and problem-solving skills with a strong ability to approach complex data challenges.

Good to Have:
- Experience with cloud-based AI/ML services (e.g., Azure ML, AWS SageMaker, Google Cloud AI Platform).
- Familiarity with MLOps principles and tools (e.g., MLflow, DVC, CI/CD for models).
- Experience with big data technologies (e.g., Apache Spark).

Educational Qualification:
Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).

Please share your resume to careers@appfabs.in.
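The EDA responsibilities above start with simple per-column profiling: counts, missing values, and summary statistics. Below is a minimal stdlib-only sketch; the column name and records are hypothetical, and in practice Pandas' `describe()` does this at scale:

```python
import statistics

def profile(rows, column):
    """Basic EDA summary for one numeric column in a list of dicts.

    Treats None (or an absent key) as missing.
    """
    values = [r[column] for r in rows if r.get(column) is not None]
    return {
        "count": len(values),
        "missing": len(rows) - len(values),
        "mean": statistics.mean(values) if values else None,
        "stdev": statistics.stdev(values) if len(values) > 1 else None,
    }

# Hypothetical records standing in for a real dataset.
rows = [
    {"amount": 100.0}, {"amount": 200.0}, {"amount": None}, {"amount": 300.0},
]
print(profile(rows, "amount"))
```

Even this tiny summary surfaces the two things EDA checks first, missingness and spread, before any modeling or dashboarding begins.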

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Greetings from Stanco Solutions, a leading and fast-growing IT Services and IT Consulting company. We are currently hiring a Project Manager for a full-time/permanent role; immediate to 15-day joiners only.

Total Experience: 12-15 years
Notice Period: Immediate to 15 days
Location: Chennai
Work Mode: Work From Office
Interview Mode: Virtual

Interested candidates, kindly share your updated CV to ruban.p@stancosolutions.com; for further queries, call +91-8248551519.

1. Project Planning & Execution
- Define project scope, schedule, milestones, and deliverables.
- Prepare project charters, plans, and WBS (Work Breakdown Structure).
- Create and manage Agile sprint plans and ensure iteration goals are met.

2. Stakeholder & Team Management
- Act as a bridge between business, development, QA, and infrastructure teams.
- Manage internal and external stakeholder expectations.
- Coordinate with cross-functional teams for on-time and on-budget delivery.

3. Technical Oversight & Risk Management
- Provide technical input and oversight on architecture and build activities.
- Track and mitigate technical, resource, and delivery risks proactively.
- Drive resolution of blockers, dependencies, and escalations.

4. Progress Tracking & Communication
- Use tools like JIRA or Azure DevOps for project tracking and burndown charts.
- Generate daily/weekly status reports, dashboards, and executive summaries.
- Present status updates and delivery health reports to senior management.

5. Quality, Compliance & Governance
- Ensure QA, UAT, and release processes are followed.
- Drive process improvement initiatives across the team.
- Maintain audit trails, change logs, and sign-off documentation.

Primary Skills:
- Project Management (Agile/Scrum/Waterfall/Hybrid models)
- Software Delivery Lifecycle (SDLC) ownership
- Working knowledge of full-stack development: React.js (frontend), Java (backend APIs), MySQL (database queries, data models)
- Agile planning tools (JIRA, Azure DevOps, Trello, ClickUp)
- CI/CD implementation understanding (Jenkins, GitHub Actions, Azure Pipelines)
- Resource planning, sprint management, and backlog grooming
- Risk & issue management, change requests, RCA documentation
- Project tracking, budgeting & estimation
- Stakeholder communication & cross-functional team coordination
- Status reporting to senior leadership and C-level executives

Secondary Skills:
- Exposure to the AI/ML project lifecycle & tools (MLflow, Vertex AI, Azure ML – conceptual level)
- Cloud platform understanding (Azure, AWS, or GCP)
- DevOps awareness (version control, pipelines, release cycles)
- Quality assurance coordination & release sign-off processes
- Team coaching, conflict management & people leadership
- Documentation & process improvement

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Urgently hiring for an AI/ML role.

We are seeking an experienced AI/ML Engineer with 3-5 years of hands-on experience building and deploying machine learning solutions in the fintech and spend management domain. You will work on real-time forecasting, intelligent document processing (invoices/receipts), fraud detection, and other AI-powered features that enhance our finance intelligence platform. This role demands expertise in both time series forecasting and computer vision, as well as a solid understanding of how ML applies to enterprise finance operations.

Key Responsibilities:
- Design, train, and deploy ML models for spend forecasting, budget prediction, expense categorization, and risk scoring.
- Build and optimize OCR-based invoice and receipt parsing systems using computer vision and NLP techniques.
- Implement time-series models (Prophet, ARIMA, LSTM, XGBoost, etc.) for forecasting trends in financial transactions, expenses, and vendor payments.
- Work on intelligent document classification, key-value extraction, and line-item detection from unstructured financial documents (PDFs, scanned images).
- Collaborate with product and finance teams to define high-impact AI use cases and deliver business-ready solutions.
- Integrate ML pipelines into production using scalable tools and platforms (Docker, CI/CD, cloud services).
- Monitor model performance post-deployment, conduct drift analysis, and implement retraining strategies.

Required Skills & Qualifications:

Core Machine Learning:
- Strong knowledge of supervised and unsupervised ML techniques applied to structured and semi-structured financial data.
- Experience in time-series analysis and forecasting algorithms such as ARIMA/SARIMA, Facebook Prophet, XGBoost for regression, and LSTM/GRU models for sequential data.
- Proficiency in Python and key libraries: scikit-learn, Pandas, NumPy, StatsModels, PyTorch, TensorFlow.

Computer Vision & Document AI:
- Hands-on experience with OCR tools such as Tesseract, Google Vision API, or AWS Textract.
- Knowledge of document layout analysis and field-level extraction using OpenCV, LayoutLM, or Google Document AI.
- Familiarity with annotation tools (Label Studio, CVAT) and post-processing OCR outputs for structured data extraction.

Deployment & Engineering:
- Experience exposing ML models via Flask or FastAPI.
- Model packaging and deployment with Docker, version control with Git, and ML lifecycle tools like MLflow or DVC.
- Working knowledge of cloud platforms (AWS/GCP/Azure) and integrating models with backend microservices.

Data & Domain:
- Understanding of financial documents: invoices, receipts, expense reports, and GL data.
- Ability to work with tabular, image-based, and PDF-based financial datasets.
- SQL proficiency; familiarity with financial databases or ERP systems is a plus.
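The key-value extraction responsibility above can be sketched at its simplest: pulling fields out of raw OCR text with regular expressions. The field patterns and sample invoice below are illustrative assumptions; production systems typically rely on layout-aware models such as LayoutLM rather than regex alone:

```python
import re

# Illustrative patterns for a few common invoice fields. Real OCR
# output is noisier and usually needs layout-aware post-processing.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "date":           re.compile(r"Date\s*[:\-]?\s*(\d{2}[/-]\d{2}[/-]\d{4})", re.I),
    "total":          re.compile(
        r"Total\s*(?:Amount)?\s*[:\-]?\s*(?:INR|Rs\.?|\$)?\s*([\d,]+\.\d{2})", re.I),
}

def extract_fields(text):
    """Pull key-value pairs out of raw OCR text for an invoice."""
    out = {}
    for field, pattern in FIELD_PATTERNS.items():
        m = pattern.search(text)
        if m:
            out[field] = m.group(1)
    return out

ocr_text = """ACME Supplies Pvt Ltd
Invoice No: INV-2024-0042
Date: 15/03/2024
Total Amount: INR 12,500.00"""
print(extract_fields(ocr_text))
```

Downstream, extracted fields like these feed the expense-categorization and spend-forecasting models the posting describes.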

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview:
ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities:
- Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
- Develop and implement highly scalable ETL pipelines for processing large datasets.
- Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
- Define and enforce data governance, security policies, and compliance standards.
- Optimize data lakehouse architectures for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
- Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
- Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
- Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 8+ years of hands-on experience in data engineering, including at least 5+ years as a Databricks architect working with Apache Spark.
- Proficiency in SQL, Python, or Scala for data processing and analytics.
- Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
- Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
- Hands-on experience with CI/CD tools and DevOps best practices.
- Familiarity with data security, compliance, and governance best practices.
- Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:
- Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
- Hands-on experience with MLflow, Feature Store, or Databricks SQL.
- Exposure to Kubernetes, Docker, and Terraform.
- Experience with streaming data architectures (Kafka, Kinesis, etc.).
- Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
- Prior experience working with retail, e-commerce, or ad-tech data platforms.

We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.
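The "data integrity, quality, and reliability" responsibility above is often enforced as rule-based record validation before data lands in the lakehouse. Here is a minimal plain-Python sketch of that idea; the column rules and records are invented for illustration, and on Databricks this is more commonly handled with Delta Lake constraints or an expectation framework:

```python
def validate(records, rules):
    """Split records into valid and invalid against simple column rules.

    rules maps a column name to a predicate that must hold for the record.
    Returns (valid_records, [(record, failed_columns), ...]).
    """
    valid, invalid = [], []
    for rec in records:
        failures = [col for col, check in rules.items() if not check(rec.get(col))]
        if failures:
            invalid.append((rec, failures))
        else:
            valid.append(rec)
    return valid, invalid

# Hypothetical quality rules for an orders feed.
rules = {
    "order_id": lambda v: isinstance(v, str) and v != "",
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,
}
records = [
    {"order_id": "A1", "amount": 10.5},
    {"order_id": "",   "amount": -3},
]
good, bad = validate(records, rules)
print(len(good), len(bad))
```

In a pipeline, the invalid rows would typically be routed to a quarantine table with their failure reasons rather than silently dropped.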

Posted 2 weeks ago

Apply