
713 MLflow Jobs - Page 27

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5 - 10 years

5 - 15 Lacs

Bengaluru

Work from Office

Source: Naukri

Looking for an MLOps Engineer to build and scale ML pipelines on AWS using SageMaker, EKS, Docker, and Terraform. Drive CI/CD, model tracking, automation, and GPU-based training.

Required candidate profile: SageMaker, EKS, EC2, IAM, CloudWatch, ECR, Docker, Kubernetes, Helm, Jenkins, Terraform, Kubeflow, MLflow, Wandb, Volcano Scheduler

Posted 1 month ago

Apply

0 - 10 years

0 Lacs

Chennai, Tamil Nadu

Work from Office

Source: Indeed

Company Description
Organizations everywhere struggle under the crushing costs and complexities of "solutions" that promise to simplify their lives, to create a better experience for their customers and employees, and to help them grow. Software is a choice that can make or break a business: it can create better or worse experiences, propel or throttle growth. Business software has become a blocker instead of a way to get work done. There's another option: Freshworks, with a fresh vision for how the world works.

At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks' customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us.

Job Description
We're looking for a Jr AI Security Architect to join our growing Security Architecture team. This role will support the design, implementation, and protection of AI/ML systems, models, and datasets. The ideal candidate is passionate about the intersection of artificial intelligence and cybersecurity, and eager to contribute to building secure-by-design AI systems that protect users, data, and business integrity.

Key Responsibilities
Secure AI Model Development: Partner with AI/ML teams to embed security into the model development lifecycle, including data collection, model training, evaluation, and deployment. Contribute to threat modeling exercises for AI/ML pipelines to identify risks such as model poisoning, data leakage, or adversarial input attacks.
Support the evaluation and implementation of model explainability, fairness, and accountability techniques to address security and compliance concerns. Develop and train internal models for security purposes.

Model Training & Dataset Security: Help design controls to ensure the integrity and confidentiality of training datasets, including the use of differential privacy, data validation pipelines, and access controls. Assist in implementing secure storage and version control practices for datasets and model artifacts. Evaluate training environments for exposure to risks such as unauthorized data access, insecure third-party libraries, or compromised containers.

AI Infrastructure Hardening: Work with infrastructure and MLOps teams to secure AI platforms (e.g., MLflow, Kubeflow, SageMaker, Vertex AI) including compute resources, APIs, CI/CD pipelines, and model registries. Contribute to security reviews of AI-related deployments in cloud and on-prem environments. Assist in automating security checks in AI pipelines, such as scanning for secrets, validating container images, and enforcing secure permissions.

Secure AI Integration in Products: Participate in the review and assessment of AI/ML models embedded into customer-facing products to ensure they comply with internal security and responsible AI guidelines. Help develop misuse detection and monitoring strategies to identify model abuse (e.g., prompt injection, data extraction, hallucination exploitation). Support product security teams in designing guardrails and sandboxing techniques for generative AI features (e.g., chatbots, image generators, copilots).

Knowledge Sharing & Enablement: Assist in creating internal training and security guidance for data scientists, engineers, and developers on secure AI practices. Help maintain documentation, runbooks, and security checklists specific to AI/ML workloads. Stay current on emerging AI security threats, industry trends, and tools; contribute to internal knowledge sharing.
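One of the automated checks mentioned above, scanning for secrets in AI pipelines, can be illustrated with a naive regex-based checker. This is an illustrative sketch only: the pattern names and rules are invented for the example, and real pipelines use dedicated scanners (e.g., gitleaks or truffleHog) with far richer rule sets.

```python
import re

# Illustrative patterns only -- real scanners ship hundreds of tuned rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of secret patterns found in a text blob."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

leaked = 'config = {"api_key": "abcd1234abcd1234abcd"}'
clean = "print('hello world')"
print(scan_for_secrets(leaked))  # -> ['generic_api_key']
print(scan_for_secrets(clean))   # -> []
```

A check like this can run as a CI gate before a pipeline definition or notebook is merged.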
Qualifications
3-4 years of experience with LLMs and 7-10 years of experience in cybersecurity, machine learning, or related fields. Familiarity with ML frameworks (e.g., PyTorch, TensorFlow) and MLOps tools (e.g., MLflow, Airflow, Kubernetes). Familiarity with AI models and supply-chain risks. Understanding of common AI/ML security threats and mitigations (e.g., model inversion, adversarial examples, data poisoning). Experience working with cloud environments (AWS, GCP, Azure) and securing workloads. Some knowledge of responsible AI principles, privacy-preserving ML, or AI compliance frameworks is a plus.

Soft Skills
Strong communication skills to collaborate across engineering, data science, and product teams. A continuous learning mindset and willingness to grow in both AI and security domains. A problem-solving approach with a focus on practical, scalable solutions.

Additional Information
At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion, and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities, and the business.

Posted 1 month ago

Apply

0.0 - 4.0 years

0 Lacs

Gurugram, Haryana

On-site

Source: Indeed

Data Engineer
Experience: 5+ Years
Location: Gurugram (On-site)
Notice Period: Immediate to 15 Days (Preferred)

Key Responsibilities: Ingest data from source systems through ADF. Load batch and real-time data into the Bronze or Silver layer respectively, based on the level of transformation the use case requires. Data residing in ADLS Gen 2 is processed by Databricks compute and Delta Live Tables. Cleaned data is made available in the Silver layer and is further transformed according to business relevance to form data marts in the final Gold layer. Cleaned data is optionally ingested into MLflow for machine learning / predictive use cases; data from MLflow can be provisioned through APIs for scoring and other use cases involving model serving. Gold-layer data can be consumed by Databricks SQL to provide SQL endpoints to BI apps; Databricks SQL in turn provides data to Power BI. Delta Sharing (optional) can be used to provision data securely to third parties.

Mandatory Skills: Azure Databricks, data warehousing, data modeling, Delta tables, Azure SQL, Data Lake
Job Type: Full-time
Pay: ₹700,000.00 - ₹1,800,000.00 per month
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): What is your notice period? What is your current CTC?
Experience: Azure Data Engineer: 5 years (Required). Azure Data Factory: 3 years (Required). Databricks: 4 years (Required)
Work Location: In person
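The Bronze/Silver/Gold flow described above is the standard medallion pattern. A minimal pure-Python stand-in for the layering logic (the field names and the revenue data mart are invented for illustration; the real pipeline would use ADF, PySpark, and Delta Live Tables as the ad describes):

```python
# Medallion-architecture sketch: each layer applies progressively more
# transformation. Pure-Python stand-in for a PySpark/Delta pipeline.

raw_events = [  # Bronze: raw ingested records, kept as-is
    {"order_id": "1", "amount": "250.0", "region": "north"},
    {"order_id": "2", "amount": "bad",   "region": "south"},
    {"order_id": "3", "amount": "100.0", "region": "north"},
]

def to_silver(records):
    """Silver layer: validate and clean -- drop rows failing type checks."""
    cleaned = []
    for r in records:
        try:
            cleaned.append({**r, "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine bad records
    return cleaned

def to_gold(records):
    """Gold layer: business-level aggregate (revenue-per-region data mart)."""
    mart = {}
    for r in records:
        mart[r["region"]] = mart.get(r["region"], 0.0) + r["amount"]
    return mart

silver = to_silver(raw_events)
gold = to_gold(silver)
print(gold)  # -> {'north': 350.0}
```

The Gold output is what a BI endpoint (Databricks SQL, Power BI) would then consume.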

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Source: Indeed

About this opportunity: The A&AI (SL IT & ADM) team is currently seeking a versatile and motivated DevOps Engineer (with expertise in Kubernetes and cloud infrastructure) to join the AI/ML team. This role will be pivotal in managing multiple platforms and systems, focusing on Kubernetes, ELK/OpenSearch, and various DevOps tools to ensure seamless data flow for our machine learning and data science initiatives. The ideal candidate should have a strong foundation in Python programming, experience with Elasticsearch, Logstash, and Kibana (ELK), proficiency in MLOps, and expertise in machine learning model development and deployment. Additionally, familiarity with basic Spark concepts and visualization tools like Grafana and Kibana is desirable.

What you will do: Design and implement robust AI/ML infrastructure using cloud services and Kubernetes to support machine learning operations (MLOps) and data processing workflows. Deploy, manage, and optimize Kubernetes clusters specifically tailored for AI/ML workloads, ensuring optimal resource allocation and scalability across different network configurations. Develop and maintain CI/CD pipelines tailored for continuous training and deployment of machine learning models, integrating tools like Kubeflow, MLflow, Argo Workflows, or TensorFlow Extended (TFX). Collaborate with data scientists to oversee the deployment of machine learning models and set up monitoring systems to track their performance and health in production. Design and implement data pipelines for large-scale data ingestion, processing, and analytics essential for machine learning models, utilizing distributed storage and processing technologies such as Hadoop, Spark, and Kafka.

The skills you bring: Extensive experience with Kubernetes and cloud services (AWS, Azure, GCP, private cloud) with a focus on deploying and managing AI/ML environments. Strong proficiency in scripting and automation using languages like Python, Bash, or Perl.
Experience with AI/ML tools and frameworks (TensorFlow, PyTorch, scikit-learn) and MLOps tools (Kubeflow, MLflow, TFX). In-depth knowledge of data pipeline and workflow management tools, distributed data processing (Hadoop, Spark), and messaging systems (Kafka, RabbitMQ). Expertise in implementing CI/CD pipelines, infrastructure as code (IaC), and configuration management tools. Familiarity with security standards and data protection regulations relevant to AI/ML projects. Proven ability to design and maintain reliable and scalable infrastructure tailored for AI/ML workloads. Excellent analytical, problem-solving, and communication skills.

Why join Ericsson? At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply? You can find all you need to know about what our typical hiring process looks like on our careers site. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766746
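The continuous-training CI/CD pipelines mentioned above typically end in a promotion gate that decides whether a freshly trained model replaces the one in production. A minimal sketch of such a gate (the metric name, margin, and plain-dict metric format are illustrative; a real pipeline would pull metrics from a tracking server such as MLflow):

```python
# CI/CD promotion-gate sketch for continuous model training: promote a
# candidate model only if it beats production by a minimum margin, so noisy
# retrains don't churn deployments. Thresholds are illustrative.

def should_promote(candidate_metrics: dict, production_metrics: dict,
                   min_improvement: float = 0.01) -> bool:
    """Gate deployment on validation accuracy with an improvement margin."""
    return (candidate_metrics["val_accuracy"]
            >= production_metrics["val_accuracy"] + min_improvement)

prod = {"val_accuracy": 0.90}
better = {"val_accuracy": 0.93}
worse = {"val_accuracy": 0.905}   # within the noise margin: do not promote

print(should_promote(better, prod))  # True
print(should_promote(worse, prod))   # False
```

In a real pipeline this check would run as a step between model evaluation and deployment, with the actual rollout handled by the orchestrator (Kubeflow, Argo, etc.).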

Posted 1 month ago

Apply

4 years

0 Lacs

Mumbai, Maharashtra

Remote

Source: Indeed

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role
Understand client business requirements and provide a suitable approach to solve the business problem in different domains. Develop strategies to solve problems in logical yet creative ways. Help and manage teams of data scientists and produce project deliverables. Process, cleanse and verify the integrity of data used for analysis. Perform data mining and EDAs using state-of-the-art methods. Select features, and build and optimize classifiers / regressors, etc. using machine learning & deep learning techniques. Help enhance data collection procedures to include information that is relevant for building analytical systems. Do ad-hoc analysis and present results in a clear manner. Follow and help improve 'Analytics-Delivery' procedures using detailed documentation. Create custom reports and presentations accompanied by strong data visualization and storytelling. Present analytical conclusions to senior officials in a company and other stakeholders.

Your Profile
Education: Engineering graduates (B.E/B.Tech) / other graduates (Mathematics/ Statistics/ Operational Research/ Computer Science & Applications). Minimum 4 years of relevant experience. Experience in data management and visualization tasks. Experience with common data science toolkits, such as NumPy, Pandas, statsmodels, scikit-learn, SciPy, NLTK, spaCy, Gensim, OpenCV, MLflow, TensorFlow, PyTorch, etc. Excellent understanding of / hands-on experience in NLP and computer vision. Good understanding of / hands-on experience in building deep-learning models for text & image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
Knowledge of / experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, H2O, etc. Knowledge of / hands-on exposure to large-scale datasets and the respective tools & frameworks, such as Hadoop, PySpark, etc. Knowledge of / hands-on experience with deployment of models in production. Hands-on experience in GenAI.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
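The classifier-building work listed in the profile (scikit-learn toolkit, selecting features, optimizing classifiers) commonly follows a fit / tune / evaluate loop. A minimal illustrative sketch on a synthetic dataset (the dataset and parameter grid are invented for the example):

```python
# Classifier-optimization sketch with scikit-learn: synthetic data,
# a train/test split, and a cross-validated grid search over the
# regularization strength C. All values here are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Tune the regularization strength via 3-fold cross-validation.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0, 10.0]}, cv=3)
search.fit(X_tr, y_tr)

print(search.best_params_)                    # chosen hyperparameters
print(round(search.score(X_te, y_te), 2))     # held-out accuracy
```

The same pattern scales to the deep-learning frameworks the ad lists, with the grid search replaced by a tuner such as Optuna or KerasTuner.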

Posted 1 month ago

Apply

10 - 12 years

30 - 35 Lacs

Hyderabad

Work from Office

Source: Naukri

Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you: Build a career with a global company. Work on code that fuels the global financial markets. Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities: Solve problems, analyze and isolate issues. Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced. Champion best practices and serve as a subject matter authority. Develop solutions to support key business needs. Engineer components and common services based on standard development models, languages and tools. Produce system design documents and lead technical walkthroughs. Produce high-quality code. Collaborate effectively with technical and non-technical partners. As a team member, continuously improve the architecture.

Basic Qualifications: 10-12 years of experience designing/building data-intensive solutions using distributed computing. Proven experience in implementing and maintaining enterprise search solutions in large-scale environments. Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs. Experience developing and deploying search solutions in a public cloud such as AWS.
Proficient programming skills in high-level languages (Java, Scala, Python). Solid knowledge of at least one machine learning research framework. Familiarity with containerization, scripting, cloud platforms, and CI/CD. 5+ years of experience with Python, Java, Kubernetes, and data and workflow orchestration tools. 4+ years of experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks and MLflow. Prior experience with operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions. Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions. Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines. Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
Search technologies: query and indexing content for Apache Solr, Elasticsearch, etc. Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval. Experience with machine learning models and NLP techniques for search relevance and ranking. Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec). Experience with relevance tuning using A/B testing frameworks.
Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
Preferred languages: Python, Java.
Database technologies: MS SQL Server platform; stored procedure programming experience using Transact-SQL.
Ability to lead, train and mentor.
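The vector-search item in the preferred qualifications boils down to ranking documents by similarity between a query embedding and document embeddings. A toy sketch with NumPy (the 3-dimensional embeddings are invented; real systems use models like BERT and a dense-vector index in Elasticsearch or a vector database):

```python
# Vector-search sketch: rank documents by cosine similarity to a query
# embedding. Toy 3-d vectors stand in for real model embeddings.
import numpy as np

doc_vecs = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.1, 0.9, 0.0],   # doc 1
    [0.0, 0.1, 0.9],   # doc 2
], dtype=float)

def top_k(query: np.ndarray, docs: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k most cosine-similar documents."""
    sims = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
    return [int(i) for i in np.argsort(-sims)[:k]]

query = np.array([0.8, 0.2, 0.0])
print(top_k(query, doc_vecs))  # -> [0, 1]
```

A Learning-to-Rank stage, also mentioned in the ad, would then re-order this candidate list using richer features.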

Posted 1 month ago

Apply

0.0 - 4.0 years

0 Lacs

Mohali, Punjab

On-site

Source: Indeed

Job Description: Artificial Intelligence Developer/AI Developer

Company Profile: APPWRK IT Solutions Pvt. Ltd. is an India-based IT service provider founded in 2012, intent on associating with the right people at the right place to achieve the best possible results. Since 2012, APPWRK IT Solutions has been continuously developing web applications for businesses across the globe. We have successfully delivered numerous projects in the IT field, covering mobile, desktop, and web applications. We are well known for our expertise, performance, and commitment to delivering high-quality solutions. As an IT services and product-based company, we cater to various industries, providing cutting-edge technology solutions tailored to our clients' needs. We take pride in working with Fortune 500 clients like Unilever and have a strong global presence in the US, Netherlands, and Australia. In India, we operate from Chandigarh and Delhi, offering top-tier IT solutions to businesses worldwide. Our team of skilled professionals is dedicated to driving innovation, efficiency, and digital transformation for our clients.

Location: Mohali
Experience: 1-5 years

About the Role: We are looking for a highly skilled and motivated Artificial Intelligence Developer/AI Developer.

Key Responsibilities: Develop, train, validate, and deploy AI/ML models for real-world use cases. Collaborate with cross-functional teams to define and implement end-to-end AI systems. Design, develop, and maintain scalable APIs and microservices to expose ML models. Build and integrate user-friendly front-end interfaces to support AI-powered applications. Write clean, maintainable, and testable code for both client-side and server-side components. Implement MLOps practices including model versioning, CI/CD pipelines, monitoring, and logging.
Conduct performance tuning, A/B testing, and model evaluation for continuous improvement. Stay updated with emerging technologies and best practices in AI and software engineering.

Required Skills and Qualifications: Bachelor's/Master's degree in Computer Science, Artificial Intelligence, or related fields. 2-4 years of experience in AI/ML development using Python, TensorFlow, PyTorch, or similar. Strong understanding of data structures, algorithms, machine learning concepts, and model evaluation. Proficiency in backend frameworks like Django, Flask, or FastAPI. Solid front-end development skills with React, Angular, or Vue.js, or at least HTML, CSS, and JavaScript. Experience with RESTful APIs, GraphQL, containerization (Docker/Kubernetes), and cloud platforms (AWS, GCP, or Azure). Familiarity with databases (SQL and NoSQL) and version control (Git). Knowledge of AI deployment pipelines, model serving, and monitoring tools.

Preferred Skills: Experience with LLMs (e.g., GPT models, LangChain, Hugging Face Transformers). Knowledge of data annotation tools, NLP, or computer vision. Exposure to CI/CD pipelines and MLOps tools (MLflow, Kubeflow). Contributions to open-source projects or AI hackathons are a plus.

Soft Skills: Excellent problem-solving and analytical abilities. Strong communication and collaboration skills. Ability to take ownership and deliver in a fast-paced environment.

If you are a motivated developer looking for a rewarding opportunity, apply now to be part of our growing team!

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred)
Application Question(s): How many years of work experience do you have in AI/ML development?
Work Location: In person

Posted 1 month ago

Apply

4 - 8 years

8 - 18 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Summary: We are looking for a results-driven and innovative Data Scientist with 4-6 years of experience in data analysis, machine learning, and product optimization. The ideal candidate will have a strong foundation in Python, SQL, and cloud services, along with practical exposure to GenAI, LLMs, and MLOps frameworks. You will be responsible for building scalable data pipelines, developing machine learning models, and solving real-world business problems with data-driven solutions.

Key Responsibilities: Design and deploy LLM-powered solutions (e.g., RAG, LangChain, vector DBs) to enhance business processes. Build and fine-tune traditional ML models (Random Forest, decision trees) for predictive analytics. Optimize LLM performance using LoRA fine-tuning and post-training quantization. Develop and deploy containerized AI applications using Docker and FastAPI. Work with agent frameworks like CrewAI and LangSmith to automate document processing and data extraction workflows. Implement Python & SQL-based ETL pipelines for real-time data ingestion. Design dashboards and KPI monitoring tools using Metabase to enable data-driven decision-making. Create data consumption triggers and automate reporting for international stakeholders. Moderate large-scale live virtual data science classes and provide operational support.

Required Skills:
Programming: Python, SQL, FastAPI
ML & AI: Random Forest, decision trees, clustering, PCA, DL, NLP, Transformers, GenAI
Frameworks/Tools: LangChain, MLflow, Kubeflow, CrewAI, LangSmith
DevOps: Docker, Git
Databases: MySQL, PostgreSQL, MongoDB
Cloud: AWS (S3, EC2)
Visualization: Metabase
Other: Experience with LLM fine-tuning and quantization
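The post-training quantization mentioned above maps float weights to low-precision integers with a scale factor. A sketch of the simplest symmetric int8 scheme with NumPy (the weight values are invented; production quantizers are per-channel and calibration-based):

```python
# Post-training quantization sketch: symmetric int8 with a single scale.
# Reconstruction error is bounded by half the quantization step (scale / 2).
import numpy as np

def quantize_int8(w: np.ndarray):
    """Quantize float weights to int8, returning (q, scale)."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Map int8 codes back to approximate float weights."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.max(np.abs(w - w_hat)))  # small reconstruction error
```

LoRA fine-tuning, also named in the ad, addresses the complementary problem: adapting a model cheaply by training low-rank weight deltas rather than compressing the stored weights.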

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Source: LinkedIn

About the Role
We're looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, machine learning, MLOps, and application development, we want to hear from you. You'll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems.

Key Responsibilities
Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration. Architect and build production-grade Python applications using frameworks such as FastAPI or Flask. Implement gRPC services, event-driven systems (Kafka, Pub/Sub), and CI/CD pipelines for scalable deployment. Collaborate with cross-functional teams to frame business problems as ML use cases: regression, classification, ranking, forecasting, and anomaly detection. Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring. Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines. Ensure model evaluation, A/B testing, and hyperparameter tuning are done rigorously for production systems.

Must-Have Skills
Hands-on experience with GenAI/LLM-based applications: RAG, evals, vector stores, embeddings. Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures. Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure). Proficiency in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability. Proven experience in batch data pipelines and training/inference orchestration. Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
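The RAG technique named in the responsibilities has two steps: retrieve the most relevant context for a query, then assemble an augmented prompt for the LLM. A toy sketch (the documents, naive keyword-overlap scoring, and prompt template are invented for illustration; real systems use embeddings and a vector store):

```python
# RAG sketch: retrieve relevant context, then build an augmented prompt.
# Keyword-overlap scoring stands in for embedding similarity search.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Shipping is free for orders above 500 rupees.",
    "Support is available 24x7 via chat.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the retrieved context and the question into one prompt."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("how long do refunds take", DOCS)
print(prompt)
```

The assembled prompt would then be sent to the LLM; grounding the answer in retrieved context is what reduces hallucination relative to asking the model directly.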

Posted 1 month ago

Apply

18 - 22 years

0 Lacs

Hyderabad, Telangana, India

Hybrid

Source: LinkedIn

DATAECONOMY is one of the fastest-growing Data & AI companies with a global presence. We are well-differentiated and are known for our thought leadership, out-of-the-box products, cutting-edge solutions, accelerators, innovative use cases, and cost-effective service offerings. We offer products and solutions in Cloud, Data Engineering, Data Governance, AI/ML, DevOps and Blockchain to large corporates across the globe. We are strategic partners with AWS, Collibra, Cloudera, Neo4j, DataRobot, Global IDs, Tableau, MuleSoft and Talend.

Job Title: Delivery Head
Experience: 18 - 22 Years
Location: Hyderabad
Notice Period: Immediate joiners are preferred

Job Summary: We are seeking a seasoned Technical Delivery Manager with deep expertise in data engineering and data science to lead complex data initiatives and drive successful delivery across cross-functional teams. The ideal candidate brings a blend of strategic thinking, technical leadership, and project execution skills, along with hands-on knowledge of modern data platforms, machine learning, and analytics frameworks.
Key Responsibilities:

Program & Delivery Management: Oversee end-to-end delivery of large-scale data programs, ensuring alignment with business goals, timelines, and quality standards. Manage cross-functional project teams including data engineers, data scientists, analysts, and DevOps personnel. Ensure agile delivery through structured sprint planning, backlog grooming, and iterative delivery.

Technical Leadership: Provide architectural guidance and review of data engineering pipelines and machine learning models. Evaluate and recommend modern data platforms (e.g., Snowflake, Databricks, Azure Data Services, AWS Redshift, GCP BigQuery). Ensure best practices in data governance, quality, and compliance (e.g., GDPR, HIPAA).

Stakeholder & Client Management: Act as the primary point of contact for technical discussions with clients, business stakeholders, and executive leadership. Translate complex data requirements into actionable project plans. Present technical roadmaps and delivery status to stakeholders and C-level executives.

Team Development & Mentoring: Lead, mentor, and grow a high-performing team of data professionals. Conduct code and design reviews; promote innovation and continuous improvement.

Key Skills and Qualifications: Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field. 18-22 years of total IT experience with at least 8-10 years in data engineering, analytics, or data science. Proven experience delivering enterprise-scale data platforms, including: ETL/ELT pipelines using tools like Apache Spark, Airflow, Kafka, Talend, or Informatica; data warehouse and lake architectures (e.g., Snowflake, Azure Synapse, AWS Redshift, Delta Lake); machine learning lifecycle management (e.g., model training, deployment, MLOps using MLflow, SageMaker, or Vertex AI). Strong knowledge of cloud platforms (Azure, AWS, or GCP). Deep understanding of Agile, Scrum, and DevOps principles. Excellent problem-solving, communication, and leadership skills.
Preferred Certifications (Optional but Beneficial): PMP, SAFe Agile, or similar project management certifications. Certifications in cloud platforms (e.g., AWS Certified Data Analytics, Azure Data Engineer Associate). Certified Scrum Master (CSM) or equivalent.

Posted 1 month ago

Apply

7 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Position Overview: The Databricks Data Engineering Lead role is ideal for a highly skilled Databricks data engineer who will architect and lead the implementation of scalable, high-performance data pipelines and platforms using the Databricks Lakehouse ecosystem. The role involves managing a team of data engineers, establishing best practices, and collaborating with cross-functional stakeholders to unlock advanced analytics, AI/ML, and real-time decision-making capabilities.

Key Responsibilities: Lead the design and development of modern data pipelines, data lakes, and lakehouse architectures using Databricks and Apache Spark. Manage and mentor a team of data engineers, providing technical leadership and fostering a culture of excellence. Architect scalable ETL/ELT workflows to process structured and unstructured data from various sources (cloud, on-prem, streaming). Build and maintain Delta Lake tables and optimize performance for analytics, machine learning, and BI use cases. Collaborate with data scientists, analysts, and business teams to deliver high-quality, trusted, and timely data products. Ensure best practices in data quality, governance, lineage, and security, including the use of Unity Catalog and access controls. Integrate Databricks with cloud platforms (AWS, Azure, or GCP) and data tools (Snowflake, Kafka, Tableau, Power BI, etc.). Implement CI/CD pipelines for data workflows using tools such as GitHub, Azure DevOps, or Jenkins. Stay current with Databricks innovations and provide recommendations on platform strategy and architecture improvements.

Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Experience: 7+ years of experience in data engineering, including 3+ years working with Databricks and Apache Spark. Proven leadership experience in managing and mentoring data engineering teams.
Skills: Proficiency in PySpark and SQL, and experience with Delta Lake, Databricks Workflows, and MLflow.
Strong understanding of data modeling, distributed computing, and performance tuning. Familiarity with one or more major cloud platforms (Azure, AWS, GCP) and cloud-native services. Experience implementing data governance and security in large-scale environments. Experience with real-time data processing using Structured Streaming or Kafka. Knowledge of data privacy, security frameworks, and compliance standards (e.g., PCIDSS, GDPR). Exposure to machine learning pipelines, notebooks, and ML Ops practices. Certifications :Databricks Certified Data Engineer or equivalent certification.
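The Delta Lake maintenance this role describes centres on upsert-style table updates. As a rough illustration of those semantics (a pure-Python stand-in, not the Databricks API — in a real pipeline this would be `DeltaTable.merge` in PySpark or SQL `MERGE INTO`):

```python
# Illustrative sketch of Delta Lake "MERGE" (upsert) semantics:
# incoming records update rows with matching keys and insert new ones.
# Pure-Python stand-in; toy data, hypothetical column names.

def merge_upsert(target, updates, key="id"):
    """Merge `updates` into `target` keyed on `key` (upsert semantics)."""
    merged = {row[key]: row for row in target}   # index existing rows by key
    for row in updates:
        merged[row[key]] = {**merged.get(row[key], {}), **row}  # update or insert
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}]
updates = [{"id": 2, "amount": 250}, {"id": 3, "amount": 300}]
print(merge_upsert(target, updates))
```

The point of delegating this to Delta Lake rather than hand-rolling it is that MERGE runs transactionally over distributed Parquet files, with time travel and schema enforcement on top.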

Posted 1 month ago

Apply

0 - 1 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Indeed logo

Job Title: Data Scientist (Generative AI) Job Description Your role: We are seeking a highly skilled Data Scientist with Generative AI expertise to join our team. In this role, you will develop, fine-tune, and optimize generative AI models to drive innovation across various applications, including text, image, audio, and video generation. You will collaborate with cross-functional teams, including machine learning engineers, software developers, and product managers, to create cutting-edge AI solutions. You're the right fit if: Bachelor's or master's degree in computer science, AI, Data Science, Machine Learning, or a related field. 10+ years of experience in machine learning, deep learning, or AI research, with at least 1 year of hands-on experience in generative AI. Strong proficiency in Python and ML frameworks like TensorFlow, PyTorch, or JAX. Experience working with LLMs, diffusion models, GANs, VAEs, or transformers. Knowledge of natural language processing (NLP), computer vision, or multimodal AI applications. Familiarity with prompt engineering, fine-tuning, and RLHF (Reinforcement Learning from Human Feedback). Experience in cloud-based AI solutions and working with APIs (e.g., OpenAI, Hugging Face, Stability AI). Strong problem-solving skills and the ability to work in a fast-paced, research-driven environment. Experience with vector databases (e.g., FAISS, Pinecone) and retrieval-augmented generation (RAG). Hands-on experience with MLOps tools (e.g., MLflow, Kubeflow, Docker, Kubernetes). Understanding of ethical AI and bias mitigation in generative models. Strong publication record or contributions to open-source AI projects. How we work together We believe that we are better together than apart. For our office-based teams, this means working in-person at least 3 days per week. Onsite roles require full-time presence in the company's facilities.
Field roles are most effectively done outside of the company's main facilities, generally at the customers' or suppliers' locations. About Philips We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business. Discover our rich and exciting history. Learn more about our purpose. If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our commitment to diversity and inclusion here. #LI-EU #LI-Hybrid
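The vector-database and RAG experience this posting asks for ultimately rests on nearest-neighbour search over embeddings. A toy sketch of that ranking step, with made-up 3-dimensional vectors standing in for what FAISS or Pinecone index at scale:

```python
import math

# Minimal sketch of the lookup at the heart of RAG: rank stored document
# vectors by cosine similarity to a query vector. Toy 3-d embeddings and
# hypothetical doc ids; production systems delegate this to FAISS/Pinecone.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    """Return the k document ids most similar to `query`."""
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

index = [("doc_a", [1.0, 0.0, 0.0]),
         ("doc_b", [0.0, 1.0, 0.0]),
         ("doc_c", [0.9, 0.1, 0.0])]
print(top_k([1.0, 0.05, 0.0], index))
```

Real vector stores replace this linear scan with approximate-nearest-neighbour indexes (e.g., HNSW) so retrieval stays fast over millions of documents.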

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Hybrid

Linkedin logo

Description And Requirements CareerArc Code CA-PS Hybrid "At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The DSOM product line includes BMC’s industry-leading Digital Services and Operation Management products. We have many interesting SaaS products, in the fields of: predictive IT service management, automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees’ innovative ideas. So, if innovation is close to your heart – this is the place for you! BMC is looking for an experienced Data Science Engineer with hands-on experience with Classical ML, Deep Learning Networks, and Large Language Models to join us and design, develop, and implement microservice-based edge applications, using the latest technologies. In this role, you will be responsible for end-to-end design and execution of BMC Data Science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advance analysis, strategies, and policies.
Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Ideate, design, implement and maintain enterprise business software platform for edge and cloud, with a focus on Machine Learning and Generative AI Capabilities, using mainly Python Work with a globally distributed development team to perform requirements analysis, write design documents, design, develop and test software development projects. Understand real world deployment and usage scenarios from customers and product managers and translate them to AI/ML features that drive value of the product. Work closely with product managers and architects to understand requirements, present options, and design solutions. Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption. Analyze and clearly communicate both verbally and in written form the status of projects or issues along with risks and options to the stakeholders. To ensure you’re set up for success, you will bring the following skillset & experience: You have 8+ years of hands-on experience in data science or machine learning roles. You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains. You have strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement. You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments. You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). 
Whilst these are nice to have, our team can help you develop in the following skills: Experience with digital twins, real-time analytics, or streaming data systems. Contribution to open-source ML/AI/IoT projects or relevant publications. Experience with Agile development methodology and best practices in unit testing. Experience with Kubernetes (kubectl, helm) will be an advantage. Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.). Our commitment to you! BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If after reading the above, you’re unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas! BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page. BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 8,047,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs.
The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. (Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply. Min salary 6,035,850 Mid point salary 8,047,800 Max salary 10,059,750
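The sensor-data and anomaly-detection work this role describes often starts from a baseline like a rolling z-score over a trailing window. A minimal sketch, with illustrative readings (window size and threshold are arbitrary choices for the example, not BMC's method):

```python
import statistics

# Baseline time-series anomaly detection: flag points far from the
# trailing window's mean, measured in window standard deviations.

def anomalies(series, window=5, threshold=3.0):
    """Return indices whose value lies > threshold sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean, stdev = statistics.mean(past), statistics.pstdev(past)
        if stdev and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 25.0, 10.0]
print(anomalies(readings))  # the 25.0 spike at index 6 is flagged
```

Production systems layer seasonality handling and learned models (isolation forests, autoencoders) on top, but the windowed-statistics idea is the usual starting point.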

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

DXFactor is a US-based tech company working with customers across the globe. We are a Great Place to Work certified company. We are looking for candidates for Data Scientist (3-6 yrs exp). We have our presence in: US and India (Ahmedabad, Bangalore). Location: Ahmedabad (work from office). Website: www.DXFactor.com Designation: Data Scientist (3-6 yrs exp) Key Responsibilities Design, develop, and implement end-to-end data science pipelines Gather, clean, and preprocess data from various sources Build, train, and evaluate machine learning models using appropriate algorithms Design and optimize prompts for large language models (LLMs) and other generative AI systems Develop solutions leveraging Agentic GenAI architecture for business applications Deploy models to production environments and monitor their performance Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions Document processes, methodologies, and results Stay up-to-date with the latest trends and advancements in data science and machine learning Requirements Bachelor's degree in Computer Science, Statistics, Mathematics, or related field (Master's preferred) Minimum 3+ years of professional experience in data science or related roles Strong proficiency in Python and data science libraries (pandas, numpy, scikit-learn, etc.) Experience with common machine learning algorithms, including: Decision Trees, Linear/Logistic Regression, Random Forest Other supervised and unsupervised learning methods Demonstrated experience with prompt engineering for LLMs and generative AI models In-depth knowledge of Agentic GenAI architecture and applications Knowledge of deep learning frameworks (TensorFlow, PyTorch) Ability to build APIs using Flask, Django, or FastAPI Familiarity with MLOps practices and tools (MLflow, Kubeflow, etc.)
Experience with cloud platforms (AWS, GCP, Azure) Strong analytical and problem-solving skills Excellent communication skills and ability to explain complex concepts to non-technical stakeholders
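Prompt engineering for LLMs, as asked for above, is largely disciplined prompt assembly: a task instruction plus few-shot examples rendered into one string. A hypothetical sketch (the template, examples, and field names are invented for illustration, not a real product prompt or any vendor's API):

```python
# Hypothetical few-shot prompt assembly for an LLM call.
# Everything here is illustrative; a real pipeline would also manage
# token budgets and send the result to a model API.

def build_prompt(instruction, examples, query):
    """Render an instruction, few-shot pairs, and a query into one prompt."""
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("Great product!", "positive"), ("Arrived broken.", "negative")],
    "Support was very helpful.",
)
print(prompt)
```

Treating prompts as versioned templates like this is what makes them testable and tunable rather than ad-hoc strings scattered through application code.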

Posted 1 month ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Linkedin logo

What makes Techjays an inspiring place to work: At Techjays, we are driving the future of artificial intelligence with a bold mission to empower businesses worldwide by helping them build AI solutions that transform industries. As an established leader in the AI space, we combine deep expertise with a collaborative, agile approach to deliver impactful technology that drives meaningful change. Our global team consists of professionals who have honed their skills at leading companies such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. With engineering teams across the globe, we deliver tailored AI software and services to clients ranging from startups to large-scale enterprises. Be part of a company that’s pushing the boundaries of digital transformation. At Techjays, you’ll work on exciting projects that redefine industries, innovate with the latest technologies, and contribute to solutions that make a real-world impact. Join us on our journey to shape the future with AI. We are looking for a mid-level AI Implementation Engineer with a strong Python background to join our AI initiatives team. In this role, you will help design and develop production-ready systems that combine information retrieval, vector databases, and large language models (LLMs) into scalable Retrieval-Augmented Generation (RAG) pipelines. You’ll work closely with AI researchers, backend engineers, and data teams to bring generative AI use cases to life across multiple domains.
Key Responsibilities: Develop and maintain scalable Python services that implement AI-powered retrieval and generation workflows. Build and optimize vector-based retrieval pipelines using tools like FAISS, Pinecone, or Weaviate. Integrate LLMs via APIs (e.g., OpenAI, Hugging Face) using orchestration frameworks such as LangChain, LlamaIndex, etc. Collaborate on system architecture, API design, and data flow to support RAG systems. Monitor, test, and improve the performance and accuracy of AI features in production. Work in cross-functional teams with product, data, and ML stakeholders to deploy AI solutions quickly and responsibly. Requirements: 3–5 years of hands-on experience in Python development, with focus on backend or data-intensive systems. Experience with information retrieval concepts and tools (e.g., Elasticsearch, vector search engines). Familiarity with LLM integration or orchestration tools (LangChain, LlamaIndex, etc.). Working knowledge of RESTful API development, microservices, and containerization (Docker). Solid software engineering practices including Git, testing, and CI/CD pipelines. Nice to have: Exposure to prompt engineering or fine-tuning LLMs. Experience deploying cloud-based AI applications (AWS, GCP, or Azure). Familiarity with document ingestion pipelines and unstructured data processing. Understanding of MLOps tools and practices (e.g., MLflow, Airflow). What we offer: Best-in-class packages. Paid holidays and flexible time-off policies. Casual dress code and a flexible working environment. Opportunities for professional development in an engaging, fast-paced environment. Medical insurance covering self and family up to 4 lakhs per person. Diverse and multicultural work environment. Be part of an innovation-driven culture with ample support and resources to succeed.
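The document-ingestion work mentioned in the nice-to-haves typically begins by splitting text into overlapping chunks, so retrieval does not lose context at chunk boundaries. A minimal sketch (word-based sizes for simplicity; production RAG pipelines usually count tokens instead):

```python
# Fixed-size chunking with overlap for RAG ingestion.
# Sizes are in words here purely for illustration.

def chunk(words, size=50, overlap=10):
    """Split a word list into `size`-word chunks, each sharing `overlap` words."""
    step = size - overlap
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

words = [f"w{i}" for i in range(120)]
pieces = chunk(words)
print(len(pieces), len(pieces[0]))
```

The overlap means the tail of each chunk is repeated at the head of the next, so a sentence straddling a boundary is still retrievable as a whole from at least one chunk.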

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Armakuni (AWS Premier Partner) is a trusted partner in helping organizations leverage cloud-native technologies to achieve agility, scalability, and resilience. Cloud-Native Transformation: Guiding businesses through adopting cloud-native practices to optimize their operations. DevOps and Platform Engineering: Building robust, secure, and scalable platforms tailored to your unique needs. Data Engineering: With our data engineering service you can transform your data into actionable business outcomes. Gen AI & ML: Empower business strategy with AI-driven growth through Gen-AI services. Tailored Solutions: Providing expert consulting services to deliver long-term success for your teams. Get more insights about us by visiting us on our website https://www.armakuni.com/ Explore all our AWS capabilities here: Armakuni AWS Marketplace. We are currently hiring a Lead AI/ML Engineer to join our team! The following is the required skill set: Skills: Gen AI, LLMs, AWS SageMaker, AWS Bedrock, MLOps (MLflow, Kubeflow) Experience: 5-8 Years Location: Ahmedabad Roles & Responsibilities: Model Development & Deployment: Design, build, and deploy machine learning models to production environments using AWS SageMaker, AWS Bedrock, and other AWS AI/ML services. Develop and deploy Generative AI applications using various LLMs. Work on data preprocessing, exploratory data analysis (EDA), feature engineering, and model training pipelines using various ML pipelines and AWS services. Implement best practices for model versioning, monitoring, and scaling in production. Collaboration & Communication: Collaborate with data engineers to translate business requirements into ML models and work on data science-related problems as needed. Work with DevOps and Data Engineering teams to deploy and maintain models in production, with a strong understanding of MLOps. Present findings and progress to stakeholders, ensuring alignment with business objectives.
Innovation & Improvement Continuously explore and implement new AWS AI/ML services (e.g., Amazon Rekognition, Comprehend, Lex, Polly, AWS Bedrock, and open-source LLMs) to improve existing solutions. Optimize models for performance, scalability, and cost-effectiveness in the AWS cloud environment. Documentation & Compliance: Maintain detailed documentation of model architectures, training processes, and deployment pipelines. Ensure compliance with industry standards and data privacy regulations.

Posted 1 month ago

Apply

4 years

0 Lacs

Greater Kolkata Area

Linkedin logo

Job Summary In this role, you will lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI. Job Description Company Overview: Outsourced is a leading ISO certified India & Philippines offshore outsourcing company that provides dedicated remote staff to some of the world's leading international companies. Outsourced is recognized as one of the Best Places to Work and has achieved Great Place to Work Certification. We are committed to providing a positive and supportive work environment where all staff can thrive. As an Outsourced staff member, you will enjoy a fun and friendly working environment, competitive salaries, opportunities for growth and development, work-life balance, and the chance to share your passion with a team of over 1000 talented professionals. Job Responsibilities Lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI, establishing best practices for scalability, reliability, and maintainability while actively contributing to relevant open source communities. Design and develop robust, production-grade features focused on AI trustworthiness, including model monitoring. Drive technical decision-making around system architecture, technology selection, and implementation strategies for key MLOps components, with a focus on open source technologies. Define and implement technical standards for model deployment, monitoring, and validation pipelines, while mentoring team members on MLOps best practices and engineering excellence. Collaborate with product management to translate customer requirements into technical specifications, architect solutions that address scalability and performance challenges, and provide technical leadership in customer-facing discussions. Lead code reviews, architectural reviews, and technical documentation efforts to ensure high code quality and maintainable systems across distributed engineering teams. Identify and resolve complex technical challenges in production environments, particularly around model serving, scaling, and reliability in enterprise Kubernetes deployments. Partner with cross-functional teams to establish technical roadmaps, evaluate build-vs-buy decisions, and ensure alignment between engineering capabilities and product vision. Provide technical mentorship to team members, including code review feedback, architecture guidance, and career development support while fostering a culture of engineering excellence. Required Qualifications 5+ years of software engineering experience, with at least 4 years focusing on ML/AI systems in production environments. Strong expertise in Python, with demonstrated experience building and deploying production ML systems. Deep understanding of Kubernetes and container orchestration, particularly in ML workload contexts. Extensive experience with MLOps tools and frameworks (e.g., KServe, Kubeflow, MLflow, or similar). Track record of technical leadership in open source projects, including significant contributions and community engagement. Proven experience architecting and implementing large-scale distributed systems. Strong background in software engineering best practices, including CI/CD, testing, and monitoring. Experience mentoring engineers and driving technical decisions in a team environment. Preferred Qualifications Experience with Red Hat OpenShift or similar enterprise Kubernetes platforms. Contributions to ML/AI open source projects, particularly in the MLOps/GitOps space. Background in implementing ML model monitoring. Experience with LLM operations and deployment at scale. Public speaking experience at technical conferences. Advanced degree in Computer Science, Machine Learning, or related field. Experience working with distributed engineering teams across multiple time zones. What we Offer Health Insurance: We provide medical coverage up to 20 lakh per annum, which covers you, your spouse, and a set of parents. This is available after one month of successful engagement. Professional Development: You'll have access to a monthly upskill allowance of ₹5000 for continued education and certifications to support your career growth. Leave Policy: Vacation Leave (VL): 10 days per year, available after probation. You can carry over or encash up to 5 unused days. Casual Leave (CL): 8 days per year for personal needs or emergencies, available from day one. Sick Leave: 12 days per year, available after probation. Flexible Work Hours or Remote Work Opportunities – depending on the role and project. Outsourced benefits such as Paternity Leave, Maternity Leave, etc.

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect. What You'll Do (Responsibilities): Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure. Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation. Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM. Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS. Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline. Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems. Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML pipelines) and Bedrock with TensorFlow or PyTorch. Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs. Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs. Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control. Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink. Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs. Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances. Contribute to backend development in Python (web frameworks), REST/Socket and gRPC design, and testing (unit/integration). Participate in incident response, performance tuning, and continuous system improvement.
Good to Have: Hands-on experience with ML lifecycle tools like MLflow and Kubeflow. Previous involvement in production-grade AI/ML projects or data-intensive systems. Startup or high-growth tech company experience. Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role. Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling. Strong communication and collaboration skills to work across engineering, data science, and product teams. Benefits: Competitive Salary 💸 Support for continual learning (free books and online courses) 📚 Leveling Up Opportunities 🌱 Diverse team environment 🌍

Posted 1 month ago

Apply

5 - 8 years

0 Lacs

Pune, Maharashtra, India

Linkedin logo

Role Overview We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving. Key Responsibilities Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring. Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems. Develop and manage scalable infrastructure using AWS, particularly AWS Sagemaker. Automate ML workflows using CI/CD best practices and tools. Ensure model reproducibility, governance, and performance tracking. Monitor deployed models for data drift, model decay, and performance metrics. Implement robust versioning and model registry systems. Apply security, performance, and compliance best practices across ML systems. Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities. Required Skills & Qualifications 4+ years of experience in Software Engineering or MLOps, preferably in a production environment. Proven experience with AWS services, especially AWS Sagemaker for model development and deployment. Working knowledge of AWS DataZone (preferred). Strong programming skills in Python, with exposure to R, Scala, or Apache Spark. Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes). Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools. Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline. Solid understanding of DevOps and cloud-native infrastructure practices. Excellent problem-solving skills and the ability to work collaboratively across teams. (ref:hirist.tech)
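Monitoring deployed models for data drift, as this role requires, can be sketched as a comparison between a live feature window and the training baseline. A simplified illustration with made-up values (real monitors use richer statistics such as PSI or KS tests, and tools like SageMaker Model Monitor automate the bookkeeping):

```python
import statistics

# Toy data-drift check: alert when the live window's mean moves more
# than `threshold` baseline standard deviations from the baseline mean.

def drifted(baseline, live, threshold=2.0):
    """True if the live mean shifts > threshold baseline-sigmas from baseline."""
    sigma = statistics.pstdev(baseline)
    return abs(statistics.mean(live) - statistics.mean(baseline)) > threshold * sigma

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]          # training-time feature values
print(drifted(baseline, [1.0, 1.02, 0.98, 1.01]))    # stable window
print(drifted(baseline, [1.6, 1.7, 1.65, 1.72]))     # shifted window
```

In a pipeline, a check like this would run per feature on a schedule and trigger retraining or an alert when it fires, with thresholds tuned per feature.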

Posted 1 month ago

Apply

5 - 10 years

10 - 20 Lacs

Pune

Hybrid

Naukri logo

This AI Ops Engineer role focuses on deploying, monitoring, and scaling AI/GenAI models using MLOps, CI/CD, cloud (AWS/Azure/GCP), Python, Kubernetes, MLflow, security, and automation.

Posted 1 month ago

Apply

8 - 12 years

25 - 30 Lacs

Hyderabad

Work from Office

Naukri logo

Roles and Responsibilities Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation. Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training. Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs. Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization. Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production. Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability. Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices. Qualifications Required Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. Relevant Ph.D. or research experience in generative AI is a plus. Experience: 8-12 years of experience in machine learning, with 2+ years in designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required Generative AI: Transformer Models, GANs, VAEs, Text Generation, Image Generation Machine Learning: Algorithms, Deep Learning, Neural Networks Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow MLOps: Docker, Kubernetes, MLflow, Cloud Platforms (AWS, GCP, Azure) Data Engineering: Data Preprocessing, Feature Engineering, Data Cleaning
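Behind the transformer text generation listed above sits a decoding step: model logits are turned into a token distribution via a temperature-scaled softmax before sampling or greedy selection. A small self-contained sketch of that step (toy logits, not any specific framework's API):

```python
import math

# Temperature-scaled softmax, the core of LLM decoding: lower temperature
# sharpens the distribution, higher temperature flattens it.

def softmax(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                   # toy scores for three candidate tokens
probs = softmax(logits, temperature=0.7)
print(probs.index(max(probs)))             # greedy decoding picks the argmax token
```

Sampling decoders draw from this distribution instead of taking the argmax, which is why temperature is the main knob trading off determinism against diversity in generated text.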

Posted 1 month ago

Apply

7 - 11 years

50 - 60 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

Role: Resident Solution Architect. Location: Remote. The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all the while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives. Specific requirements for the role include: Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake. Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark. In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib. IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, pros/cons, and migration considerations. Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services. Extensive hands-on experience with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc. Experience using Azure DevOps and CI/CD as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence. Experience in creating tables, partitioning, bucketing, loading, and aggregating data using Spark SQL/Scala. Able to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models. Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization. Responsibilities: Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation. Guide customers in transforming big data projects, including development and deployment of big data and AI applications. Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable. Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure. Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling. Aid developers to identify, design, and implement process improvements with automation tools to optimize data delivery. Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it. Employ change management best practices to ensure that data remains readily accessible to the business. Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs, with experience in MDM using data governance solutions. Qualifications: Overall experience of 12+ years in the IT field. Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near real-time data warehouses, and machine learning solutions. Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions. Experience in a software development, data engineering, or data analytics field using Python, Scala, Spark, Java, or equivalent technologies. Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience. Good to have: Advanced technical certifications such as Azure Solutions Architect Expert, AWS Certified Data Analytics, DASCA Big Data Engineering and Analytics, AWS Certified Cloud Practitioner, Solutions Architect, or Professional Google Cloud Certified. Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka

Work from Office

KEY ACCOUNTABILITIES
  • Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives.
  • Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning).
  • Analyse and preprocess large-scale datasets for training and evaluation.
  • Train, test, and optimize ML models for accuracy, scalability, and performance.
  • Deploy ML models in production using cloud platforms and/or MLOps best practices.
  • Monitor and evaluate model performance over time, ensuring reliability and robustness.
  • Document findings, methodologies, and results to share insights with stakeholders.

QUALIFICATIONS, EXPERIENCE AND SKILLS
  • Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming).
  • Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn.
  • Strong foundation in linear algebra, probability, statistics, and optimization techniques.
  • Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization.
  • Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark.
  • Ability to think critically and apply your knowledge to solve complex ML problems.
  • Strong communication and collaboration skills to work effectively in diverse teams.

Additional Skills (Good to have)
  • Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow).
  • Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark).
  • Previous internships, academic research, or projects showcasing your ML skills.
  • Familiarity with deployment frameworks like Docker and Kubernetes.

#LI-AA6

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: Account Executive

About MESH Works
Our mission is to connect buyers & suppliers around the world. MESH is a B2B SaaS product helping sourcing, procurement & supply chain teams find global suppliers faster, reduce sourcing costs & streamline supplier discovery. Our audited database of contract manufacturers includes suppliers from 40+ countries & 35+ industries. We're looking for an AI/ML Engineer to join our AI team, which works closely with product & tech teams on bringing AI to real-world problems & use cases for our customers. You can find more about us at www.meshworks.com

Key Responsibilities
  • Develop & fine-tune AI models for procurement, engineering, and supplier intelligence
  • Design multi-agent AI systems that interact & automate decision-making
  • Integrate LLMs, retrieval-augmented generation (RAG), and vector databases into our platform
  • Work with Azure AI services to build scalable and secure AI solutions
  • Optimize AI model performance for accuracy, efficiency, and low latency
  • Collaborate with software engineers to deploy models into production
  • Monitor and improve AI performance using MLOps and feedback loops

Requirements
  • 2+ years of experience in AI/ML engineering (startup or SaaS experience is a plus)
  • Deep expertise in: large language models (LLMs) and natural language processing; deep learning frameworks (PyTorch, TensorFlow); vector databases (Pinecone, Weaviate, FAISS); Python and AI libraries (Hugging Face, LangChain)
  • Strong understanding of Azure & OpenAI API integration
  • Multi-agent systems & AI orchestration
  • MLOps tools (MLflow, Kubeflow, Airflow)
  • Proven ability to work with both structured & unstructured data
  • Experience with cloud-based AI deployment

Posted 1 month ago

Apply

8 years

0 Lacs

Pune, Maharashtra

Work from Office

Basic Information
Country: India
State: Maharashtra
City: Pune
Date Published: 05-May-2025
Job ID: 44647
Travel: You may occasionally be required to travel for business
Secondary locations: IND Bangalore - Prestige Summit

Looking for details about our benefits? You can learn more about them by clicking HERE

Description and Requirements
CareerArc Code CA-PS #LI-PS1 Hybrid: #LI-Hybrid

"At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, and support you, and make you laugh out loud!

We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and we are relentless in the pursuit of innovation! The DSOM product line includes BMC's industry-leading Digital Services and Operation Management products. We have many interesting SaaS products in the fields of predictive IT service management, automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees' innovative ideas. So, if innovation is close to your heart - this is the place for you!

BMC is looking for an experienced Data Science Engineer with hands-on experience in classical ML, deep learning networks, and large language models to join us and design, develop, and implement microservice-based edge applications using the latest technologies.

In this role, you will be responsible for end-to-end design and execution of BMC data science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advanced analysis, strategies, and policies.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
  • Ideate, design, implement, and maintain an enterprise business software platform for edge and cloud, with a focus on machine learning and generative AI capabilities, using mainly Python.
  • Work with a globally distributed development team to perform requirements analysis, write design documents, and design, develop, and test software development projects.
  • Understand real-world deployment and usage scenarios from customers and product managers and translate them into AI/ML features that drive the value of the product.
  • Work closely with product managers and architects to understand requirements, present options, and design solutions.
  • Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption.
  • Analyze and clearly communicate, both verbally and in written form, the status of projects or issues, along with risks and options, to the stakeholders.

To ensure you're set up for success, you will bring the following skillset & experience:
  • 8+ years of hands-on experience in data science or machine learning roles.
  • Experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains.
  • Strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement.
  • Proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments.
  • Strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost).

Whilst these are nice to have, our team can help you develop the following skills:
  • Experience with digital twins, real-time analytics, or streaming data systems.
  • Contribution to open-source ML/AI/IoT projects or relevant publications.
  • Experience with Agile development methodology and best practices in unit testing.
  • Experience with Kubernetes (kubectl, helm) will be an advantage.
  • Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.).

Our commitment to you!
BMC's culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won't be known just by your employee number, but for your true authentic self. BMC lets you be YOU!

If after reading the above, you're unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experiences to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.

Posted 1 month ago

Apply

Exploring mlflow Jobs in India

The mlflow job market in India is rapidly growing as companies across various industries are increasingly adopting machine learning and data science technologies. mlflow, an open-source platform for the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with expertise in mlflow have a plethora of opportunities to explore and build a rewarding career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for mlflow professionals.

Average Salary Range

The average salary range for mlflow professionals in India varies based on experience:
  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in mlflow may include roles such as:
  1. Junior Machine Learning Engineer
  2. Machine Learning Engineer
  3. Senior Machine Learning Engineer
  4. Tech Lead
  5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to mlflow, professionals in this field are often expected to have skills in:
  • Python programming
  • Data visualization
  • Statistical modeling
  • Deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is mlflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between tracking, projects, and models in mlflow. (medium)
  • How do you deploy a machine learning model using mlflow? (medium)
  • Can you explain the concept of model registry in mlflow? (advanced)
  • What are the benefits of using mlflow in a machine learning project? (basic)
  • How do you manage experiments in mlflow? (medium)
  • What are some common challenges faced when using mlflow in a production environment? (advanced)
  • How can you scale mlflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in mlflow. (medium)
  • How do you compare different machine learning models using mlflow? (medium)
  • Describe a project where you successfully used mlflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in mlflow? (advanced)
  • How does mlflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of mlflow tracking server in a machine learning project? (medium)
  • What are some limitations of mlflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using mlflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with mlflow and how you resolved it. (advanced)
  • How do you manage dependencies in a mlflow project? (medium)
  • What are some key metrics to track when using mlflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of mlflow. (advanced)
  • How do you handle data drift in machine learning models deployed using mlflow? (advanced)
  • What are some security considerations to keep in mind when using mlflow in a production environment? (advanced)
  • How do you integrate mlflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using mlflow. (advanced)

Closing Remark

As you explore opportunities in the mlflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!
