
722 MLflow Jobs - Page 20

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Azentio
Azentio Software, incorporated in 2020 in Singapore, has been carved out of 3i Infotech, Beyontec Technologies and Path Solutions. Azentio Software provides mission-critical, vertical-specific software products for customers in the banking, financial services and insurance verticals, and includes key products such as KASTLE™ (universal lending), AMLOCK™ (anti-money laundering & compliance software suite), iMAL™, PREMIA™ Astra (core insurance software), ORION™ (enterprise resource planning software) and MFUND Plus™ (asset management platform). Azentio has over 800 customers in more than 60 countries, with a team of over 2,300 employees across offices in 12 countries (and growing) globally, and is wholly owned by funds advised by Apax Partners. Azentio offers a comprehensive range of products – serving core operations to modern digital needs – for the financial services industry. Our deep domain knowledge and solutions in financial services extend across insurance, retail and corporate lending, Islamic banking, anti-money laundering and asset management. In addition, Azentio proudly serves mid-market enterprises across the Middle East, Africa, Asia Pacific, and India with a comprehensive ERP solution. At Azentio, we believe that growth is a continuous journey, and that each step of this journey must be taken by committing to excellence: excellence in our products, our services, our ideas, and our people.

Job Title: Senior Engineer AI/ML
Years of Experience: 5+ years
Location: Chennai

Role Summary:
We are seeking an experienced and innovative Senior AI/ML Engineer to join our core product engineering team, focused on embedding intelligence into our next-generation enterprise SaaS ERP application. You will architect and design scalable, performant ML models for key ERP workflows across multiple functional domains while ensuring security, governance, and multi-tenant data isolation. This role requires strong expertise in Python, PyTorch, Apache Airflow, MLflow, and FastAPI, and combines deep AI/ML skills with ERP domain knowledge and open-source engineering.

What will you do?
Design ML workflows using tools like MLflow, Apache Airflow, and Kubeflow.
Build ERP-centric ML features such as employee attrition prediction, payment fraud detection, and dynamic supplier scoring.
Maintain data quality and model observability using open-source tools like Evidently, Great Expectations, and Prometheus.
Champion open standards for model deployment, CI/CD (GitHub Actions, Tekton), and containerization (Docker, Helm, ArgoCD).
Build NLP and forecasting models using Hugging Face, Prophet, and Scikit-learn.

Tech Stack:
Languages: Python, Oracle, Bash
ML/DS: PyTorch, Scikit-learn, Hugging Face, Pandas
MLOps: MLflow, Airflow, DVC, Great Expectations
DevOps: Kubernetes, Docker, Helm, GitHub Actions
Data: PostgreSQL, ClickHouse, Kafka

What skills are required?
Bachelor's or master's degree in Computer Science, Information Systems, Data Science, or a related field.
8+ years of experience in AI/ML for large-scale enterprise applications.
Strong proficiency in Python and open-source ML frameworks (TensorFlow, PyTorch).
Understanding of ERP functional domains.
Excellent communication, leadership, and stakeholder management skills.

Preferred Qualifications:
Experience with Kubernetes, Docker, or serverless data architectures.
Experience with AI/ML for analytics (predictive analytics, anomaly detection).
Exposure to event-driven architectures and stream processing (Kafka, Kinesis).
Certification in Python or AI/ML.

What do we aim for?
Azentio aims to be the leading provider of Banking, Financial Services & Insurance (BFSI) & Enterprise Resource Planning (ERP) software products in Asia Pacific, the Middle East & Africa, and the United States. We will achieve this by:
Providing world-class software products, built on the latest technologies.
Providing best-in-class customer service, built on a deep understanding of our domains and local nuances.
Being an employer of choice, attracting high-quality talent.
Achieving top-quartile growth and margins.

Azentio Core Values:
We work as one, collaborate without boundaries, and win together.
We work with uncompromising integrity and accountability.
The customer is at the core of all that we do.
We are diverse and inclusive. We treat our people, our customers and our wider community with respect and care.
We innovate, we excel and we grow together.
We give back to our communities through our business and our people.
We take pride in all that we do, and together we enjoy the journey.
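The data-quality and observability duties in this posting (the territory of tools like Great Expectations and Evidently) come down to asserting expectations over a batch of records before it reaches a model. A minimal hand-rolled sketch of the idea in plain Python; the column names and bounds are illustrative, not taken from the posting:

```python
def check_expectations(rows, schema):
    """Validate a batch of records against simple expectations.

    schema maps column name -> (type, min, max); min/max may be None.
    Returns a list of human-readable failures (empty list = batch passes).
    """
    failures = []
    for i, row in enumerate(rows):
        for col, (typ, lo, hi) in schema.items():
            if col not in row:
                failures.append(f"row {i}: missing column '{col}'")
                continue
            val = row[col]
            if not isinstance(val, typ):
                failures.append(f"row {i}: {col}={val!r} is not {typ.__name__}")
            elif lo is not None and val < lo:
                failures.append(f"row {i}: {col}={val} below minimum {lo}")
            elif hi is not None and val > hi:
                failures.append(f"row {i}: {col}={val} above maximum {hi}")
    return failures

# Hypothetical supplier-scoring features: tenure in years, invoice amount.
schema = {"tenure_years": (int, 0, 60), "invoice_amount": (float, 0.0, None)}
good = [{"tenure_years": 4, "invoice_amount": 1200.0}]
bad = [{"tenure_years": -1, "invoice_amount": "n/a"}]
```

In a real pipeline a check like this would gate the training or scoring step, with failures routed to alerting rather than returned as a list.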

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chandigarh, Chandigarh

On-site


Job Title: Experienced AI Developer
Location: Chandigarh
Job Type: Full-Time
Experience Level: Mid to Senior-Level

Job Summary:
As an AI Developer, you will be responsible for designing, developing, and deploying machine learning and deep learning models. You will work closely with our data science, product, and engineering teams to integrate AI capabilities into our software applications.

Key Responsibilities:
Design and implement AI/ML models tailored to business requirements.
Train, fine-tune, and evaluate models using datasets from various domains.
Integrate AI solutions into web or mobile applications.
Collaborate with cross-functional teams to define AI strategies.
Optimize models for performance, scalability, and reliability.
Stay updated with the latest advancements in AI/ML technologies and frameworks.
Deploy models to production environments using tools like Docker, Kubernetes, or cloud services (AWS/GCP/Azure).
Write clean, maintainable, and well-documented code.

Required Skills and Qualifications:
3+ years of experience in AI/ML development.
Strong proficiency in Python and popular ML libraries (TensorFlow, PyTorch, Scikit-learn).
Hands-on experience with NLP, computer vision, or recommendation systems.
Experience with data preprocessing, feature engineering, and model evaluation.
Familiarity with REST APIs and microservices architecture.
Solid understanding of AI ethics, bias mitigation, and responsible AI development.

Preferred Qualifications:
Experience with large language models (e.g., GPT, LLaMA, Claude).
Knowledge of AI deployment tools like MLflow, SageMaker, or Vertex AI.
Experience with prompt engineering or fine-tuning foundation models.

Job Type: Full-time
Pay: ₹35,000.00 - ₹70,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Machine Learning Engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Machine Learning Engineers with expertise in building, deploying, and optimizing AI models. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
Pay above market standards.
The role is contract-based, with project timelines from 2-6 months, or freelancing.
Be a part of an Elite Community of professionals who can solve complex AI challenges.
Work location could be: remote (most likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
Design, optimize, and deploy machine learning models; implement feature engineering and scaling pipelines.
Use deep learning frameworks (TensorFlow, PyTorch) and manage models in production (Docker, Kubernetes).
Automate workflows; ensure model versioning, logging, and real-time monitoring; comply with security and regulations.
Work with large-scale data, develop feature stores, and implement CI/CD pipelines for model retraining and performance tracking.

Required Skills:
Proficiency in machine learning, deep learning, and data engineering (Spark, Kafka).
Expertise in MLOps, automation tools (Docker, Kubernetes, Kubeflow, MLflow, TFX), and cloud platforms (AWS, GCP, Azure).
Strong knowledge of model deployment, monitoring, security, compliance, and responsible AI practices.

Nice to Have:
Experience with A/B testing, Bayesian optimization, and hyperparameter tuning.
Familiarity with multi-cloud ML deployments and generative AI technologies (LLM fine-tuning, FAISS).

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews) you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the Noise. Focus on Opportunities Built for You!

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Scientists for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
Pay above market standards.
The role is contract-based, with project timelines from 2-12 months, or freelancing.
Be a part of an Elite Community of professionals who can solve complex AI challenges.
Work location could be: remote (highly likely), onsite at the client location, or Deccan AI's office in Hyderabad or Bangalore.

Responsibilities:
Lead design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
Architect advanced machine learning models (deep learning, RL, ensemble).
Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models.

Required Skills:
Expert in Python, data science libraries (Pandas, NumPy, Scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
Contributions to open-source projects or published research.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify all the screening rounds (assessments, interviews) you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the Noise. Focus on Opportunities Built for You!

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Job Title: AI/ML Developer (5 Years Experience)
Location: Remote
Job Type: Full-time
Experience: 5 years

Job Summary:
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, and model evaluation, and experience with production-level ML pipelines.

Key Responsibilities:
Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker.
Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques.
Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
Minimum 5 years of experience in AI/ML development.
Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
Strong understanding of statistics, data structures, and ML/DL algorithms.
Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
Experience with CI/CD tools and containerization (Docker, Kubernetes).
Familiarity with SQL and NoSQL databases.
Excellent problem-solving and communication skills.

Preferred Qualifications:
Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
Knowledge of MLOps best practices and tools.
Experience with version control systems like Git.
Familiarity with big data technologies (Spark, Hadoop).
Contributions to open-source AI/ML projects or publications in relevant fields.
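The "drift detection techniques" this posting asks about usually compare the live feature distribution against the training distribution; the Population Stability Index (PSI) is one common measure. A self-contained sketch; the bin count and the conventional 0.2 alert threshold are assumptions, not requirements from the posting:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Bins are derived from the range of the expected (training) sample;
    a small epsilon avoids log(0) for empty bins.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def frac(sample, b):
        left = lo + b * width
        right = lo + (b + 1) * width
        if b == bins - 1:  # last bin is closed on the right
            n = sum(1 for x in sample if left <= x <= right)
        else:
            n = sum(1 for x in sample if left <= x < right)
        return max(n / len(sample), eps)

    return sum((frac(expected, b) - frac(actual, b))
               * math.log(frac(expected, b) / frac(actual, b))
               for b in range(bins))

train = [i / 100 for i in range(100)]                 # uniform on [0, 1)
live_same = [i / 100 for i in range(100)]             # no drift
live_shifted = [0.9 + i / 1000 for i in range(100)]   # mass piled at the top
```

A retraining pipeline would compute this per feature on a schedule and trigger retraining (or an alert) when the index crosses the chosen threshold; PSI below about 0.1 is conventionally read as "no drift".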

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Who we are
Wayfair's Advertising business is rapidly expanding, adding hundreds of millions of dollars in profits to Wayfair. We are building Sponsored Products, Display & Video Ad offerings that cater to a variety of advertiser goals while showing highly relevant and engaging ads to millions of customers. We are evolving our Ads Platform to empower advertisers across all sophistication levels to grow their business on Wayfair at a strong, positive ROI, leveraging state-of-the-art machine learning techniques.

The Advertising Optimization & Automation Science team is central to this effort. We leverage machine learning and generative AI to streamline campaign workflows, delivering impactful recommendations on budget allocation, target Return on Ad Spend (tROAS), and SKU selection. Additionally, we are developing intelligent systems for creative optimization and exploring agentic frameworks to further simplify and enhance advertiser interactions.

We are looking for Machine Learning Scientists to join the Advertising Optimization & Automation Science team. In this role, you will be responsible for the development of budget, tROAS and SKU recommendations and other machine learning capabilities supporting our ads business. You will work closely with other scientists, as well as members of our internal Product and Engineering teams, to apply your engineering and machine learning skills to solve some of our most impactful and intellectually challenging problems and directly impact Wayfair's revenue.

What you'll do
Design, build, deploy and refine large-scale machine learning models and algorithmic decision-making systems that solve real-world problems for customers.
Work cross-functionally with commercial stakeholders to understand business problems or opportunities and develop appropriately scoped analytical solutions.
Collaborate closely with various engineering, infrastructure, and machine learning platform teams to ensure adoption of best practices in how we build and deploy scalable machine learning services.
Identify new opportunities and insights from the data (where can the models be improved? What is the projected ROI of a proposed modification?).
Be obsessed with the customer and maintain a customer-centric lens in how we frame, approach, and ultimately solve every problem we work on.

What you'll need
3+ years of industry experience with a Bachelor's/Master's degree, or a minimum of 1-2 years of industry experience with a PhD, in Computer Science, Mathematics, Statistics, or a related field.
Proficiency in Python or one other high-level programming language.
Solid hands-on expertise deploying machine learning solutions into production.
Strong theoretical understanding of statistical models such as regression and clustering, and machine learning algorithms such as decision trees, neural networks, etc.
Strong written and verbal communication skills.
Intellectual curiosity and enthusiasm for continuous learning.

Nice to have
Experience with the Python machine learning ecosystem (numpy, pandas, sklearn, XGBoost, etc.) and/or the Apache Spark ecosystem (Spark SQL, MLlib/Spark ML).
Familiarity with GCP (or AWS, Azure), machine learning model development frameworks, and machine learning orchestration tools (Airflow, Kubeflow or MLflow).
Experience in information retrieval, query/intent understanding, search ranking, recommender systems, etc.
Experience with deep learning frameworks like PyTorch, TensorFlow, etc.
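Budget-allocation recommendations of the kind this team describes are often framed as spending the next dollar wherever the marginal return is highest. A toy greedy allocator under an assumed diminishing-returns response curve; the sqrt-shaped curves and the campaign names are illustrative only, not Wayfair's actual model:

```python
import heapq
import math

def allocate(budget, campaigns, step=1.0):
    """Greedily assign `budget` in `step`-sized increments to the campaign
    with the highest marginal return, assuming sqrt-shaped response curves:
    revenue(spend) = scale * sqrt(spend).
    """
    spend = {name: 0.0 for name in campaigns}

    def marginal(name):
        s = spend[name]
        return campaigns[name] * (math.sqrt(s + step) - math.sqrt(s))

    # Max-heap via negated marginal returns; entries for untouched
    # campaigns stay valid because only the popped campaign's spend changes.
    heap = [(-marginal(n), n) for n in campaigns]
    heapq.heapify(heap)
    remaining = budget
    while remaining >= step:
        _, name = heapq.heappop(heap)
        spend[name] += step
        remaining -= step
        heapq.heappush(heap, (-marginal(name), name))
    return spend

# Campaign -> response-curve scale (higher = stronger returns).
result = allocate(100.0, {"sofas": 3.0, "rugs": 1.0})
```

With sqrt curves the optimum equalizes marginal returns, so spend ends up roughly proportional to the square of each campaign's scale (about a 90/10 split here).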

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Working as an AI/ML Engineer at Navtech, you will:
Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
Work closely with data engineers to develop scalable and reliable data pipelines.
Experiment with different algorithms and techniques to improve model performance.
Monitor and maintain production ML models, including retraining and model drift detection.
Collaborate with software engineers to integrate ML models into applications and services.
Document processes, experiments, and decisions for reproducibility and transparency.
Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
2-4 years of hands-on experience in building and deploying ML models in real-world applications.
Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
Experience with data preprocessing, feature engineering, and model evaluation techniques.
Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
Familiarity with version control (Git) and basic software engineering practices.
You consistently demonstrate strong verbal and written communication skills as well as strong analytical and problem-solving abilities.
You should have a master's or bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management or a related field, with education throughout in English medium.

We'll REALLY love you if you:
Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
Have knowledge of GenAI prompting and hosting of LLMs.
Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
Have exposure to deep learning and neural network architectures.
Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
Performance review and appraisal twice a year.
Competitive pay package with additional bonus & benefits.
Work with US, UK & Europe based industry-renowned clients for exponential technical growth.
Medical insurance cover for self & immediate family.
Work with a culturally diverse team from different regions.

About us: Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner.

You will join a team of talented developers, quality engineers, and product managers whose mission is to impact above 100 million people across the world with technological services by the year 2030. (ref:hirist.tech)
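The "model evaluation techniques" this posting asks for start from the confusion matrix. A dependency-free sketch of precision, recall, and F1 for a binary classifier; in practice scikit-learn's metrics module computes the same quantities:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for one class from paired labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Three real positives, two predicted correctly; one false positive.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
```

Here tp=2, fp=1, fn=1, so precision, recall, and F1 all come out to 2/3.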

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Join us as an MLOps Engineer at Barclays, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as an MLOps Engineer you should have experience with:
Strong programming skills in Python and experience with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
Experience with Jenkins, GitHub Actions, or GitLab CI/CD for automating ML pipelines.
Strong knowledge of Docker and Kubernetes for scalable deployments.
Deep experience with AWS services (e.g., SageMaker, Bedrock, Lambda, CloudFormation, Step Functions, S3 and IAM).
Managing infrastructure for training and inference using AWS S3, EC2, EKS, and Step Functions.
Experience with Infrastructure as Code (e.g., Terraform, AWS CDK).
Familiarity with model lifecycle management tools (e.g., MLflow, SageMaker Model Registry).
Strong understanding of DevOps principles applied to ML workflows.

Some other highly valued skills may include:
Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
Knowledge of data engineering tools (e.g., Apache Airflow, Kafka, Spark).
Understanding of model interpretability, responsible AI, and governance.
Contributions to open-source MLOps tools or communities.
Strong leadership, communication, and cross-functional collaboration skills.
Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role
To build and maintain infrastructure platforms and products that support applications and data systems, using hardware, software, networks, and cloud computing platforms as required, with the aim of ensuring that the infrastructure is reliable, scalable, and secure. Ensure the reliability, availability, and scalability of the systems, platforms, and technology through the application of software engineering techniques, automation, and best practices in incident response.

Accountabilities
Build Engineering: Development, delivery, and maintenance of high-quality infrastructure solutions to fulfil business requirements, ensuring measurable reliability, performance, availability, and ease of use, including the identification of the appropriate technologies and solutions to meet business, optimisation, and resourcing requirements.
Incident Management: Monitoring of IT infrastructure and system performance to measure, identify, address, and resolve any potential issues, vulnerabilities, or outages. Use of data to drive down mean time to resolution.
Automation: Development and implementation of automated tasks and processes to improve efficiency and reduce manual intervention, utilising software scripting/coding disciplines.
Security: Implementation of a secure configuration and measures to protect infrastructure against cyber-attacks, vulnerabilities, and other security threats, including protection of hardware, software, and data from unauthorised access.
Teamwork: Cross-functional collaboration with product managers, architects, and other engineers to define IT infrastructure requirements, devise solutions, and ensure seamless integration and alignment with business objectives via a data-driven approach.
Learning: Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth.

Assistant Vice President Expectations
To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/business divisions. Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.

OR, for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, and identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes. Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business-aligned support areas, to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external (such as procedures and practices in other areas, teams, companies, etc.) to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
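The retrieval step in the RAG pipelines this role mentions reduces to nearest-neighbour search over embedding vectors; FAISS and hosted vector DBs make that fast at scale. A brute-force cosine-similarity sketch of the same operation; the toy 3-dimensional "embeddings" and document ids are purely illustrative:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Return the k document ids whose vectors are most similar to query."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]),
                    reverse=True)
    return ranked[:k]

# Hypothetical document store: id -> embedding vector.
index = {
    "rates_faq": [0.9, 0.1, 0.0],
    "card_faq":  [0.1, 0.9, 0.1],
    "kyc_doc":   [0.8, 0.2, 0.1],
}
hits = top_k([1.0, 0.0, 0.0], index, k=2)
```

In a real pipeline the retrieved documents are then stuffed into the LLM prompt; the only thing a vector DB changes here is replacing the O(n) scan with an approximate index.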

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 16,700 stores in 31 countries serving more than 9 million customers each day. The India Data & Analytics Global Capability Centre is an integral part of ACT's Global Data & Analytics Team, and the Senior Data Scientist will be a key player on this team, helping grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.

About The Role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting the business problem, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.

Responsibilities
Analytics & Strategy
Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business.
Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data.
Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions.
Utilize the cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data.

Operational Excellence
Follow industry standards in coding solutions and follow the programming life cycle to ensure standard practices across the project.
Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems.
Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses.

Stakeholder Engagement
Work collaboratively across multiple sets of stakeholders – business functions, data engineers, data visualization experts – to deliver on project deliverables.
Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats.

Job Requirements
Education
Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.).
Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.).

Relevant Experience
3-4 years of relevant working experience in a data science/advanced analytics role.

Behavioural Skills
Delivery excellence, business disposition, social intelligence, innovation and agility.

Knowledge
Functional analytics (supply chain analytics, marketing analytics, customer analytics).
Statistical modelling using analytical tools (R, Python, KNIME, etc.) and big data technologies.
Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference).
Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference.
Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.).
Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.).
Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.).
Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools.
Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.).
Microsoft Office applications (MS Excel, etc.).
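The experimental-design knowledge this posting lists (A/B testing, hypothesis testing) typically comes down to a two-proportion z-test on conversion counts. A standard-library sketch; the sample numbers are made up, and the 1.96 cutoff corresponds to a two-sided test at alpha = 0.05:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical store promo test: variant B converts 12% vs control's 10%,
# with 5,000 customers in each arm.
z = two_proportion_z(500, 5000, 600, 5000)
significant = abs(z) > 1.96  # two-sided test at alpha = 0.05
```

With these numbers z is roughly 3.2, so the 2-point lift would be judged statistically significant at the 5% level.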

Posted 2 weeks ago


7.0 - 12.0 years

18 - 20 Lacs

Hyderabad

Work from Office


We are hiring a Senior Python with Machine Learning Engineer (Level 3) for a US-based IT company in Hyderabad. Candidates with a minimum of 7 years of experience in Python and machine learning can apply.

Job Title: Senior Python with Machine Learning Engineer (Level 3)
Location: Hyderabad
Experience: 7+ Years
CTC: 28 LPA - 30 LPA
Working shift: Day shift

Job Description:
We are seeking a highly skilled and experienced Python developer with a strong background in Machine Learning (ML) to join our advanced analytics team. In this Level 3 role, you will be responsible for designing, building, and deploying robust ML pipelines and solutions across real-time, batch, event-driven, and edge computing environments. The ideal candidate will have extensive hands-on experience in developing and deploying ML workflows using AWS SageMaker, building scalable APIs, and integrating ML models into production systems. This role also requires a strong grasp of the complete ML lifecycle and DevOps practices specific to ML projects.

Key Responsibilities:
- Develop and deploy end-to-end ML pipelines for real-time, batch, event-triggered, and edge environments using Python
- Utilize AWS SageMaker to build, train, deploy, and monitor ML models using SageMaker Pipelines, MLflow, and Feature Store
- Build and maintain RESTful APIs for ML model serving using FastAPI, Flask, or Django
- Work with popular ML frameworks and tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Ensure best practices across the ML lifecycle: data preprocessing, model training, validation, deployment, and monitoring
- Implement CI/CD pipelines tailored for ML workflows using tools like Bitbucket, Jenkins, Nexus, and AUTOSYS
- Design and maintain ETL workflows for ML pipelines using PySpark, Kafka, AWS EMR, and serverless architectures
- Collaborate with cross-functional teams to align ML solutions with business objectives and deliver impactful results

Required Skills & Experience:
- 5+ years of hands-on experience with Python for scripting and ML workflow development
- 4+ years of experience with AWS SageMaker for deploying ML models and pipelines
- 3+ years of API development experience using FastAPI, Flask, or Django
- 3+ years of experience with ML tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Strong understanding of the complete ML lifecycle, from model development to production monitoring
- Experience implementing CI/CD for ML using Bitbucket, Jenkins, Nexus, and AUTOSYS
- Proficiency in building ETL processes for ML workflows using PySpark, Kafka, and AWS EMR

Nice to Have:
- Experience with H2O.ai for advanced machine learning capabilities
- Familiarity with containerization using Docker and orchestration using Kubernetes

For further assistance, contact/WhatsApp 9354909517 or write to hema@gist.org.in
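The ML lifecycle named above (preprocessing, training, validation, deployment) can be sketched as a tiny stdlib-only pipeline with a promotion gate. Every name here (train_model, PROMOTION_THRESHOLD, the data) is invented for illustration; a real pipeline would use SageMaker or MLflow as the listing describes.

```python
# Toy ML lifecycle gate: preprocess -> train -> validate -> promote.
# All names and the threshold are hypothetical, not from the posting.
from statistics import mean

PROMOTION_THRESHOLD = 0.9  # assumed quality bar before deployment

def preprocess(rows):
    # Drop records with missing values; a stand-in for real cleaning.
    return [(x, y) for x, y in rows if x is not None and y is not None]

def train_model(rows):
    # "Model" is just the least-squares slope of y = w*x through the origin.
    num = sum(x * y for x, y in rows)
    den = sum(x * x for x, _ in rows)
    return num / den

def validate(w, rows):
    # Score = 1 - mean absolute error normalized by the mean target magnitude.
    errors = [abs(y - w * x) for x, y in rows]
    scale = mean(abs(y) for _, y in rows)
    return 1.0 - mean(errors) / scale

def promote(score):
    return score >= PROMOTION_THRESHOLD

train = preprocess([(1, 2.0), (2, 4.1), (3, 5.9), (None, 1.0)])
w = train_model(train)
score = validate(w, [(4, 8.0), (5, 10.2)])
print(round(w, 2), promote(score))
```

The gate at the end is the point: a model only ships when its held-out score clears the bar, which is what CI/CD for ML automates.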

Posted 2 weeks ago


0 years

0 Lacs

Hyderabad, Telangana, India

Remote


About the Company:
Transnational AI Private Limited is a next-generation AI-first company committed to building scalable, intelligent systems for the digital marketplace, insurance, employment, and healthcare sectors. We drive innovation through AI engineering, data science, and seamless platform integration powered by event-driven architectures.

Role Summary:
We are looking for a highly motivated AI Engineer with strong experience in Python, FastAPI, and event-driven microservice architecture. You will be instrumental in building intelligent, real-time systems that power scalable AI workflows across our platforms. This role combines deep technical engineering skills with a product-oriented mindset.

Key Responsibilities:
- Architect and develop AI microservices using Python and FastAPI within an event-driven ecosystem.
- Implement and maintain asynchronous communication between services using message brokers like Kafka, RabbitMQ, or NATS.
- Convert AI/ML models into production-grade, containerized services integrated with streaming and event-processing pipelines.
- Design and document async REST APIs and event-based endpoints with comprehensive OpenAPI/Swagger documentation.
- Collaborate with AI researchers, product managers, and DevOps engineers to deploy scalable and secure services.
- Develop reusable libraries, automation scripts, and shared components for AI/ML pipelines.
- Maintain high standards for code quality, testability, and observability using unit tests, logging, and monitoring tools.
- Work within Agile teams to ship features iteratively with a focus on scalability, resilience, and fault tolerance.

Required Skills and Experience:
- Proficiency in Python 3.x with a solid understanding of asynchronous programming (async/await).
- Hands-on experience with FastAPI; knowledge of Flask or Django is a plus.
- Experience building and integrating event-driven systems using Kafka, RabbitMQ, Redis Streams, or similar technologies.
- Strong knowledge of event-driven microservices, pub/sub models, and real-time data streaming architectures.
- Exposure to deploying AI/ML models using PyTorch, TensorFlow, or scikit-learn.
- Familiarity with containerization (Docker), orchestration (Kubernetes), and cloud platforms (AWS, GCP, Azure).
- Experience with unit testing frameworks such as PyTest, and observability tools like Prometheus, Grafana, or OpenTelemetry.
- Understanding of security principles including JWT, OAuth2, and API security best practices.

Nice to Have:
- Experience with MLOps pipelines and tools like MLflow, DVC, or Kubeflow.
- Familiarity with Protobuf, gRPC, and async I/O with WebSockets.
- Prior work in real-time analytics, recommendation systems, or workflow orchestration (e.g., Prefect, Airflow).
- Contributions to open-source projects or an active GitHub/portfolio.

Educational Background:
Bachelor’s or Master’s degree in Computer Science, Software Engineering, Artificial Intelligence, or a related technical discipline.

Why Join Transnational AI:
- Build production-grade AI infrastructure powering real-world applications.
- Collaborate with domain experts and top engineers across marketplace, insurance, and workforce platforms.
- Flexible, remote-friendly environment with a focus on innovation and ownership.
- Competitive compensation, bonuses, and continuous learning support.
- Work on high-impact projects that influence how people discover jobs, get insured, and access personalized digital services.
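The async pub/sub pattern this role centers on can be sketched with asyncio alone, using an in-process queue as a stand-in for a broker like Kafka or RabbitMQ. The event names and the uppercase "handler" are purely illustrative.

```python
# Minimal event-driven producer/consumer sketch with asyncio.
# An asyncio.Queue stands in for a real message broker (Kafka, RabbitMQ, NATS).
import asyncio

async def producer(queue, events):
    for event in events:
        await queue.put(event)      # publish an event
    await queue.put(None)           # sentinel: no more events

async def consumer(queue, handled):
    while True:
        event = await queue.get()   # consume the next event
        if event is None:
            break
        handled.append(event.upper())  # stand-in for model inference / handling

async def main():
    queue = asyncio.Queue()
    handled = []
    await asyncio.gather(
        producer(queue, ["claim_created", "claim_scored"]),
        consumer(queue, handled),
    )
    return handled

result = asyncio.run(main())
print(result)  # ['CLAIM_CREATED', 'CLAIM_SCORED']
```

The same shape scales out: swap the queue for a broker client and the handler for a containerized model service, and you have the architecture the listing describes.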

Posted 2 weeks ago


0 years

0 Lacs

Andhra Pradesh, India

On-site


Key Responsibilities
- Design and implement end-to-end ML pipelines using tools like MLflow, Kubeflow, DVC, and Airflow.
- Develop and maintain CI/CD workflows using Jenkins, CircleCI, Bamboo, or DataKitchen.
- Containerize applications using Docker and orchestrate them with Kubernetes or OpenShift.
- Collaborate with data scientists to productionize ML models and ensure reproducibility and scalability.
- Manage data versioning and lineage using Pachyderm or DVC.
- Monitor model performance and system health using Grafana and other observability tools.
- Write robust and maintainable code in Python, Go, R, or Julia.
- Automate workflows and scripting using shell scripts.
- Integrate with cloud storage solutions like Amazon S3 and manage data in an RDBMS such as Oracle, SQL Server, or open-source alternatives.
- Ensure compliance with data governance and security best practices.

Required Skills And Qualifications
- Strong programming skills in Python and familiarity with ML/DL libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with MLOps tools such as MLflow, Kubeflow, DVC, Airflow, or Pachyderm.
- Proficiency in Docker and container orchestration using Kubernetes or OpenShift.
- Experience with CI/CD tools like Jenkins, CircleCI, Bamboo, or DataKitchen.
- Familiarity with cloud storage and data management practices.
- Knowledge of SQL and experience with an RDBMS (Oracle, SQL Server, or open-source).
- Experience with monitoring tools like Grafana.
- Strong understanding of DevOps and software engineering best practices.
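Data versioning and lineage, which the listing assigns to DVC or Pachyderm, boils down to content addressing: each dataset version is named by a hash of its bytes, and lineage links versions to their parents. A stdlib-only sketch in that spirit (the data and descriptions are made up; real DVC stores far richer metadata):

```python
# Content-addressed data versioning sketch, DVC-style.
# Purely illustrative; dataset bytes and step names are invented.
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:12]  # short digest for display

lineage = {}  # version hash -> (parent hash, description)

raw = b"id,amount\n1,10\n2,-5\n"
raw_id = content_hash(raw)
lineage[raw_id] = (None, "raw extract")

cleaned = b"id,amount\n1,10\n"   # negative rows dropped
clean_id = content_hash(cleaned)
lineage[clean_id] = (raw_id, "drop negative amounts")

# Walk lineage from the cleaned version back to the root.
chain, node = [], clean_id
while node is not None:
    parent, desc = lineage[node]
    chain.append(desc)
    node = parent
print(chain)  # ['drop negative amounts', 'raw extract']
```

Because the identifier is derived from the bytes, any change to the data produces a new version automatically, which is what makes pipeline runs reproducible.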

Posted 2 weeks ago


5.0 - 8.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office


Job Requirements

Education:
- Bachelor's degree (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
- Master's degree preferred (MBA/MS/M.Tech in Computer Science or related field)

Experience:
- 5–7 years in a Data Science/Advanced Analytics role

Behavioral Skills:
- Delivery excellence
- Business orientation
- Social intelligence
- Innovation and agility

Knowledge & Technical Skills:
- Functional analytics experience (Supply Chain, Marketing, Customer Analytics, etc.)
- Statistical modeling using tools such as R, Python, KNIME
- Knowledge of statistics and experimental design (A/B testing, hypothesis testing, causal inference)
- Experience building and evaluating machine learning models
- MLOps tools and practices (MLflow, DVC, Docker, etc.)
- Strong Python programming (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
- Experience with big data technologies (AWS, Azure, GCP, Hadoop, Spark)
- Familiarity with relational (MySQL, SQL Server) and non-relational (MongoDB, DynamoDB) databases
- BI and reporting tools (Power BI, Tableau, Alteryx)
- Proficiency with Microsoft Office applications (especially Excel)

Roles & Responsibilities

Analytics & Strategy:
- Analyze large-scale structured and unstructured data to develop insights and machine learning models across various business domains
- Apply statistical and machine learning techniques to generate value from operational, financial, and customer data
- Recommend best-fit algorithms and models with clear justifications for business use
- Leverage cloud platforms for modeling and big data analysis; utilize data visualization tools to communicate results

Operational Excellence:
- Follow industry-standard coding practices and development lifecycles
- Formulate hypotheses, develop analytics frameworks, and bring structure to complex problems
- Collaborate with Data Engineering to maintain core data infrastructure and automate analytical processes

Stakeholder Engagement:
- Work cross-functionally with business stakeholders, engineers, and visualization experts to deliver impactful projects
- Communicate complex models and results to non-technical stakeholders in a clear and compelling way

Posted 2 weeks ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Role: Full-time, Permanent
Location: Chennai
Hybrid: 2 days WFO, 3 days WFH
Preferably immediate joiners

Key Skills:
- Strong GCP Cloud experience
- Proficiency in AI tools used to prepare and automate data pipelines and ingestion
- Apache Spark, especially with MLlib
- PySpark and Dask for distributed data processing
- Pandas and NumPy for local data wrangling
- Apache Airflow to schedule and orchestrate ETL/ELT jobs
- Google Cloud (BigQuery, Vertex AI)
- Python (the most popular language for AI and data tasks)

About us
OneMagnify is a global performance marketing company that blends brand strategy, data analytics, and cutting-edge technology to drive measurable business results. With a strong focus on innovation and collaboration, OneMagnify partners with clients to create personalized marketing solutions that enhance customer engagement and deliver real-time impact. The company is also known for its award-winning workplace culture, emphasizing employee growth and inclusion.

🌟 Why Join OneMagnify?
- Top workplace: consistently recognized in the U.S. & India for a great work culture.
- Cutting-edge tech: work with modern tools like Databricks, Snowflake, Azure, and MLflow.
- Growth-focused: strong career paths, upskilling, and learning opportunities.
- Global impact: collaborate across teams on high-impact, data-driven projects.
- Great benefits: competitive salary, insurance, paid holidays, and more.
- Meaningful work: solve real-world business challenges with innovative solutions.
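Orchestrators like Airflow, mentioned above, fundamentally run ETL/ELT tasks in dependency order. The scheduling core can be sketched with the standard library's graphlib (Python 3.9+); the task names below are invented for illustration.

```python
# Topological ordering of a task DAG, the core of what an orchestrator schedules.
# Task names are hypothetical; graphlib maps each task to its predecessors.
from graphlib import TopologicalSorter

dag = {
    "load_bq": {"extract"},               # load depends on extract
    "train_vertex": {"load_bq"},          # training depends on the load
    "report": {"train_vertex", "load_bq"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # e.g. ['extract', 'load_bq', 'train_vertex', 'report']
```

Airflow adds scheduling, retries, and distributed execution on top, but every DAG run starts from exactly this ordering constraint.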

Posted 2 weeks ago


6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Who We Are
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.

What We Offer
- Interact with C-level at our clients on a regular basis to drive their business towards impactful change
- Lead your team in creating new business solutions
- Seize opportunities at the client and at Metyis in our entrepreneurial environment
- Become part of a fast-growing, international and diverse team

What You Will Do
- Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
- Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
- Design solution architectures and guide the technical approach across projects.
- Align technical deliverables with business goals, ensuring data products create measurable business value.
- Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
- Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle.
- Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
- Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.

What You’ll Bring
- 6+ years of experience in delivering full-lifecycle data science projects.
- Proven ability to lead cross-functional teams and manage client interactions independently.
- Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
- Experience with stakeholder management, translating business questions into data science solutions.
- Track record of mentoring junior team members and creating a collaborative learning environment.
- Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
- Experience managing project roadmaps, resourcing, and client communication.

Tools & Technologies:
- Strong hands-on experience in Python/R and SQL.
- Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
- Experience with data visualization tools in Python, such as Seaborn and Plotly.
- Good understanding of Git concepts.
- Good experience with data manipulation tools in Python such as Pandas and NumPy.
- Must have worked with scikit-learn, NLTK, spaCy, and transformers.
- Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
- Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.

Core Competencies:
- Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
- Good understanding of deep learning concepts, especially in NLP and Computer Vision applications.
- Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
- Exposure to MLflow, model deployment, API integration, and CI/CD pipelines.
- Hands-on experience with MLOps and model governance best practices in production environments.
- Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.

Good to have:
- Generative AI experience with text and image data.
- Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
- Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).

In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
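Time-series forecasting, one of the competencies listed above, always starts from a naive baseline before any ML model. A trailing moving-average forecast is the simplest such baseline; the sales numbers here are invented for illustration.

```python
# Trailing moving-average forecast: the simplest time-series baseline.
# Series values are made up for the example.
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    tail = series[-window:]
    return sum(tail) / len(tail)

sales = [100, 104, 103, 107, 110]
print(moving_average_forecast(sales))  # (103 + 107 + 110) / 3 = 106.66...
```

In practice a forecasting model is only worth deploying if it beats baselines like this on held-out data, which is why they come first.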

Posted 2 weeks ago


4.0 years

0 Lacs

India

Remote


CSQ326R35

Mission
The Machine Learning (ML) Practice team is a highly specialized customer-facing ML team at Databricks facing an increasing demand for Large Language Model (LLM)-based solutions. We deliver professional services engagements to help our customers build, scale, and optimize ML pipelines, as well as put those pipelines into production. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates, and fueling your curiosity for the latest trends in LLMs, MLOps, and ML more broadly. This role can be remote, based in India.

The Impact You Will Have
- Develop LLM solutions on customer data, such as RAG architectures on enterprise knowledge repos, querying structured data with natural language, and content generation
- Build, scale, and optimize customer data science workloads and apply best-in-class MLOps to productionize these workloads across a variety of domains
- Advise data teams on various data science topics such as architecture, tooling, and best practices
- Present at conferences such as Data+AI Summit
- Provide technical mentorship to the larger ML SME community in Databricks
- Collaborate cross-functionally with the product and engineering teams to define priorities and influence the product roadmap

What we look for:
- Experience building Generative AI applications, including RAG, agents, text2sql, fine-tuning, and deploying LLMs, with tools such as HuggingFace, LangChain, and OpenAI
- 4+ years of hands-on industry data science experience, leveraging typical machine learning and data science tools including pandas, scikit-learn, and TensorFlow/PyTorch
- Experience building production-grade machine learning deployments on AWS, Azure, or GCP
- Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience
- Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike
- Passion for collaboration, life-long learning, and driving business value through ML
- [Preferred] Experience working with Databricks & Apache Spark to process large-scale distributed datasets

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
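The RAG architectures this role builds retrieve the most relevant documents for a query before generation. A toy version of just the retrieval step, using bag-of-words cosine similarity in place of learned embeddings and a vector index (the documents and query are invented):

```python
# Toy RAG retrieval step: rank documents by cosine similarity to the query.
# Real systems use embedding models and a vector database instead of word counts.
from collections import Counter
from math import sqrt

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "resetting a password in the admin console",
    "quarterly revenue figures for the retail unit",
    "password policy and rotation schedule",
]
query = "how do I reset my password"
ranked = sorted(docs, key=lambda d: cosine(vectorize(query), vectorize(d)), reverse=True)
print(ranked[0])
```

In a full RAG pipeline, the top-ranked documents are then inserted into the LLM prompt as grounding context.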

Posted 2 weeks ago


10.0 years

0 Lacs

Mohali district, India

On-site


About the Role:
We are looking for a highly experienced and innovative Senior DevSecOps & Solution Architect to lead the design, implementation, and security of modern, scalable solutions across cloud platforms. The ideal candidate will bring a unique blend of DevSecOps practices, solution architecture, observability frameworks, and AI/ML expertise, with hands-on experience in data and workload migration from on-premises to cloud or cloud-to-cloud. You will play a pivotal role in transforming and securing our enterprise-grade infrastructure, automating deployments, designing intelligent systems, and implementing monitoring strategies for mission-critical applications.

DevSecOps Leadership:
- Own CI/CD strategy, automation pipelines, IaC (Terraform, Ansible), and container orchestration (Docker, Kubernetes, Helm).
- Champion DevSecOps best practices, embedding security into every stage of the SDLC.
- Manage secrets, credentials, and secure service-to-service communication using Vault, AWS Secrets Manager, or Azure Key Vault.
- Conduct infrastructure hardening, automated compliance checks (CIS, SOC 2, ISO 27001), and vulnerability management.

Solution Architecture:
- Architect scalable, fault-tolerant, cloud-native solutions (AWS, Azure, or GCP).
- Design end-to-end data flows, microservices, and serverless components.
- Lead migration strategies for on-premises-to-cloud or cloud-to-cloud transitions, ensuring minimal downtime and security continuity.
- Create technical architecture documents, solution blueprints, BOMs, and migration playbooks.

Observability & Monitoring:
- Implement modern observability stacks: OpenTelemetry, ELK, Prometheus/Grafana, DataDog, or New Relic.
- Define golden signals (latency, errors, saturation, traffic) and enable APM, RUM, and log aggregation.
- Design SLOs/SLIs and establish proactive alerting for high-availability environments.

AI/ML Engineering & Integration:
- Integrate AI/ML into existing systems for intelligent automation, data insights, and anomaly detection.
- Collaborate with data scientists to operationalize models using MLflow, SageMaker, Azure ML, or custom pipelines.
- Work with LLMs and foundational models (OpenAI, Hugging Face, Bedrock) for POCs or production-ready features.

Migration & Transformation:
- Lead complex data migration projects across heterogeneous environments: legacy systems to cloud, or inter-cloud (e.g., AWS to Azure).
- Ensure data integrity, encryption, schema mapping, and downtime minimization throughout migration efforts.
- Use tools such as AWS DMS, Azure Data Factory, GCP Transfer Services, or custom scripts for lift-and-shift and re-architecture.

Required Skills & Qualifications:
- 10+ years in DevOps, cloud architecture, or platform engineering roles.
- Expert in AWS and/or Azure, including IAM, VPC, EC2, Lambda/Functions, S3/Blob, API Gateway, and container services (EKS/AKS).
- Proficient in infrastructure as code: Terraform, CloudFormation, Ansible.
- Hands-on with Kubernetes (k8s), Helm, and GitOps workflows.
- Strong programming/scripting skills in Python, Shell, or PowerShell.
- Practical knowledge of AI/ML tools and libraries (TensorFlow, PyTorch, scikit-learn) and model lifecycle management.
- Demonstrated success in large-scale migrations and hybrid architecture.
- Solid understanding of application security, identity federation, and compliance.
- Familiar with agile practices, project estimation, and stakeholder communication.

Nice to Have:
- Certifications: AWS Solutions Architect, Azure Architect, Certified Kubernetes Administrator, or similar.
- Experience with Kafka, RabbitMQ, and event-driven architecture.
- Exposure to n8n, OpenFaaS, or AI agents.

Posted 2 weeks ago


12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


P-995

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. And we're only getting started.

As one of the first Engineering Managers in the Software Engineering team at Databricks India, you will work with your team to build infrastructure and products for the Databricks platform at scale. We have multiple teams working on different domains:
- Resource management infrastructure powering the big data and machine learning workloads on the Databricks platform in a scalable, secure, and cloud-agnostic way
- Reliable, scalable services and client libraries that work with massive amounts of data on the cloud, across geographic regions and cloud providers
- Tools that allow Databricks engineers to operate their services across different clouds and environments
- Services, products and infrastructure at the intersection of machine learning and distributed systems

The Impact You Will Have
- Hire great engineers to build an outstanding team.
- Support engineers in their career development by providing clear feedback, and develop engineering leaders.
- Ensure high technical standards by instituting processes (architecture reviews, testing) and culture (engineering excellence).
- Work with engineering and product leadership to build a long-term roadmap.
- Coordinate execution and collaborate across teams to unblock cross-cutting projects.
What We Look For
- 12+ years of extensive experience with large-scale distributed systems, alongside the processes around testing, monitoring, SLAs, etc.
- Extensive experience as a software engineering leader, building and scaling software engineering teams from the ground up
- Extensive experience managing a team of strong software engineers
- Partnering with PM, Sales, and Customers to develop innovative features and products
- BS (or higher) in Computer Science or a related field

Posted 2 weeks ago


10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


GAQ226R260

As a University Recruiter at Databricks India, you will be an integral part of the early career team, supporting our intern and new grad hiring efforts as well as our intern and campus programs. You will focus on full-cycle recruiting for technical candidates, campus engagement and events, and supporting our interns through the intern program. This will be the first-ever University Recruiter across Databricks APAC! We’re looking for someone who’s excited to build early-in-career intern and campus hiring programs from scratch, and to find the best ways to recruit and support a diverse group of early career candidates!

The Impact You Will Have
- Full-cycle recruiting, from source to close, for technical intern and new grad candidates
- Partner closely and build strong relationships with various universities
- Work 1:1 with interns and intern managers, coaching and supporting them through the intern program
- Partner with business leaders to assist with team matching and allocation
- Manage high volume while maintaining excellent candidate experience and data integrity
- Manage intern and campus recruiting partnerships and events
- Build campus and intern programs from scratch!

What We Look For
- 10+ years of industry experience
- Extensive industry experience partnering and collaborating with a wide array of universities while hiring early-career tech talent
- A strong track record of sourcing, screening, and closing early-in-career technical candidates while providing a top-tier candidate experience
- Program and/or event management is a plus!
- A focus on efficient and effective recruiting processes that support a diverse candidate pool
- Experience working with multiple stakeholders (internal & external)

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Odisha, India

Remote


As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed — we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, with 3.44 PB of RAM deployed across our fleet of C* servers — and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role

The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team sits within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale: threat events collected via telemetry, associated metadata, IT asset information, contextual information about threat exposure based on additional processing, and more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more.

As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists and threat analysts to design, implement, and maintain scalable ML pipelines for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do

  • Help design, build, and facilitate adoption of a modern Data+ML platform
  • Modularize complex ML code into standardized and repeatable components
  • Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
  • Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
  • Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
  • Review code changes from data scientists and champion software development best practices
  • Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need

  • B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
  • 3+ years of experience developing and deploying machine learning solutions to production
  • Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory), and with supervised/unsupervised approaches: how, why, and when labeled data is created and used
  • 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
  • Experience building data platform products or features with Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
  • Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
  • Production experience with infrastructure-as-code tools such as Terraform and FluxCD
  • Expert-level experience with Python; Java/Scala exposure is recommended
  • Ability to write Python interfaces that give data scientists standardized, simplified access to internal CrowdStrike tools
  • Expert-level experience with CI/CD frameworks such as GitHub Actions
  • Expert-level experience with containerization frameworks
  • Strong analytical and problem-solving skills, capable of working in a dynamic environment
  • Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable

  • Go
  • Iceberg
  • Pinot or another time-series/OLAP-style database
  • Jenkins
  • Parquet
  • Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike

  • Remote-friendly and flexible work culture
  • Market leader in compensation and equity awards
  • Comprehensive physical and mental wellness programs
  • Competitive vacation and holidays for recharge
  • Paid parental and adoption leaves
  • Professional development opportunities for all employees regardless of level or role
  • Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
  • Vibrant office culture with world-class amenities
  • Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions — including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs — on valid job requirements.

If you need assistance accessing or reviewing the information on this website, or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site


Job Description:

We are looking for a Data Scientist with expertise in Python, Azure Cloud, NLP, forecasting, and large-scale data processing. The role involves enhancing existing ML models; optimising embeddings, LDA models, RAG architectures, and forecasting models; and migrating data pipelines to Azure Databricks for scalability and efficiency.

Key Responsibilities:

Model Development & Optimisation
  • Train and optimise models for new data providers, ensuring seamless integration.
  • Enhance models for dynamic input handling.
  • Improve LDA model performance to handle a higher number of clusters efficiently.
  • Optimise RAG (Retrieval-Augmented Generation) architecture to enhance recommendation accuracy for large datasets.
  • Upgrade Retrieval QA architecture for improved chatbot performance on large datasets.

Forecasting & Time Series Modelling
  • Develop and optimise forecasting models for marketing, demand prediction, and trend analysis.
  • Implement time series models (e.g., ARIMA, Prophet, LSTMs) to improve business decision-making.
  • Integrate NLP-based forecasting, leveraging customer sentiment and external data sources (e.g., news, social media).

Data Pipeline & Cloud Migration
  • Migrate the existing pipeline from Azure Synapse to Azure Databricks and retrain models accordingly (note: this is required only for the AUB role(s)).
  • Address space and time complexity issues in embedding storage and retrieval on Azure Blob Storage.
  • Optimise embedding storage and retrieval in Azure Blob Storage for better efficiency.

MLOps & Deployment
  • Implement MLOps best practices for model deployment on Azure ML, Azure Kubernetes Service (AKS), and Azure Functions.
  • Automate model training, inference pipelines, and API deployments using Azure services.

Experience:
  • Experience in data science, machine learning, deep learning and generative AI.
  • Design, architect and execute end-to-end data science pipelines covering data extraction, data preprocessing, feature engineering, model building, tuning and deployment.
  • Experience leading a team and taking responsibility for project delivery.
  • Experience building end-to-end machine learning pipelines, with expertise in developing CI/CD pipelines using Azure Synapse pipelines, Databricks, Google Vertex AI and AWS.
  • Experience developing advanced natural language processing (NLP) systems, specialising in building RAG (Retrieval-Augmented Generation) models using LangChain, and deploying RAG models to production.
  • Expertise in building machine learning pipelines and deploying models such as forecasting models, anomaly detection models, market mix models, classification models, regression models and clustering techniques.
  • Maintaining GitHub repositories and cloud computing resources for effective and efficient version control, development, testing and production.
  • Developing proof-of-concept solutions and assisting in rolling these out to our clients.

Required Skills & Qualifications:
  • Hands-on experience with Azure Databricks, Azure ML, Azure Synapse, Azure Blob Storage, and Azure Kubernetes Service (AKS).
  • Experience with forecasting models, time series analysis, and predictive analytics.
  • Proficiency in Python (NumPy, Pandas, TensorFlow, PyTorch, Statsmodels, Scikit-learn, Hugging Face, FAISS).
  • Experience with model deployment, API optimisation, and serverless architectures.
  • Hands-on experience with Docker, Kubernetes, and MLflow for tracking and scaling ML models.
  • Expertise in optimising time complexity, memory efficiency, and scalability of ML models in a cloud environment.
  • Experience with LangChain (or equivalent), RAG, and multi-agent generation.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote


Job Title: MLOps Engineer (Remote)
Experience: 5+ Years
Location: Remote
Type: Full-time

About the Role:

We are seeking an experienced MLOps Engineer to design, implement, and maintain scalable machine learning infrastructure and deployment pipelines. You will work closely with data scientists and DevOps teams to operationalize ML models, optimize performance, and ensure seamless CI/CD workflows in cloud environments (Azure ML/AWS/GCP).

Key Responsibilities:

✔ ML Model Deployment:
  • Containerize ML models using Docker and deploy on Kubernetes
  • Build end-to-end ML deployment pipelines for TensorFlow/PyTorch models
  • Integrate with Azure ML (or AWS SageMaker/GCP Vertex AI)

✔ CI/CD & Automation:
  • Implement GitLab CI/CD pipelines for automated testing and deployment
  • Manage version control using Git and enforce best practices

✔ Monitoring & Performance:
  • Set up Prometheus + Grafana dashboards for model performance tracking
  • Configure alerting systems for model drift, latency, and errors
  • Optimize infrastructure for scalability and cost-efficiency

✔ Collaboration:
  • Work with data scientists to productionize prototypes
  • Document architecture and mentor junior engineers

Skills & Qualifications:

Must-Have:
  • 5+ years in MLOps/DevOps, with 6+ years total experience
  • Expertise in Docker, Kubernetes, CI/CD (GitLab CI/CD), Linux
  • Strong Python scripting for automation (PySpark a plus)
  • Hands-on experience with Azure ML (or AWS/GCP) for model deployment
  • Experience with ML model monitoring (Prometheus, Grafana, ELK Stack)

Nice-to-Have:
  • Knowledge of MLflow, Kubeflow, or TF Serving
  • Familiarity with NVIDIA Triton Inference Server
  • Understanding of data pipelines (Airflow, Kafka)

Why Join Us?
💻 100% Remote with flexible hours
🚀 Work on cutting-edge ML systems at scale
📈 Competitive salary + growth opportunities

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% NLP engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community?

We are looking for top-tier Natural Language Processing engineers with experience in text analytics, LLMs, and speech processing. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
  • Pay above market standards
  • A contract-based role with project timelines from 2-12 months, or freelancing
  • Membership in an elite community of professionals who can solve complex AI challenges
  • Work location could be: remote (highly likely), onsite at a client location, or Deccan AI's office in Hyderabad or Bangalore

Responsibilities:
  • Develop and optimize NLP models (NER, summarization, sentiment analysis) using transformer architectures (BERT, GPT, T5, LLaMA).
  • Build scalable NLP pipelines for real-time and batch processing of large text data; optimize models for performance and deploy on cloud platforms (AWS, GCP, Azure).
  • Implement CI/CD pipelines for automated training, deployment, and monitoring; integrate NLP models with search engines, recommendation systems, and RAG techniques.
  • Ensure ethical AI practices and mentor junior engineers.

Required Skills:
  • Expert Python skills with NLP libraries (Hugging Face, SpaCy, NLTK).
  • Experience with transformer-based models (BERT, GPT, T5) and deploying them at scale (Flask, Kubernetes, cloud services).
  • Strong knowledge of model optimization, data pipelines (Spark, Dask), and vector databases.
  • Familiarity with MLOps, CI/CD (MLflow, DVC), cloud platforms, and data privacy regulations.

Nice to Have:
  • Experience with multimodal AI, conversational AI (Rasa, OpenAI API), graph-based NLP, knowledge graphs, and A/B testing for model improvement.
  • Contributions to open-source NLP projects or a strong publication record.

What are the next steps?
  1. Register on our Soul AI website.
  2. Our team will review your profile.
  3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all the screening rounds (assessments, interviews), you will be added to our Expert Community!
  4. Profile matching and project allocation: be patient while we align your skills and preferences with available projects.

Skip the noise. Focus on opportunities built for you!

Posted 2 weeks ago

Apply

20.0 - 25.0 years

22 - 27 Lacs

Bengaluru

Work from Office


Position: Senior AI Architect - AI Factory (MLOps, GenOps)

Experience: 20+ years of total IT experience with a minimum of 10 years in AI/ML. Proven experience in building scalable AI platforms or "AI Factories" for productionizing machine learning and generative AI workflows, including strong hands-on expertise in MLOps and emerging GenOps practices.

Location: Bangalore / Pune on a case-to-case basis

Role Summary:

We are looking for a Senior AI Architect to lead the design and implementation of a next-generation AI Factory platform that streamlines the development, deployment, monitoring, and reuse of AI/ML and GenAI assets. This role will be instrumental in establishing scalable MLOps and GenOps practices, building reusable components, standardizing pipelines, and enabling cross-industry solutioning for pre-sales and delivery. The candidate will work closely with the AI Practice Head, contributing to both business enablement and technical strategy while supporting customer engagements, RFP/RFI responses, PoCs, and accelerator development.

Key Responsibilities:
  • Architect and build the AI Factory: a central repository of reusable AI/ML models, GenAI prompts, agents, pipelines, APIs, and accelerators
  • Define and implement MLOps workflows for versioning, model training, deployment, CI/CD, monitoring, and governance
  • Design and integrate GenOps pipelines for prompt engineering, LLM orchestration, evaluation, and optimization
  • Create blueprints and templates for standardized AI solution delivery across cloud platforms (Azure, AWS, GCP)
  • Build accelerators and reusable modules to speed up AI solutioning for common use cases (e.g., chatbots, summarization, document intelligence)
  • Enable pre-sales and solution teams with reusable assets for demos, PoCs, and customer presentations
  • Contribute to RFP/RFI responses with scalable, production-ready AI Factory strategies and architectural documentation
  • Collaborate with data engineering, DevOps, cloud, and security teams to ensure robust and enterprise-compliant AI solution delivery

Required Skills:
  • Deep experience with MLOps tools like MLflow, Kubeflow, SageMaker Pipelines, Azure ML Pipelines, or Vertex AI Pipelines
  • Understanding of GenOps frameworks, including prompt flow management, LLM evaluation (e.g., TruLens, Ragas), and orchestration (LangChain, LlamaIndex, Semantic Kernel)
  • Strong command of Python, YAML/JSON, and API integration for scalable AI component development
  • Experience with CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps), containerization (Docker, Kubernetes), and model registries
  • Familiarity with model observability, drift detection, automated retraining, and model versioning
  • Ability to create clean, reusable architecture artifacts and professional PowerPoint decks for customer and internal presentations

Preferred Qualifications:
  • Experience building and managing an enterprise-wide AI marketplace or model catalog
  • Familiarity with LLMOps platforms (e.g., Weights & Biases, PromptLayer, Arize AI)
  • Exposure to multi-cloud GenAI architectures and hybrid deployment models
  • Cloud certifications in AI/ML from any major provider (AWS, Azure, GCP)

Soft Skills:
  • Strong leadership and mentoring capabilities
  • Effective communication and storytelling skills for technical and non-technical audiences
  • Innovation mindset with a passion for automation and efficiency
  • Comfortable working in a fast-paced, cross-functional environment with shifting priorities

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job role: AI/ML Technical Architect
Experience: More than 12 years of experience
Location: Noida
Mode of Work: Hybrid
Shift: 2 PM to 10 PM

Key Responsibilities:

AI Platform Development:
  • Build scalable AI platforms that are customer-facing.
  • Evangelize the platform with customers and internal stakeholders.
  • Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
  • Design ML pipelines for experiment management, model management, feature management, and model retraining.
  • Implement A/B testing of models.
  • Design APIs for model inferencing at scale.
  • Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
  • Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures.
  • Expertise in distributed training and serving of large language models.
  • Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
  • Proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results.
  • Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
  • Extensive knowledge of different LLM models, with insight into the applicability of each model based on use cases.
  • Proven experience delivering end-to-end solutions from engineering to production for specific customer use cases.

DevOps and LLMOps Proficiency:
  • Proven expertise in DevOps and LLMOps practices.
  • Knowledgeable in Kubernetes, Docker, and container orchestration.
  • Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix
  • LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
  • LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
  • Databases/Data warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
  • Cloud: AWS/Azure/GCP
  • DevOps (knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
  • Cloud certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
  • Proficient in Python, SQL, JavaScript

About the Company:

Pattem Group is a conglomerate holding company headquartered in Bangalore, India, with several companies under its umbrella. We represent the essence of software product development, catering to global Fortune 500 companies and innovative startups.

Posted 2 weeks ago

Apply

Exploring mlflow Jobs in India

The mlflow job market in India is rapidly growing as companies across various industries are increasingly adopting machine learning and data science technologies. mlflow, an open-source platform for the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with expertise in mlflow have a plethora of opportunities to explore and build a rewarding career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for mlflow professionals.

Average Salary Range

The average salary range for mlflow professionals in India varies based on experience:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in mlflow may include roles such as:

  1. Junior Machine Learning Engineer
  2. Machine Learning Engineer
  3. Senior Machine Learning Engineer
  4. Tech Lead
  5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to mlflow, professionals in this field are often expected to have skills in:

  • Python programming
  • Data visualization
  • Statistical modeling
  • Deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is mlflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between tracking, projects, and models in mlflow. (medium)
  • How do you deploy a machine learning model using mlflow? (medium)
  • Can you explain the concept of model registry in mlflow? (advanced)
  • What are the benefits of using mlflow in a machine learning project? (basic)
  • How do you manage experiments in mlflow? (medium)
  • What are some common challenges faced when using mlflow in a production environment? (advanced)
  • How can you scale mlflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in mlflow. (medium)
  • How do you compare different machine learning models using mlflow? (medium)
  • Describe a project where you successfully used mlflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in mlflow? (advanced)
  • How does mlflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of mlflow tracking server in a machine learning project? (medium)
  • What are some limitations of mlflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using mlflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with mlflow and how you resolved it. (advanced)
  • How do you manage dependencies in a mlflow project? (medium)
  • What are some key metrics to track when using mlflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of mlflow. (advanced)
  • How do you handle data drift in machine learning models deployed using mlflow? (advanced)
  • What are some security considerations to keep in mind when using mlflow in a production environment? (advanced)
  • How do you integrate mlflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using mlflow. (advanced)

Closing Remark

As you explore opportunities in the mlflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!
