
705 MLflow Jobs - Page 16

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


AI Software Engineer

Job Summary: We are seeking a highly skilled and innovative AI Solutions Specialist to design, develop, and deploy AI-driven solutions that address complex business problems. The ideal candidate will work closely with cross-functional teams to understand business requirements, evaluate AI technologies, and implement end-to-end intelligent systems using machine learning, deep learning, and other AI techniques.

Key Responsibilities:
- Lead the evolution of the Data Engineering, Machine Learning, and AI capabilities through the solution lifecycle.
- Collaborate with project teams, data science teams, and other development teams to drive the technical roadmap and guide development and implementation of new data-driven business solutions.
- Collaborate with stakeholders to identify opportunities for AI and ML applications.
- Design scalable AI solutions that integrate with existing infrastructure and processes.
- Develop, train, and optimize machine learning models and AI algorithms.
- Evaluate third-party AI tools and APIs for integration where applicable.
- Create technical specifications, architecture documents, and proof-of-concept prototypes.
- Work with data engineering teams to ensure data quality, accessibility, and readiness.
- Monitor performance of AI models in production and iterate for improvement.
- Ensure ethical and responsible AI practices, including explainability, bias mitigation, and privacy.
- Present solutions and progress to technical and non-technical stakeholders.
- Stay updated with AI research and industry trends to propose innovative solutions.

Skills and Qualifications:
- Strong knowledge of machine learning frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Proficiency in Python and experience with AI development libraries and tools.
- Understanding of cloud AI/ML services (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and the broader data stack (AWS, Databricks, Snowflake, Python, PySpark, Docker, Kubernetes, Terraform, Ansible, Prometheus, Grafana, ELK, Hadoop, Spark, Kafka, Elasticsearch, SQL and NoSQL databases, Postgres, Cassandra, Salesforce).
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Excellent problem-solving, communication, and stakeholder management skills.
- Experience with generative AI (e.g., GPT, diffusion models) or LLMs.
- Familiarity with MLOps practices and deployment pipelines (Docker, CI/CD, MLflow).
- Background in natural language processing (NLP), computer vision, or reinforcement learning.
- Publications, patents, or open-source contributions in AI/ML are a plus.
- Deep understanding of Junos OS architecture, features, and operational nuances.

Experience: Minimum 6–8 years in data and analytics, with expertise across AI, ML, data platforms, BI tools, and data engineering; experience leading, architecting, and building infrastructure to manage the data/AI model lifecycle; deep understanding of technology trends, architectures, and integrations related to generative AI; hands-on experience with advanced analytics, predictive modelling, NLP, information retrieval, deep learning, etc.

Communication Skills: Excellent verbal and written communication skills, with the ability to explain technical concepts clearly and concisely.
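The model-evaluation techniques this listing asks for can be illustrated with a minimal sketch. The labels and predictions below are made-up toy data, not anything from the posting; real work would typically use a library such as scikit-learn rather than hand-rolled metrics.

```python
# Toy illustration of common classification evaluation metrics
# (precision, recall, F1) computed from scratch.

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical ground truth and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

p, r, f1 = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

The trade-off between precision and recall is usually what "model evaluation" decisions in production hinge on, which is why F1 (their harmonic mean) is a common single-number summary.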

Posted 2 weeks ago


5.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company: Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role: The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The role involves working with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities:
- Develop and maintain robust Python-based backend services and RESTful APIs to support machine learning models in production.
- Deploy and manage containerized applications using Docker and orchestrate them using Azure Kubernetes Service (AKS).
- Implement and manage ML pipelines using MLflow for model tracking, reproducibility, and deployment.
- Design, schedule, and maintain automated workflows using Apache Airflow to orchestrate data and ML pipelines.
- Collaborate with data scientists to productize NLP models, with a focus on language models, embeddings, and text preprocessing techniques (e.g., tokenization, lemmatization, vectorization).
- Ensure high code quality and version control using Git; manage CI/CD pipelines for reliable deployment.
- Handle unstructured text data and build scalable backend infrastructure for inference and retraining workflows.
- Participate in system design and architecture reviews for scalable and maintainable machine learning services.
- Proactively monitor, debug, and optimize ML applications in production environments.
- Communicate technical solutions and project status clearly to team leads and product stakeholders.

Qualifications:
- Minimum relevant experience: 5 years; maximum relevant experience: 9 years.
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; a Master's degree is preferred. Or equivalent work experience.

Required Skills:
- Proficiency in Python and frameworks like FastAPI or Flask for building APIs.
- Solid hands-on experience with Docker, Kubernetes (AKS), and deploying production-grade applications.
- Familiarity with MLflow, including model packaging, logging, and deployment.
- Experience with Apache Airflow for orchestrating ETL and ML workflows.
- Understanding of NLP pipelines, language models (e.g., BERT, GPT variants), and associated libraries (e.g., spaCy, Hugging Face Transformers).
- Exposure to cloud environments, preferably Azure.
- Strong debugging, testing, and optimization skills for scalable systems.
- Experience working with large datasets and unstructured data, especially text.

Preferred Skills:
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases, query authoring, and working familiarity with a variety of databases.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable Azure-based data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Understanding of Node.js is a plus, but not required.
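The Airflow-style orchestration this listing describes boils down to running tasks in dependency order. Below is a minimal, framework-free sketch of that idea using the standard library's topological sorter over a hypothetical three-step pipeline; a real deployment would use Airflow's own DAG and operator APIs rather than this toy runner.

```python
# Minimal illustration of dependency-ordered task execution,
# the core idea behind workflow orchestrators like Apache Airflow.
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract -> preprocess -> train, where each
# key maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "preprocess": {"extract"},
    "train": {"preprocess"},
}

executed = []

def run(task):
    # A real orchestrator would invoke an operator here; we just
    # record the execution order.
    executed.append(task)

for task in TopologicalSorter(dag).static_order():
    run(task)

print(executed)
```

Because the toy pipeline is a linear chain, the only valid order is extract, then preprocess, then train; with branching dependencies, an orchestrator may run independent tasks in parallel.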

Posted 2 weeks ago


3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company: Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role: The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities:
- Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
- Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
- Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
- Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
- Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
- Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
- Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
- Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
- Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
- Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications:
- Minimum relevant experience: 3 years; maximum relevant experience: 5 years.

Required Skills:
- Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
- Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
- Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
- Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
- Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
- Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.).
- Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL).
- Experience building data science models for use in front-end, user-facing applications, such as recommendation models.
- Experience with REST APIs, JSON, and streaming datasets.
- Understanding of graph data; Neo4j is a plus.
- Strong understanding of RDBMS data structures, Azure Tables, Blob storage, and other data sources.
- Understanding of Jenkins and CI/CD processes using Git for cloud configs and standard code repositories such as ADF configs and Databricks.

Preferred Skills:
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; a Master's degree is preferred. Or equivalent work experience.
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases, query authoring, and working familiarity with a variety of databases, preferably including graph databases.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
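Several of the skills in this listing (embeddings, semantic similarity, RAG retrieval) rest on comparing vectors by cosine similarity. The sketch below uses tiny made-up 3-dimensional vectors purely for illustration; a real system would use model-generated embeddings of hundreds of dimensions (e.g., from SBERT) and a vector index such as FAISS instead of a linear scan.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (hypothetical values, not from a real model).
query = [1.0, 0.0, 1.0]
docs = {
    "doc_similar": [0.9, 0.1, 1.1],
    "doc_unrelated": [0.0, 1.0, 0.0],
}

# Rank documents by similarity to the query, as a RAG retriever would.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked[0])
```

In a RAG pipeline, the top-ranked documents from this kind of similarity search are what get stuffed into the LLM prompt as context.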

Posted 2 weeks ago


5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Overview of Job Role: We are looking for a skilled and motivated DevOps Engineer to join our growing team. The ideal candidate will have expertise in AWS, CI/CD pipelines, and Terraform, with a passion for building and optimizing scalable, reliable, and secure infrastructure. This role involves close collaboration with development, QA, and operations teams to streamline deployment processes and enhance system performance.

Roles & Responsibilities:

Leadership & Strategy:
- Lead and mentor a team of DevOps engineers, fostering a culture of automation, innovation, and continuous improvement.
- Define and implement DevOps strategies aligned with business objectives to enhance scalability, security, and reliability.
- Collaborate with cross-functional teams, including software engineering, security, MLOps, and infrastructure teams, to drive DevOps best practices.
- Establish KPIs and performance metrics for DevOps operations, ensuring optimal system performance, cost efficiency, and high availability.
- Advocate for CPU throttling, auto-scaling, and workload optimization strategies to improve system efficiency and reduce costs.
- Drive MLOps adoption, integrating machine learning workflows into CI/CD pipelines and cloud infrastructure.
- Ensure compliance with ISO 27001 standards, implementing security controls and risk management measures.

Infrastructure & Automation:
- Oversee the design, implementation, and management of scalable, secure, and resilient infrastructure on AWS.
- Lead the adoption of Infrastructure as Code (IaC) using Terraform, CloudFormation, and configuration management tools like Ansible or Chef.
- Spearhead automation efforts for infrastructure provisioning, deployment, and monitoring to reduce manual overhead and improve efficiency.
- Ensure high availability and disaster recovery strategies, leveraging multi-region architectures and failover mechanisms.
- Manage Kubernetes (or AWS ECS/EKS) clusters, optimizing container orchestration for large-scale applications.
- Drive cost optimization initiatives, implementing intelligent cloud resource allocation strategies.

CI/CD & Observability:
- Architect and oversee CI/CD pipelines, ensuring seamless automation of application builds, testing, and deployments.
- Enhance observability and monitoring by implementing tools like CloudWatch, Prometheus, Grafana, ELK Stack, or Datadog.
- Develop robust logging, alerting, and anomaly detection mechanisms to ensure proactive issue resolution.

Security & Compliance (ISO 27001 Implementation):
- Lead the implementation and enforcement of ISO 27001 security standards, ensuring compliance with information security policies and regulatory requirements.
- Develop and maintain an Information Security Management System (ISMS) to align with ISO 27001 guidelines.
- Implement secure access controls, encryption, IAM policies, and network security measures to safeguard infrastructure.
- Conduct risk assessments, vulnerability management, and security audits to identify and mitigate threats.
- Ensure security best practices are embedded into all DevOps workflows, following DevSecOps principles.
- Work closely with auditors and compliance teams to maintain SOC 2, GDPR, and other regulatory frameworks.

Required Skills and Qualifications:
- 5+ years of experience in DevOps, cloud infrastructure, and automation, with at least 3 years in a managerial or leadership role.
- Proven experience managing AWS cloud infrastructure at scale, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation.
- Expertise in Terraform and Infrastructure as Code (IaC) principles.
- Strong background in CI/CD pipeline automation with tools like Jenkins, GitHub Actions, GitLab CI, or CircleCI.
- Hands-on experience with Docker and Kubernetes (or AWS ECS/EKS) for container orchestration.
- Experience in CPU throttling, auto-scaling, and performance optimization for cloud-based applications.
- Strong knowledge of Linux/Unix systems, shell scripting, and network configurations.
- Proven experience with ISO 27001 implementation, ISMS development, and security risk management.
- Familiarity with MLOps frameworks like Kubeflow, MLflow, or SageMaker, and with integrating ML pipelines into DevOps workflows.
- Deep understanding of observability tools such as ELK Stack, Grafana, Prometheus, or Datadog.
- Strong stakeholder management and communication skills, and the ability to collaborate across teams.
- Experience in regulatory compliance, including SOC 2, ISO 27001, and GDPR.
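The auto-scaling strategies this listing emphasizes usually follow a simple control rule; Kubernetes' Horizontal Pod Autoscaler, for example, computes desired replicas as ceil(currentReplicas × currentUtilization / targetUtilization). The sketch below implements that rule with illustrative numbers; the min/max bounds are placeholders, not anything from the posting.

```python
import math

def desired_replicas(current_replicas, current_util, target_util,
                     min_replicas=1, max_replicas=10):
    """Replica count per the HPA scaling rule, clamped to [min, max]."""
    desired = math.ceil(current_replicas * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods at 90% average CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 0.90, 0.60))
# 4 pods at 30% against the same target -> scale in to 2.
print(desired_replicas(4, 0.30, 0.60))
```

The clamp matters in practice: without min/max bounds, a metrics spike could scale a service far past its cost budget, which is exactly the kind of workload-optimization concern the role describes.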

Posted 2 weeks ago


5.0 - 8.0 years

5 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site


HPE is seeking a Data Engineer with strong experience in machine learning workflows to build and optimize scalable data systems. You'll work closely with data scientists and data engineers to power ML-driven solutions.

Responsibilities:
- Collaborate closely with Machine Learning (ML) teams to deploy and monitor models in production, ensuring optimal performance and reliability.
- Design and implement experiments, and apply statistical analysis to validate model solutions and results.
- Lead efforts in ensuring high-quality data, proper governance practices, and excellent system performance in complex data architectures.
- Develop, maintain, and scale data pipelines, enabling machine learning and analytical models to function efficiently.
- Monitor and troubleshoot issues within data systems, resolving performance bottlenecks and implementing best practices.

Required Skills:
- 5–6 years of data engineering experience, with a proven track record in building scalable data systems.
- Proficiency in SQL and NoSQL databases, Python, and distributed processing technologies such as Apache Spark.
- Strong understanding of data warehousing concepts, data modelling, and architecture principles.
- Expertise in cloud platforms (AWS, GCP, Azure) and managing cloud-based data systems is an added advantage.
- Hands-on experience building and maintaining machine learning pipelines and utilizing tools like MLflow, Kubeflow, or similar frameworks.
- Experience with search, recommendation engines, or NLP (Natural Language Processing) technologies.
- Solid foundation in statistics and experimental design, particularly in relation to machine learning systems.
- Strong problem-solving skills and the ability to work independently and in a team-oriented environment.
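The statistics and experimental-design requirement above often comes down to tests like a two-proportion z-test for comparing conversion rates between model variants. The counts below are invented for illustration; in practice a library such as SciPy or statsmodels would do this computation.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-statistic for H0: the two groups share one conversion rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical A/B test: variant A converts 120/1000, variant B 150/1000.
z = two_proportion_z(120, 1000, 150, 1000)
# Two-sided p-value from the standard normal distribution.
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z={z:.3f} p={p_value:.4f}")
```

With these toy numbers the difference sits right around the conventional 5% significance threshold, which is the kind of borderline result where experimental design (sample size, stopping rules) matters most.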

Posted 2 weeks ago


5.0 - 10.0 years

15 - 20 Lacs

Bengaluru

Work from Office


Develop and deploy ML pipelines using MLOps tools, build FastAPI-based APIs, support LLMOps and real-time inferencing, collaborate with DS/DevOps teams, and ensure performance and CI/CD compliance in AI infrastructure projects.

Required Candidate Profile: Experienced Python developer with 4–8 years in MLOps, FastAPI, and AI/ML system deployment. Exposure to LLMOps, GenAI models, and containerized environments, with strong collaboration across the ML lifecycle.

Posted 2 weeks ago


3.0 years

0 Lacs

Tamil Nadu, India

Remote


Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

About Us: athenahealth is a leading provider of cloud-based services for healthcare systems, dedicated to transforming the way healthcare is delivered and managed. Founded in 1997, the company focuses on providing innovative solutions that enhance the efficiency and effectiveness of healthcare providers, enabling them to deliver better patient care. athenahealth is a forward-thinking organization dedicated to leveraging cutting-edge technology to drive innovation and deliver exceptional solutions. We are seeking a talented Lead AI/ML Engineer to join our dynamic team and lead the development of advanced machine learning models and AI solutions.

Job Summary: As an AI/ML Engineer, you will be responsible for designing, developing, and implementing machine learning algorithms and AI solutions that address complex business challenges. You will lead a team of engineers, collaborating with cross-functional teams to drive the successful deployment of AI initiatives. Your expertise will be crucial in shaping our AI strategy and ensuring the delivery of high-quality, scalable solutions.

Key Responsibilities:
- Lead the design and development of machine learning models and AI solutions to solve business problems.
- Select appropriate machine learning or deep learning models based on problem context and data availability.
- Develop, train, test, and validate models using state-of-the-art methodologies and frameworks.
- Collaborate with analytics teams, developers, and domain experts to acquire, clean, and preprocess large datasets.
- Engineer features and perform exploratory data analysis to ensure data quality and model readiness.
- Containerize models (Docker/Kubernetes), implement monitoring (Prometheus/Grafana), and automate pipelines (MLflow/Kubeflow).
- Implement models into production environments, ensuring robustness, scalability, and maintainability.
- Develop and maintain CI/CD pipelines for seamless model integration and deployment.
- Monitor and evaluate model performance post-deployment and iterate based on feedback and performance metrics.
- Document model architecture, development processes, and performance evaluations thoroughly.
- Share insights and technical know-how with team members to foster a culture of continuous learning and improvement.
- Research & innovation: stay ahead of AI trends (LLMs, generative AI) and advocate for ethical AI practices.
- Analyze large datasets to extract insights and improve model performance.
- Ensure compliance with data privacy and security regulations in all AI/ML initiatives.

Qualifications:
- Bachelor's or Master's degree in computer science, data science, machine learning, or a related field.
- Proven experience (3+ years) in AI/ML engineering, with a strong portfolio of successful projects.
- Proficiency in programming languages such as Python, R, or Java, and experience with ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis techniques.
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).

About athenahealth: Here's our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

What's unique about our locations? From a historic, 19th-century arsenal to a converted, landmark power plant, all of athenahealth's offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India, plus numerous remote employees, all work to modernize the healthcare experience, together.

Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That's another story.
athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. 
We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.

Posted 2 weeks ago


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description: Join Altera, a leader in programmable logic technology, as we strive to become the #1 FPGA company. We are looking for a skilled Jr. Data Scientist to develop and deploy production-grade ML pipelines and infrastructure across the enterprise. This is a highly technical, hands-on role focused on building scalable, secure, and maintainable machine learning solutions within the Azure ecosystem. As a member of our Data & Analytics team, you will work closely with other data scientists, ML specialists, and engineering teams to operationalize ML models using modern tooling such as Azure Machine Learning, Dataiku, and Kubeflow. You'll drive MLOps practices, automate workflows, and help build a foundation for responsible and reliable AI delivery.

Responsibilities:
- Design, build, and maintain automated ML pipelines from data ingestion through model training, validation, deployment, and monitoring using Azure Machine Learning, Kubeflow, and related tools.
- Deploy and manage machine learning models in production environments using cloud-native technologies like AKS (Azure Kubernetes Service), Azure Functions, and containerized environments.
- Partner with data scientists to transform experimental models into robust, production-ready systems, ensuring scalability and performance.
- Drive best practices for model versioning, CI/CD, testing, monitoring, and drift detection using Azure DevOps, Git, and third-party tools.
- Work with large-scale datasets from enterprise sources using Azure Synapse Analytics, Azure Data Factory, Azure Data Lake, etc.
- Build integrations with platforms like Dataiku to support collaborative workflows and low-code user interactions while ensuring the underlying infrastructure is robust and auditable.
- Set up monitoring pipelines to track model performance, ensure availability, manage retraining schedules, and respond to production issues.
- Write clean, modular code with clear documentation, tests, and reusable components for ML workflows.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- 3+ years of hands-on experience developing and deploying machine learning models in production environments.
- Strong programming skills in Python, with experience in ML libraries such as scikit-learn, TensorFlow, PyTorch, or XGBoost.
- Proven experience with the Microsoft Azure ecosystem, especially: Azure Machine Learning (AutoML, ML Designer, SDK); Azure Synapse Analytics and Data Factory; Azure Data Lake and Azure Databricks; Azure OpenAI and Cognitive Services.
- Experience with MLOps frameworks such as Kubeflow, MLflow, or Azure ML pipelines.
- Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins for model lifecycle automation.
- Experience working with APIs, batch and real-time data pipelines, and cloud security practices.

Why Join Us?
- Build and scale real-world ML systems on a modern Azure-based platform.
- Help shape the AI and ML engineering foundation of a forward-looking organization.
- Work cross-functionally with experts in data science, software engineering, and operations.
- Enjoy a collaborative, high-impact environment where innovation is valued and supported.

Job Type: Regular
Shift: Shift 1 (India)
Primary Location: Ecospace 1
Additional Locations:

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
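The drift detection called out in this listing can be as simple as comparing feature distributions between a training window and a production window. The sketch below computes a Population Stability Index (PSI) over hypothetical binned fractions; the bin values and the 0.1/0.25 thresholds are common rules of thumb used for illustration, not anything specified by the posting.

```python
import math

def psi(expected_frac, actual_frac):
    """Population Stability Index between two binned distributions.

    A common rule of thumb treats PSI < 0.1 as stable and
    PSI > 0.25 as significant drift.
    """
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_frac, actual_frac))

# Hypothetical bin fractions for one feature: training vs. production.
training = [0.25, 0.25, 0.25, 0.25]
stable_prod = [0.24, 0.26, 0.25, 0.25]
drifted_prod = [0.05, 0.15, 0.30, 0.50]

print(f"stable:  {psi(training, stable_prod):.4f}")
print(f"drifted: {psi(training, drifted_prod):.4f}")
```

A monitoring pipeline would compute this per feature on a schedule and trigger retraining or an alert when the index crosses the chosen threshold.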

Posted 2 weeks ago


2.0 - 4.0 years

6 - 11 Lacs

Bengaluru

Work from Office


Zeta Global is looking for an experienced Machine Learning Engineer with industry-proven, hands-on experience delivering machine learning models to production to solve business problems. To be a good fit for our AI/ML team, you should ideally:
- Be a thought leader who can work with cross-functional partners to foster a data-driven organisation.
- Be a strong team player, with experience contributing to a large project as part of a collaborative team effort.
- Have extensive knowledge and expertise in machine learning engineering best practices and industry standards.
- Empower the product and engineering teams to make data-driven decisions.

What you need to succeed:
- 2 to 4 years of proven experience as a Machine Learning Engineer in a professional setting.
- Proficiency in a programming language (Python preferred).
- Prior experience building and deploying machine learning systems.
- Experience with containerization: Docker and Kubernetes.
- Experience with AWS cloud services like EKS, ECS, EMR, Lambda, and others.
- Fluency with workflow management tools like Airflow or dbt.
- Familiarity with distributed batch compute technologies such as Spark.
- Experience with modern data warehouses like Snowflake or BigQuery.
- Knowledge of MLflow, Feast, and Terraform is a plus.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education. We are seeking an innovative and highly skilled Lead Generative AI (GenAI) Engineer to spearhead the design, development, and deployment of advanced AI-powered solutions. In this role, you will lead a team of engineers and data scientists to harness cutting-edge Generative AI technologies and implement them to solve complex business problems, enhance user experiences, and drive innovation. This role combines deep technical expertise, leadership, and a strong understanding of AI trends and tools. Key Responsibilities Technical Leadership: Lead the end-to-end design and implementation of Generative AI solutions. Provide technical guidance and mentorship to engineers and data scientists working on GenAI projects. Stay updated with the latest trends, research, and advancements in Generative AI and Large Language Models (LLMs). Solution Development: Architect, train, and fine-tune state-of-the-art LLMs and generative AI models Develop and optimize pipelines for prompt engineering, retrieval-augmented generation (RAG), and domain-specific fine-tuning. Develop and deploy generative AI models, particularly focusing on ChatGPT, using Python on Azure or AWS Platform or .Net on Azure platform Ensure scalability, performance, and security of AI solutions deployed in production. 
Integration and Deployment: API Development: Ability to define and deliver API access for GenAI services, facilitating integration with other systems and applications. Collaborate with software engineering teams to integrate GenAI solutions into enterprise applications and services. Utilize cloud platforms (e.g., Azure, AWS, or GCP) to deploy and manage AI models and APIs. Leverage MLOps practices for continuous model monitoring, retraining, and improvement. Data Strategy and Preparation: Collaborate with data engineering teams to ensure high-quality data acquisition, preprocessing, and augmentation for model training and fine-tuning. Implement data governance and privacy practices in line with organizational policies. Innovation and Research: Experiment with new generative AI techniques, such as multimodal AI, reinforcement learning with human feedback (RLHF), and active learning. Evaluate and recommend AI frameworks, libraries, and platforms for project requirements. Stakeholder Collaboration: Work closely with product managers, business stakeholders, and UX designers to define AI-powered product features and use cases. Present technical concepts, project progress, and AI capabilities to non-technical audiences. Key Requirements Technical Skills: Hands-on experience with cloud platforms and services for AI/ML, such as Azure AI Services, Azure Machine Learning, AWS Bedrock, or Google Vertex AI. Hands on experience in any of LLMs such as OpenAI’s ChatGPT Models , Gemini, Llama 2 ,Claude 2 ,Grok Hands on experience in any of the agentic frameworks like LangChain, Semantic kernel, AutoGen, CrewAi Hands on experience using any of vector database like Chroma, Pinecone, Weaviate, Faiss Experience with multimodal AI and advanced techniques like Tree-of-Thoughts, Retrieval-Augmented Generation (RAG), or Reinforcement Learning with Human Feedback (RLHF) Strong expertise in LLMs and generative AI frameworks like OpenAI, Hugging Face Transformers, or similar platforms. 
Deep understanding of natural language processing (NLP) concepts, including tokenization, embeddings, and sequence-to-sequence models. Proficiency in Python and libraries such as TensorFlow, PyTorch, and Scikit-learn. Experience in CI/CD pipeline management and automation tools, particularly within the Azure DevOps environment. Knowledge of containerization (e.g., Docker) and orchestration tools is also important. Familiarity with MLOps tools and practices, such as MLflow, Kubeflow, or Docker. Qualifications Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field (Ph.D. preferred). 8+ years of experience in AI/ML engineering, with 2+ years specifically in Generative AI. Minimum of 2 years of experience in building Conversational AI applications using cloud-based services and in orchestrating AI/ML services for building a complete solution. Minimum of 6 years of extensive full-time experience in Data Analysis, Statistics, Machine Learning, or Computer Science. Proven track record of leading AI projects from inception to production. Experience with multimodal AI and advanced techniques like Tree-of-Thoughts, Retrieval-Augmented Generation (RAG), or Reinforcement Learning with Human Feedback (RLHF). Certifications in AI/ML or cloud platforms (e.g., Azure AI Engineer, AWS Certified Machine Learning). Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law. Candidate Privacy Policy Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy.
This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
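The retrieval step of the retrieval-augmented generation (RAG) pipelines this Orion role describes reduces to nearest-neighbour search over embeddings. A toy sketch with hand-made 3-d vectors (real systems use an embedding model and a vector database such as those the posting names; every vector and document name below is invented):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for three documents and a query (illustrative only).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Retrieval = pick the document whose embedding is closest to the query;
# the retrieved text is then prepended to the LLM prompt.
best = max(docs, key=lambda name: cosine(docs[name], query))
print(best)  # 'refund policy' is closest to the query vector
```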

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description Role Title - Team Lead and Lead Developer – Backend and Database (Node) Role Type - Full time Role Reports to Chief Technology Officer Category - Regular / Fixed Term Job location - 8th floor, E Block, IITM Research Park, Taramani Job Overview We're seeking an experienced Senior Backend and Database Developer and Team Lead for our backend team. The ideal candidate will combine technical expertise in full-stack development with extensive experience in backend development, with strong process optimization skills and innovative thinking to drive team efficiency and product quality. Job Specifications Educational Qualifications - Any UG/PG graduates Experience - 5+ years Key Job Responsibilities Software architecture design Architect and oversee development of backend in Node Familiarity with MVC and design patterns and have a strong grasp of data structures Basic database theory – ACID vs eventually consistent, OLTP vs OLAP Different types of databases - relational stores, K/V stores, text stores, graph DBs, vector DBs, time series DBs Database design & structures Experience with data modeling concepts including normalization, normal forms, star schema (management and evolution), and dimensional modeling Expertise in SQL DBs (MySQL, PostgreSQL), and NoSQL DBs (MongoDB, Redis) Data pipeline design based on operational principles.
Dealing with failures, restarts, reruns, pipeline changes, and various file storage formats Backend & API frameworks & other services To develop and maintain RESTful, JSON RPC and other APIs for various applications Understanding of backend JS frameworks such as Express.js, NestJS , and documentation tools like Postman, Swagger Experience with callbacks with Webhooks, callbacks and other event-driven systems, and third-party solution integrations (Firebase, Google Maps, Amplify and others) QA and testing Automation testing and tooling knowledge for application functionality validation and QA Experience with testing routines and fixes with various testing tools (JMeter, Artillery or others) Load balancers, caching and serving Experience with event serving (Apache Kafka and others), caching and processing (Redux, Apache Spark or other frameworks) and scaling (Kubernetes and other systems) Experience with orchestrators like Airflow for huge data workloads, Scripting and automation for various purposes including scheduling and logging Production, Deployment & Monitoring Experience of CI/CD pipelines with tools like Jenkins/Circle CI, and Docker for containerization Experience in deployment and monitoring of apps on cloud platforms e.g., AWS, Azure and bare metal configurations Documentation, version control and ticketing Version control with Git, and ticketing bugs and features with tools like Jira or Confluence Backend documentation and referencing with tools like Swagger, Postman Experience in creating ERDs for various data types and models and documentation of evolving models Behavioral competencies Attention to detail Ability to maintain accuracy and precision in financial records, reports, and analysis, ensuring compliance with accounting standards and regulations. Integrity and Ethics Commitment to upholding ethical standards, confidentiality, and honesty in financial practices and interactions with stakeholders. 
Time management Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals. Adaptability and Flexibility Capacity to adapt to changing business environments, new technologies, and evolving accounting standards, while remaining flexible in response to unexpected challenges. Communication & collaboration Experience presenting to stakeholders and executive teams Ability to bridge technical and non-technical communication Excellence in written documentation and process guidelines to work with other frontend teams Leadership competencies Team leadership and team building Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards Agile methodology to be followed, Scrum meetings to be conducted for sync-ups Strategic Thinking Ability to develop and implement long-term goals and strategies aligned with the organization’s vision Ability to adopt new tech and being able to handle tech debt to bring the team up to speed with client requirements Decision-Making Capable of making informed and effective decisions, considering both short-term and long-term impacts Insight into resource allocation and sprint building for various projects Team Building Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members Code reviews Troubleshooting, weekly code reviews and feature documentation and versioning, and standards improvement Improving team efficiency Research and integrate AI-powered development tools (GitHub Copilot, Amazon CodeWhisperer) Added advantage points AI/ML applications Experience in AI/ML application backend workflows (e.g: MLFlow) and serving the models Data processing & maintenance Familiarity with at least one data processing platform (e.g., Spark, Flink, Beam/Google Dataflow, AWS Batch) Experience with Elasticsearch and other client-side data processing 
frameworks Understand data management and analytics – with metadata catalogs (e.g., AWS Glue), data warehousing (e.g., AWS Redshift) Data governance Quality control, policies around data duplication, definitions, company-wide processes around security and privacy Interested candidates can share updated resumes to the email ID below. Contact Person - Janani Santhosh Senior HR Executive Email Id - careers@plenome.com
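The webhook and event-driven integration work listed in this backend role typically verifies a signature before trusting an incoming payload. The pattern is sketched here in Python for brevity (a Node service would use the `crypto` module the same way); the secret value is made up:

```python
import hashlib
import hmac

SECRET = b"made-up-shared-secret"  # hypothetical; loaded from config in practice

def sign(payload: bytes) -> str:
    """HMAC-SHA256 signature a sender would attach as a header."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(payload), signature)

body = b'{"event": "order.created"}'
sig = sign(body)
print(verify(body, sig))         # True: untampered payload
print(verify(b"tampered", sig))  # False: body no longer matches signature
```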

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs Location: Chennai What you’ll do Tame data → pull, clean, and shape structured & unstructured data. Orchestrate pipelines → Airflow / Step Functions / ADF… your call. Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI. Scale → Spark / Databricks for the heavy lifting. Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow. Pair up → work with engineers, architects, and business folks to solve real problems, fast. What you bring 3+ yrs hands-on MLOps (4-5 yrs total software experience). Proven chops on one hyperscaler (AWS, Azure, or GCP). Confidence with Databricks/Spark, Python, SQL, TensorFlow/PyTorch/Scikit-learn. You debug Kubernetes in your sleep and treat Dockerfiles like breathing. You prototype with open-source first, choose the right tool, then make it scale. Sharp mind, low ego, bias for action. Nice-to-haves SageMaker, Azure ML, or Vertex AI in production. Love for clean code, clear docs, and crisp PRs. Why Datadivr? Domain focus: we live and breathe F&B — your work ships to plants, not just slides. Small team, big autonomy: no endless layers; you own what you build. 📬 How to apply Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant. Know someone perfect? Please share — good people know good people.
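"Tame data" in practice starts with small, testable cleaning steps before anything reaches a pipeline. A hedged stdlib sketch; the field names and rules are invented for illustration:

```python
from typing import Optional

def clean_record(raw: dict) -> Optional[dict]:
    """Normalise one raw record; drop it if required fields are missing."""
    if not raw.get("id") or raw.get("amount") in (None, ""):
        return None
    return {
        "id": str(raw["id"]).strip(),
        "amount": float(raw["amount"]),
        "plant": str(raw.get("plant", "unknown")).strip().lower(),
    }

rows = [
    {"id": " 17 ", "amount": "42.5", "plant": " Chennai "},
    {"id": "", "amount": "3.0"},  # dropped: no id
]
cleaned = [r for r in (clean_record(x) for x in rows) if r is not None]
print(cleaned)  # [{'id': '17', 'amount': 42.5, 'plant': 'chennai'}]
```

Steps this small are easy to unit-test, which is what makes them safe to run inside an orchestrated pipeline.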

Posted 2 weeks ago

Apply

0 years

0 - 0 Lacs

Panaji

On-site

GlassDoor logo

Education: Bachelor’s or Master’s in Computer Science, Software Engineering, or a related field (or equivalent practical experience). Hands-On ML/AI Experience: Proven record of deploying, fine-tuning, or integrating large-scale NLP models or other advanced ML solutions. Programming & Frameworks: Strong proficiency in Python (PyTorch or TensorFlow) and familiarity with MLOps tools (e.g., Airflow, MLflow, Docker). Security & Compliance: Understanding of data privacy frameworks, encryption, and secure data handling practices, especially for sensitive internal documents. DevOps Knowledge: Comfortable setting up continuous integration/continuous delivery (CI/CD) pipelines, container orchestration (Kubernetes), and version control (Git). Collaborative Mindset: Experience working cross-functionally with technical and non-technical teams; ability to clearly communicate complex AI concepts. Role Overview Collaborate with cross-functional teams to build AI-driven applications for improved productivity and reporting. Lead integrations with hosted AI solutions (ChatGPT, Claude, Grok) for immediate functionality without transmitting sensitive data, while laying the groundwork for a robust in-house AI infrastructure. Develop and maintain on-premises large language model (LLM) solutions (e.g., Llama) to ensure data privacy and secure intellectual property. Key Responsibilities LLM Pipeline Ownership: Set up, fine-tune, and deploy on-prem LLMs; manage data ingestion, cleaning, and maintenance for domain-specific knowledge bases. Data Governance & Security: Assist our IT department in implementing role-based access controls, encryption protocols, and best practices to protect sensitive engineering data. Infrastructure & Tooling: Oversee hardware/server configurations (or cloud alternatives) for AI workloads; evaluate resource usage and optimize model performance.
Software Development: Build and maintain internal AI-driven applications and services (e.g., automated report generation, advanced analytics, RAG interfaces, as well as custom desktop applications). Integration & Automation: Collaborate with project managers and domain experts to automate routine deliverables (reports, proposals, calculations) and speed up existing workflows. Best Practices & Documentation: Define coding standards, maintain technical documentation, and champion CI/CD and DevOps practices for AI software. Team Support & Training: Provide guidance to data analysts and junior developers on AI tool usage, ensuring alignment with internal policies and limiting model “hallucinations.” Performance Monitoring: Track AI system metrics (speed, accuracy, utilization) and implement updates or retraining as necessary. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹40,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Yearly bonus Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 10/06/2025
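Data ingestion for the domain-specific knowledge bases this role owns commonly splits documents into overlapping chunks before they are embedded and indexed. A minimal sketch; the chunk size and overlap are arbitrary placeholders, not recommendations:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list:
    """Split text into fixed-size character chunks with overlap, so context
    near a chunk boundary still appears in a neighbouring chunk."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[start:start + size] for start in range(0, len(text), step)]

doc = "x" * 500
chunks = chunk_text(doc, size=200, overlap=50)
print(len(chunks))     # 4 chunks, starting at offsets 0, 150, 300, 450
print(len(chunks[0]))  # 200
```

Production ingestion usually chunks on token or sentence boundaries rather than raw characters, but the overlap idea is the same.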

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Peelamedu, Coimbatore, Tamil Nadu

On-site

Indeed logo

Job Title: Senior AI Developer – Computer Vision & LLMs Location: Coimbatore, Work from office role Experience Required: 4+ years Employment Type: Full-Time, Permanent Company: Katomaran Technologies About Us Katomaran Technologies is a cutting-edge technology company building real-world AI applications that span across computer vision, large language models (LLMs), and AI agentic systems. We are looking for a highly motivated Senior AI Developer who thrives at the intersection of technology, leadership, and innovation. You will play a core role in architecting AI products , leading engineering teams , and collaborating directly with the founding team and customers to turn vision into scalable, production-ready solutions. Key Responsibilities Architect and develop scalable AI solutions using computer vision, LLMs, and agent-based AI architectures. Collaborate with the founding team to define product roadmaps and AI strategy. Lead and mentor a team of AI and software engineers, ensuring high code quality and project delivery timelines. Develop robust, efficient pipelines for model training, validation, deployment, and real-time inference. Work closely with customers and internal stakeholders to translate requirements into AI-powered applications. Stay up to date with state-of-the-art research in vision models (YOLO, SAM, CLIP, etc.), transformers, and agentic systems (AutoGPT-style orchestration). Optimize AI models for deployment on cloud and edge environments. Required Skills & Qualifications Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related fields. 5+ years of hands-on experience building AI applications in computer vision and/or NLP. Strong knowledge of Deep Learning frameworks (PyTorch, TensorFlow, OpenCV, HuggingFace, etc.). Proven experience with LLM fine-tuning, prompt engineering , and embedding-based retrieval (RAG) . Solid understanding of agentic systems such as LangGraph, CrewAI, AutoGen, or custom orchestrators. 
Ability to design and manage production-grade AI systems (Docker, REST APIs, GPU optimization, etc.). Strong communication and leadership skills, with experience managing small to mid-size teams. Startup mindset – self-driven, ownership-oriented, and comfortable in ambiguity. Nice to Have Experience with video analytics platforms or edge deployment (Jetson, Coral, etc.). Experience with programming skills in C++ will be an added advantage Knowledge of MLOps practices and tools (MLflow, Weights & Biases, ClearML, etc.). Exposure to Reinforcement Learning or multi-agent collaboration models. Customer-facing experience or involvement in AI product strategy. What We Offer Medical insurance Paid sick time Paid time off PF To Apply: Send your resume, GitHub/portfolio, and a brief note about your most exciting AI project to hr@katomaran.com. Job Type: Full-time Pay: Up to ₹600,000.00 per year Benefits: Health insurance Paid sick time Paid time off Provident Fund Schedule: Day shift Ability to commute/relocate: Peelamedu, Coimbatore, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: AI applications: 5 years (Required) LLM: 5 years (Required) Work Location: In person
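Evaluating the detection models this posting names (YOLO and friends) rests on intersection-over-union between predicted and ground-truth boxes. A stdlib sketch using (x1, y1, x2, y2) corner coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0 (identical boxes)
```

Detection benchmarks count a prediction as correct when its IoU with a ground-truth box clears a threshold (0.5 is a common choice).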

Posted 2 weeks ago

Apply

2.0 - 3.0 years

0 - 0 Lacs

Vadodara

Remote

GlassDoor logo

Vadodara, Gujarat, India Job Title: AI/ML Engineer Location: Job Type: Full-time Experience Level: 2-3 years Salary Range : 60-75k Job Summary: We are looking for a skilled and motivated AI/ML Engineer to join our team. The ideal candidate will design, develop, and implement machine learning models and AI-driven solutions to solve complex business problems. You will collaborate with cross-functional teams to bring scalable and innovative AI products to production. Key Responsibilities: Design, build, and deploy machine learning models and algorithms. Work with large datasets to extract meaningful patterns and insights. Collaborate with data engineers to ensure efficient data pipelines. Conduct experiments and perform model evaluation and optimization. Integrate ML models into production systems. Stay updated with the latest research and developments in AI/ML. Create documentation and communicate findings and models effectively. Requirements: Technical Skills: Proficiency in Python and ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). Experience with data preprocessing, feature engineering, and model evaluation. Knowledge of deep learning, NLP, computer vision, or reinforcement learning (as per role focus). Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps tools. Experience with databases (SQL, NoSQL) and version control (Git). Education & Experience: Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or related field. 2-3 years of experience in machine learning or AI roles. Soft Skills: Strong problem-solving and analytical skills. Ability to work independently and in a team environment. Excellent communication and documentation skills. Preferred Qualifications: Publications or contributions to open-source ML projects. Experience in deploying models using Docker/Kubernetes. Familiarity with ML lifecycle tools like MLflow, Kubeflow, or Airflow. What We Offer: Competitive salary and benefits. 
Opportunity to work on cutting-edge AI technologies. Flexible working hours and remote options. Supportive and innovative team environment. Why join us? As all our products are enterprise-grade solutions so you get to work with the latest technologies to compete with the world. Flexible working hours Work-life balance 5 Days Working Job Type: Full-time Location : Vadodara, Gujarat Qualification: Bachelor's degree or equivalent Mode of working: Remote [WFH]
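The model evaluation and optimization duties listed above usually start from confusion-matrix counts. A small sketch of the two headline metrics:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision and recall from confusion-matrix counts, guarding
    against division by zero when a class never occurs."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall(tp=8, fp=2, fn=4)
print(p, r)  # 0.8 precision, ~0.667 recall
```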

Posted 2 weeks ago

Apply


7.0 years

0 Lacs

Chhattisgarh, India

Remote

Linkedin logo

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. 
The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7 + years related experience; or M.S. with 5+ years of experience; or Ph.D with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. 
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when and labeled data is created and used 3+ years experience with ML Platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.) Production experience with infrastructure-as-code tools such as Terraform, FluxCD Expert level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal Crowdstrike tools Expert level experience with CI/CD frameworks such as GitHub Actions Expert level experience with containerization frameworks Strong analytical and problem solving skills, capable of working in a dynamic environment Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. 
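"Modularize complex ML code into standardized and repeatable components", as this role puts it, often means agreeing on a shared step interface that pipelines compose. A hedged sketch of that pattern; the step names and the `Step` alias are invented for illustration:

```python
from typing import Callable

# A "step" is any callable from data to data; a pipeline composes them.
Step = Callable[[list], list]

def make_pipeline(*steps: Step) -> Step:
    """Chain steps left to right into a single callable."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

def drop_negatives(xs):
    return [x for x in xs if x >= 0]

def scale_to_unit(xs):
    m = max(xs)
    return [x / m for x in xs] if m else xs

pipeline = make_pipeline(drop_negatives, scale_to_unit)
print(pipeline([4.0, -1.0, 2.0]))  # [1.0, 0.5]
```

Because every step shares one signature, steps can be reused across experiments, tested in isolation, and swapped without touching the rest of the pipeline, which is the property the platform is after.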
Experience With The Following Is Desirable Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC VJ1 Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. 
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

P-375 At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. Databricks Mosaic AI offers a unique data-centric approach to building enterprise-quality Machine Learning and Generative AI solutions, enabling organizations to securely and cost-effectively own and host ML and Generative AI models, augmented or trained with their enterprise data. And we're only getting started in Bengaluru, India - we are currently setting up 10 new teams from scratch! As a Senior Software Engineer at Databricks India, you can work across: Backend, DDS (Distributed Data Systems), Full Stack. The Impact You'll Have: Our Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as: Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience. Deliver reliable, high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store. Build reliable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Apache Spark™, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and develop product features that empower customers to easily view and control platform usage.
Our DDS team spans across: Apache Spark™ Data Plane Storage Delta Lake Delta Pipelines Performance Engineering As a Full Stack software engineer, you will work closely with your team and product management to bring that delight through great user experience. What We Look For BS (or higher) in Computer Science, or a related field 6+ years of production level experience in one of: Python, Java, Scala, C++, or similar language. Experience developing large-scale distributed systems from scratch Experience working on a SaaS platform or with Service-Oriented Architectures. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. 
Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Andhra Pradesh, India

Remote

Linkedin logo

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. 
The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You'll Do: Help design, build, and facilitate adoption of a modern Data+ML platform. Modularize complex ML code into standardized and repeatable components. Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring. Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines. Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines. Review code changes from data scientists and champion software development best practices. Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment. What You'll Need: B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production.
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
Experience building data platform products or features with Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable.
Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.).
Production experience with infrastructure-as-code tools such as Terraform and FluxCD.
Expert-level experience with Python; Java/Scala exposure is recommended.
Ability to write Python interfaces that give data scientists standardized, simplified access to internal CrowdStrike tools.
Expert-level experience with CI/CD frameworks such as GitHub Actions.
Expert-level experience with containerization frameworks.
Strong analytical and problem-solving skills; capable of working in a dynamic environment.
Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
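To make the requirement to provide standardized, simplified Python interfaces for data scientists concrete, here is a minimal sketch of a registry-based step interface. All names (register_step, run_pipeline, drop_nulls) are illustrative inventions, not CrowdStrike internals:

```python
# Minimal sketch of a standardized pipeline-step interface.
# Every name here is hypothetical; the point is the pattern: data
# scientists compose steps by stable name instead of importing internals.
from typing import Callable, Dict, List

_STEPS: Dict[str, Callable[[List[dict]], List[dict]]] = {}

def register_step(name: str):
    """Register a transformation under a stable name."""
    def decorator(fn):
        _STEPS[name] = fn
        return fn
    return decorator

@register_step("drop_nulls")
def drop_nulls(records):
    # Keep only records with no None values.
    return [r for r in records if all(v is not None for v in r.values())]

def run_pipeline(records, step_names):
    """Apply registered steps in order; unknown names fail fast (KeyError)."""
    for name in step_names:
        records = _STEPS[name](records)
    return records

cleaned = run_pipeline([{"a": 1}, {"a": None}], ["drop_nulls"])
print(cleaned)  # → [{'a': 1}]
```

The design choice being illustrated: a flat registry keeps the user-facing surface small and versionable, which is what makes "standardized and simplified" interfaces reviewable at platform scale.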
Experience with the following is desirable: Go, Iceberg, Pinot or another time-series/OLAP-style database, Jenkins, Parquet, Protocol Buffers/gRPC.
Benefits of working at CrowdStrike:
Remote-friendly and flexible work culture
Market leader in compensation and equity awards
Comprehensive physical and mental wellness programs
Competitive vacation and holidays for recharge
Paid parental and adoption leaves
Professional development opportunities for all employees regardless of level or role
Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
Vibrant office culture with world-class amenities
Great Place to Work Certified™ across the globe
CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Title: Head - Python Engineering Job Summary: We are looking for a skilled Python, AI/ML Developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives. Responsibilities Application and Back-End Development: Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency. Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services. Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices. Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval. Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions. Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency. AI/ML Development: Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision. Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs. Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering. 
Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow. Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services. Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems. Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives. Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary. Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges. Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models. Performance Optimization and Debugging: Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency. Use profiling tools to debug and optimize code for memory and speed improvements. Implement caching mechanisms to reduce latency and improve application responsiveness. Testing, Deployment, and Maintenance: Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose. Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes. Deploy and manage applications in production environments with a focus on security, monitoring, and reliability. Monitor and troubleshoot live systems, ensuring uptime and responsiveness. Collaboration and Teamwork: Work closely with front-end developers, designers, and product managers to implement new features and resolve issues. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery. 
Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement. Required Skills and Qualifications Technical Expertise: Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI. Solid understanding of RESTful API development, integration, and optimization. Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB). Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes. Expertise in using Git for version control and collaborating in distributed teams. Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI. Strong understanding of software development principles, including OOP, design patterns, and MVC architecture. Preferred Skills: Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ. Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights. Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus. Familiarity with big data frameworks like Apache Spark or Hadoop. Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run. Soft Skills: Strong problem-solving abilities with a keen eye for detail and quality. Excellent communication skills to effectively collaborate with cross-functional teams. Adaptability to changing project requirements and emerging technologies. Self-motivated with a passion for continuous learning and innovation. Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
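The caching requirement in this posting (Redis or Memcached to reduce latency) can be illustrated with an in-process TTL cache. This is a stand-in to show the idea only, not a Redis client; the function and timeout are invented:

```python
# In-process TTL cache sketch illustrating the idea behind Redis/Memcached
# usage: cache expensive results for a bounded time to cut latency.
import time
from functools import wraps

def ttl_cache(seconds: float):
    def decorator(fn):
        store = {}  # key -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # cache hit: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(seconds=60)
def expensive_lookup(user_id: int) -> str:
    global calls
    calls += 1                         # count how often the real work runs
    return f"profile:{user_id}"

expensive_lookup(7)
expensive_lookup(7)  # served from cache; underlying function ran once
print(calls)  # → 1
```

In production the `store` dict would be replaced by a shared Redis instance so that all application workers see the same cache, which is the trade-off the posting's Redis/Memcached mention is really about.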

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer’s mission. Join us in shaping a data-driven organization that makes a meaningful global impact. Role Summary We are seeking a technically adept and experienced Data Solutions Engineering Manager with a passion for developing data products and innovative solutions to create competitive advantages for Pfizer’s commercial business units. This role requires a strong technical background to ensure effective collaboration with engineering and developer team members. As a Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. You will work closely with stakeholders to understand their needs and ensure that ingested data meets business users' needs and is well modeled and organized to promote scalable usage and good data hygiene. You will work with complex and advanced data environments, employ the right architecture to handle data, and support various analytics use cases including business reporting, production data pipelines, machine learning, optimization models, statistical models, and simulations. The Data Solutions Engineering Manager will ensure data quality and integrity by validating and cleansing data, identifying and resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT).
The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry. Role Responsibilities Project solutioning, including scoping and estimation. Data sourcing, investigation, and profiling. Prototyping and design thinking. Developing data pipelines & complex data workflows. Actively contribute to project documentation and playbook, including but not limited to physical models, conceptual models, data dictionaries and data cataloging. Accountable for engineering development of both internal and external facing data solutions by conforming to EDSE and Digital technology standards. Partner with internal/external partners to design, build and deliver best-in-class data products globally to improve the quality of our customer analytics and insights and the growth of commercial in its role in helping patients. Demonstrate outstanding collaboration and operational excellence. Drive best practices and world-class product capabilities. Qualifications Bachelor’s degree in a technical area such as computer science, engineering or management information science. 5+ years of combined data warehouse/data lake experience as a hands-on data engineer. 5+ years developing data products and data features that serve analytics and AI use cases. Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Technical Skillset 5+ years of hands-on experience working with SQL, Python, and object-oriented scripting languages (e.g., Java, C++, etc.) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 5+ years of hands-on experience delivering data lake/data warehousing projects.
Experience working with cloud-native SQL and NoSQL database platforms. Snowflake experience is desirable. Experience with AWS services EC2, EMR, RDS, and Spark is preferred. Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow. Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred. Great communication skills. Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
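The data-quality responsibilities this posting describes (validating and cleansing data, identifying anomalies, implementing data quality checks) can be sketched as a simple row-level check. The column names and the range threshold here are invented for illustration:

```python
# Hedged sketch of row-level data-quality checks: null validation plus a
# simple out-of-range flag. Column names and threshold are made up.
def quality_report(rows, required=("patient_id", "sales"), max_sales=1e6):
    """Return a list of (row_index, column, issue) tuples."""
    issues = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append((i, col, "null"))
        sales = row.get("sales")
        if isinstance(sales, (int, float)) and sales > max_sales:
            issues.append((i, "sales", "out_of_range"))
    return issues

rows = [
    {"patient_id": "P1", "sales": 120.0},
    {"patient_id": None, "sales": 2e7},
]
print(quality_report(rows))
# → [(1, 'patient_id', 'null'), (1, 'sales', 'out_of_range')]
```

In practice checks like these run inside the pipeline (and feed SIT/UAT evidence) so bad rows are caught before they reach downstream reporting or models.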

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Hyderabad Area

On-site

Linkedin logo

Job Title: Data Engineer (Snowflake + dbt) Location: Hyderabad, India Job Type: Full-time Job Description We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt and be able to effectively work in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client’s organization. Key Responsibilities Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files. Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms. Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets. Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design. Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows. Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code. Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets. Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery. Required Qualifications Data Engineering Skills 3–5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt. Capable of building and maintaining ELT pipelines using dbt and Snowflake with guidance on architecture and best practices. 
Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/dimensional). Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems. Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations). Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control-M. Experience using Git for version control and exposure to CI/CD workflows in team environments. Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake. Working knowledge of Python for basic automation and data manipulation tasks. Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR. Data Quality & Documentation Familiar with dbt testing and documentation practices (e.g., dbt tests, dbt docs). Awareness of standard data validation and monitoring techniques for reliable pipeline development. Soft Skills & Collaboration Strong problem-solving skills and the ability to debug SQL and transformation logic effectively. Able to document work clearly and communicate technical solutions to a cross-functional team. Experience working in Agile settings, participating in sprints, and handling shifting priorities. Comfortable collaborating with analysts, data scientists, and architects across onshore/offshore teams. High attention to detail, a proactive attitude, and adaptability in dynamic project environments. Nice to Have Experience working in client-facing or consulting roles. Exposure to AI/ML data pipelines or tools like feature stores and MLflow. Familiarity with enterprise-grade data quality tools. Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
Certifications such as Snowflake SnowPro and dbt Certified Developer (Data Engineering) are a plus. Why Join Us? Opportunity to work on diverse and challenging projects in a consulting environment. Collaborative work culture that values innovation and curiosity. Access to cutting-edge technologies and a focus on professional development. Competitive compensation and benefits package. Be part of a dynamic team delivering impactful data solutions.
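The layered architecture this posting describes (staging, intermediate, mart) can be sketched in plain Python. In dbt these layers would be SQL SELECT models, so this is only an analogy; the field names are invented:

```python
# Python analogy for dbt's staging → mart layering. In dbt each function
# below would be a SQL model; the data and field names are made up.
def staging(raw):
    """Staging layer: standardize names and cast types."""
    return [{"order_id": r["id"], "amount": float(r["amt"])} for r in raw]

def mart_revenue(stg):
    """Mart layer: aggregate to a business-facing metric."""
    return sum(r["amount"] for r in stg)

raw = [{"id": 1, "amt": "10.5"}, {"id": 2, "amt": "4.5"}]
print(mart_revenue(staging(raw)))  # → 15.0
```

The point of the layering is the same in both worlds: keep cleansing and renaming in staging so that every mart downstream consumes one consistent, typed representation.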

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Panaji, Goa

On-site

Indeed logo

Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience). Hands-On ML/AI Experience: Proven record of deploying, fine-tuning, or integrating large-scale NLP models or other advanced ML solutions. Programming & Frameworks: Strong proficiency in Python (PyTorch or TensorFlow) and familiarity with MLOps tools (e.g., Airflow, MLflow, Docker). Security & Compliance: Understanding of data privacy frameworks, encryption, and secure data handling practices, especially for sensitive internal documents. DevOps Knowledge: Comfortable setting up continuous integration/continuous delivery (CI/CD) pipelines, container orchestration (Kubernetes), and version control (Git). Collaborative Mindset: Experience working cross-functionally with technical and non-technical teams; ability to clearly communicate complex AI concepts. Role Overview Collaborate with cross-functional teams to build AI-driven applications for improved productivity and reporting. Lead integrations with hosted AI solutions (ChatGPT, Claude, Grok) for immediate functionality without transmitting sensitive data, while laying the groundwork for a robust in-house AI infrastructure. Develop and maintain on-premises large language model (LLM) solutions (e.g., Llama) to ensure data privacy and secure intellectual property. Key Responsibilities LLM Pipeline Ownership: Set up, fine-tune, and deploy on-prem LLMs; manage data ingestion, cleaning, and maintenance for domain-specific knowledge bases. Data Governance & Security: Assist our IT department in implementing role-based access controls, encryption protocols, and best practices to protect sensitive engineering data. Infrastructure & Tooling: Oversee hardware/server configurations (or cloud alternatives) for AI workloads; evaluate resource usage and optimize model performance.
Software Development: Build and maintain internal AI-driven applications and services (e.g., automated report generation, advanced analytics, RAG interfaces, as well as custom desktop applications). Integration & Automation: Collaborate with project managers and domain experts to automate routine deliverables (reports, proposals, calculations) and speed up existing workflows. Best Practices & Documentation: Define coding standards, maintain technical documentation, and champion CI/CD and DevOps practices for AI software. Team Support & Training: Provide guidance to data analysts and junior developers on AI tool usage, ensuring alignment with internal policies and limiting model “hallucinations.” Performance Monitoring: Track AI system metrics (speed, accuracy, utilization) and implement updates or retraining as necessary. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹40,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Yearly bonus Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 10/06/2025
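The RAG interfaces this posting mentions rest on a retrieval step before generation. This toy keyword-overlap retriever shows only the idea; a production system would use embeddings and a vector store, and the documents here are invented:

```python
# Toy retrieval step of the RAG pattern: score documents by keyword
# overlap with the query. Illustrative only; real systems use embeddings.
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = set(query.lower().split())
    # Rank documents by how many query words they share.
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

docs = [
    "turbine maintenance report for unit 7",
    "office holiday schedule",
]
print(retrieve("unit 7 turbine report", docs))
# → ['turbine maintenance report for unit 7']
```

The retrieved passages would then be inserted into the LLM prompt, which is how a RAG interface grounds answers in internal documents without fine-tuning on them.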

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact. Role Overview: We are looking for a Data Scientist to join and build intelligent, data-driven solutions for our client that enable impactful decisions. This role requires contributions across the data science lifecycle, from data wrangling and exploratory analysis to building and deploying machine learning models. Whether you're just getting started or have years of experience, we're looking for individuals who are curious, analytical, and driven to make a difference with data. Responsibilities: Design, develop, and deploy machine learning models and analytical solutions. Conduct exploratory data analysis and feature engineering. Own or contribute to the end-to-end data science pipeline: data cleaning, modeling, validation, and deployment. Collaborate with cross-functional teams (engineering, product, business) to define problems and deliver measurable impact. Translate business challenges into data science problems and communicate findings clearly. Implement A/B tests, statistical tests, and experimentation strategies. Support model monitoring, versioning, and continuous improvement in production environments. Evaluate new tools, frameworks, and best practices to improve model accuracy and scalability. Required Skills: Strong programming skills in Python, including libraries such as pandas, NumPy, scikit-learn, matplotlib, and seaborn. Proficiency in SQL; comfortable querying large, complex datasets. Sound understanding of statistics, machine learning algorithms, and data modeling. Experience building end-to-end ML pipelines. Exposure to or hands-on experience with model deployment tools like FastAPI, Flask, and MLflow. Experience with data visualization and insight communication. Familiarity with version control tools (e.g., Git) and collaborative workflows.
Ability to write clean, modular code and document processes clearly Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch Familiarity with data engineering tools like Apache Spark, Kafka, Airflow, dbt Exposure to MLOps practices and managing models in production environments Working knowledge of cloud platforms like AWS, GCP, or Azure (e, SageMaker, BigQuery, Vertex AI) Experience designing and interpreting A/B tests or causal inference models Prior experience in high-growth startups or cross-functional leadership roles Educational Qualifications: Bachelors or Masters degree in Computer Science, Data Science, Mathematics, Engineering, or a related field Location : - Mumbai, Delhi / NCR, Bengaluru , Kolkata, Chennai, Hyderabad, Ahmedabad, Pune,India
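The "A/B tests, statistical tests, and experimentation strategies" line in the listing above can be sketched with nothing beyond the standard library. Below is a minimal two-proportion z-test comparing conversion rates between two variants; the `ab_test` function name and the sample counts are made up for illustration, not taken from any library:

```python
from math import erfc, sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # std. error of the difference
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                         # two-sided p-value via normal CDF
    return z, p_value

# 1000 users per variant: A converts 200 times, B converts 260 times.
z, p = ab_test(200, 1000, 260, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # B's lift is significant at the 5% level
```

In an interview setting, being able to derive the pooled standard error by hand tends to matter more than knowing a particular library call.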

Posted 2 weeks ago


4.0 years

0 Lacs

India

Remote


Mandatory Skills
✅ Python – Minimum 4+ years of hands-on experience
✅ AI/ML – Minimum 5+ years of strong experience in designing and implementing machine learning models, algorithms, and AI-driven solutions
✅ SQL – Minimum 2+ years of experience working with large datasets and query optimization

Key Responsibilities
Lead the development of advanced AI/ML models for real-world applications.
Collaborate with data scientists, analysts, and software engineers to deploy end-to-end data-driven solutions.
Design scalable machine learning pipelines and automate model deployment.
Work on feature engineering, model tuning, and performance optimization.
Ensure best practices in AI/ML model governance, performance monitoring, and retraining.

Preferred Qualifications
Experience with MLOps tools (e.g., MLflow, Kubeflow).
Strong knowledge of data preprocessing, feature extraction, and model interpretability.
Exposure to cloud platforms (AWS, Azure, or GCP).
Familiarity with deep learning frameworks (TensorFlow, PyTorch, etc.) is a plus.

💡 Work Mode: Flexible – choose to work from our Cochin or Trivandrum office, or fully remote!
📅 Start ASAP! We're only considering candidates with a notice period of 0–30 days.

Skills: data science, deep learning frameworks, AI, Python, feature extraction, AI/ML, SQL, cloud platforms, model interpretability, data preprocessing, ML, MLOps tools

Posted 2 weeks ago


Exploring mlflow Jobs in India

The mlflow job market in India is rapidly growing as companies across various industries are increasingly adopting machine learning and data science technologies. mlflow, an open-source platform for the machine learning lifecycle, is in high demand in the Indian job market. Job seekers with expertise in mlflow have a plethora of opportunities to explore and build a rewarding career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for mlflow professionals.

Average Salary Range

The average salary range for mlflow professionals in India varies based on experience:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in mlflow may include roles such as:

  1. Junior Machine Learning Engineer
  2. Machine Learning Engineer
  3. Senior Machine Learning Engineer
  4. Tech Lead
  5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to mlflow, professionals in this field are often expected to have skills in:

  • Python programming
  • Data visualization
  • Statistical modeling
  • Deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is mlflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between tracking, projects, and models in mlflow. (medium)
  • How do you deploy a machine learning model using mlflow? (medium)
  • Can you explain the concept of model registry in mlflow? (advanced)
  • What are the benefits of using mlflow in a machine learning project? (basic)
  • How do you manage experiments in mlflow? (medium)
  • What are some common challenges faced when using mlflow in a production environment? (advanced)
  • How can you scale mlflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in mlflow. (medium)
  • How do you compare different machine learning models using mlflow? (medium)
  • Describe a project where you successfully used mlflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in mlflow? (advanced)
  • How does mlflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of mlflow tracking server in a machine learning project? (medium)
  • What are some limitations of mlflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using mlflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with mlflow and how you resolved it. (advanced)
  • How do you manage dependencies in a mlflow project? (medium)
  • What are some key metrics to track when using mlflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of mlflow. (advanced)
  • How do you handle data drift in machine learning models deployed using mlflow? (advanced)
  • What are some security considerations to keep in mind when using mlflow in a production environment? (advanced)
  • How do you integrate mlflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using mlflow. (advanced)
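Many of the questions above (tracking, experiment management, artifact storage, comparing runs) come down to one idea: every run records its parameters, metrics, and artifacts somewhere queryable. As a way to internalise that before touching the real library, here is a stdlib-only toy that mimics the shape of MLflow's file-based tracking layout — `RunTracker` and its methods are illustrative stand-ins, not MLflow's actual API:

```python
import json
import tempfile
from pathlib import Path

class RunTracker:
    """Toy stand-in for an MLflow-style tracker: each run gets a directory
    holding its logged params, metrics, and an artifacts folder."""

    def __init__(self, root):
        self.root = Path(root)

    def start_run(self, run_name):
        run_dir = self.root / run_name
        (run_dir / "artifacts").mkdir(parents=True, exist_ok=True)
        return run_dir

    def log_params(self, run_dir, params):
        (run_dir / "params.json").write_text(json.dumps(params))

    def log_metric(self, run_dir, key, value):
        path = run_dir / "metrics.json"
        metrics = json.loads(path.read_text()) if path.exists() else {}
        metrics[key] = value
        path.write_text(json.dumps(metrics))

    def best_run(self, metric):
        # Rank runs by a logged metric, the way you would query real tracked runs.
        scored = [(json.loads((d / "metrics.json").read_text())[metric], d.name)
                  for d in self.root.iterdir() if (d / "metrics.json").exists()]
        return max(scored)[1]

# Log two runs with different learning rates, then pick the better one.
with tempfile.TemporaryDirectory() as tmp:
    tracker = RunTracker(tmp)
    for name, lr, acc in [("run_a", 0.01, 0.91), ("run_b", 0.10, 0.87)]:
        run = tracker.start_run(name)
        tracker.log_params(run, {"learning_rate": lr})
        tracker.log_metric(run, "accuracy", acc)
    best = tracker.best_run("accuracy")

print(best)  # run_a, the run with the higher logged accuracy
```

The real equivalents are `mlflow.start_run()`, `mlflow.log_param()`, `mlflow.log_metric()`, and `mlflow.search_runs()` against a local `mlruns/` directory or a tracking server; being able to explain that mapping covers the basic and medium tracking questions above.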

Closing Remark

As you explore opportunities in the mlflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies