7.0 - 12.0 years
27 - 35 Lacs
chennai
Work from Office
• Proficiency with Python (Pandas, NumPy), SQL, and Java
• Experience with LLMs, LangChain, and generative AI technologies
• Familiarity with ML frameworks (TensorFlow, PyTorch) and data engineering tools (Spark, Kafka)
• Build ML, NLP, and recommender models
Required Candidate profile
• Understanding of key data engineering concepts such as data lakes, columnar formats, ETL tools, and BI tools
• ML, NLP, recommender systems, personalization, segmentation, microservice architecture, and APIs
Posted 22 hours ago
0 years
0 Lacs
pune, maharashtra, india
Remote
neoBIM is a well-funded start-up software company revolutionizing the way architects design buildings with our innovative BIM (Building Information Modelling) software. As we continue to grow, we are building a small and talented team of developers to drive our software forward.
Tasks
We are looking for a highly skilled Generative AI Developer to join our AI team. The ideal candidate should have strong expertise in deep learning, large language models (LLMs), multimodal AI, and generative models (GANs, VAEs, diffusion models, or similar techniques). This role offers the opportunity to work on cutting-edge AI solutions, from training models to deploying AI-driven applications that redefine automation and intelligence.
• Develop, fine-tune, and optimize generative AI models, including LLMs, GANs, VAEs, diffusion models, and transformer-based architectures (a toy fine-tuning sketch follows this posting).
• Work with large-scale datasets and design self-supervised or semi-supervised learning pipelines.
• Implement multimodal AI systems that combine text, images, audio, and structured data.
• Optimize AI model inference for real-time applications and large-scale deployment.
• Build AI-driven applications for BIM (Building Information Modelling), content generation, and automation.
• Collaborate with data scientists, software engineers, and domain experts to integrate AI into production.
• Stay ahead of AI research trends and incorporate state-of-the-art methodologies.
• Deploy models using cloud-based ML pipelines (AWS/GCP/Azure) and edge computing solutions.
Requirements
Must-Have Skills
• Strong programming skills in Python (PyTorch, TensorFlow, JAX, or equivalent).
• Experience in training and fine-tuning large language models (LLMs) such as GPT, BERT, LLaMA, or Mixtral.
• Expertise in generative AI techniques, including diffusion models (e.g., Stable Diffusion, DALL-E, Imagen), GANs, and VAEs.
• Hands-on experience with transformer-based architectures (e.g., Vision Transformers, BERT, T5, GPT).
• Experience with MLOps frameworks for scaling AI applications (Docker, Kubernetes, MLflow, etc.).
• Proficiency in data preprocessing, feature engineering, and AI pipeline development.
• Strong background in mathematics, statistics, and optimization related to deep learning.
Good-to-Have Skills
• Experience with NeRFs (Neural Radiance Fields) for 3D generative AI.
• Knowledge of AI for Architecture, Engineering, and Construction (AEC).
• Understanding of distributed computing (Ray, Spark, or Tensor Processing Units).
• Familiarity with AI model compression and inference optimization (ONNX, TensorRT, quantization techniques).
• Experience in cloud-based AI development (AWS/GCP/Azure).
Benefits
• Work on high-impact AI projects at the cutting edge of generative AI.
• Competitive salary with growth opportunities.
• Access to high-end computing resources for AI training and development.
• A collaborative, research-driven culture focused on innovation and real-world impact.
• Flexible work environment with remote options.
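For context, a toy sketch of the transformer fine-tuning workflow this posting lists, using Hugging Face's Trainer. The base model, dataset, and hyperparameters are illustrative assumptions, not neoBIM's actual stack:

```python
# A toy fine-tuning loop with Hugging Face's Trainer. The base model
# (distilbert), dataset (a small IMDB slice), and hyperparameters are
# assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tokenize a tiny labeled dataset (batched map keeps it fast).
dataset = load_dataset("imdb", split="train[:200]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=128),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # updates the pretrained weights on the labeled examples
```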
Posted 22 hours ago
10.0 years
0 Lacs
karnataka, india
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.
About The Role
As an engineer on this team, you will play an integral role as we build out our ML Platform & GenAI Studio from the ground up. Since the launch of ChatGPT, phishing attacks have increased by 138%, making the ML platform a critical capability for CrowdStrike in its fight against bad actors. For this mission we are building a team in Bangalore. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines used for data preparation, cataloguing, feature engineering, model training, and model serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modelling attack paths for IT assets.
Location: Bangalore. Candidates must be comfortable visiting the office once a week.
What You’ll Do
• Help design, build and facilitate adoption of a modern ML platform, including support for use cases like GenAI
• Understand current ML workflows, anticipate future needs, identify common patterns, and exploit opportunities to templatize them into repeatable components for model development, deployment, and monitoring
• Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation, training and inference pipelines
• Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines (a toy DAG sketch follows this posting)
• Champion software development best practices around building distributed systems
• Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment
What You’ll Need
• B.S./M.S. in Computer Science or a related field and 10+ years related experience, or M.S. with 8+ years of experience; 3+ years experience developing and deploying machine learning solutions to production
• Familiarity with typical machine learning workflows from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labelled data is created and used
• 3+ years experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, etc.
• Experience building data platform products or features with (one of) Apache Spark, Flink or comparable tools
• Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
• Production experience with infrastructure-as-code tools such as Terraform and FluxCD
• Expert-level experience with Python; Java/Scala exposure is recommended
• Ability to write Python interfaces that provide standardized and simplified access to internal CrowdStrike tools for data scientists
• Expert-level experience with containerization frameworks
• Strong analytical and problem-solving skills, capable of working in a dynamic environment
• Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes
Bonus Points
Critical skills needed for this role: distributed systems knowledge and data/ML platform experience.
Experience with the following is desirable:
• Go
• Iceberg (highly desirable)
• Pinot or other time-series/OLAP-style databases
• Jenkins
• Parquet
• Protocol Buffers/gRPC
Benefits Of Working At CrowdStrike
• Remote-friendly and flexible work culture
• Market leader in compensation and equity awards
• Comprehensive physical and mental wellness programs
• Competitive vacation and holidays for recharge
• Paid parental and adoption leaves
• Professional development opportunities for all employees regardless of level or role
• Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections
• Vibrant office culture with world-class amenities
• Great Place to Work Certified™ across the globe
CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
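For reference, the workflow orchestration this role mentions usually takes the shape of a DAG. A toy Airflow sketch (assumes Airflow 2.4+; task names and logic are placeholders, not CrowdStrike's pipelines):

```python
# A toy Airflow DAG: data prep feeds model training, and the scheduler
# orders and retries the tasks. Task bodies are stubs for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def prepare_data():
    print("preparing features")  # placeholder for a real data-prep step

def train_model():
    print("training model")      # placeholder for a real training step

with DAG("ml_pipeline", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    prep = PythonOperator(task_id="prepare_data", python_callable=prepare_data)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    prep >> train  # train_model runs only after prepare_data succeeds
```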
Posted 23 hours ago
5.0 years
0 Lacs
chennai, tamil nadu, india
Remote
AI Engineer
Location: Chennai (Teynampet)
Experience: 5+ years (hands-on data science & AI)
Employment Type: Full-time
Domain: US Mortgage / Lending Automation
Shift: 2pm to 11pm
About the Role
We are seeking a skilled AI Engineer with deep hands-on experience in ML, NLP, LLMs, GenAI, and agentic AI to join our technology team focused on automating US mortgage loan processing workflows. This role involves designing intelligent, scalable solutions using cloud-native MLOps and advanced AI/LLM orchestration tools on Azure.
Key Responsibilities
• Design and implement ML/NLP/GenAI pipelines for automating loan origination, underwriting, and document intelligence processes.
• Develop and deploy LLM-based solutions using tools like Ollama, OpenAI, and HuggingFace, integrated via LangChain or similar frameworks.
• Build and orchestrate agentic AI systems (e.g., using LangChain Agents, AutoGen, CrewAI) to enable autonomous decision-making, task planning, and loan-processing agents.
• Fine-tune domain-specific LLMs and embeddings for intelligent document classification, summarization, and question answering.
• Develop end-to-end MLOps workflows for scalable training, testing, and monitoring of models.
• Deploy AI models and microservices using Azure (Azure ML, Functions, Blob Storage, App Services).
• Work with cross-functional teams to ensure compliance, explainability, and effectiveness of AI solutions.
• Leverage prompt engineering, RAG (Retrieval-Augmented Generation), and vector stores for contextual mortgage document workflows (a minimal retrieval sketch follows this posting).
Required Skills & Qualifications
• 5+ years of hands-on experience in machine learning, NLP, or LLM-based systems.
• Proven expertise with LLMs (e.g., OpenAI, Ollama, GPT-4, Mistral) and LangChain / AutoGen / agentic AI design.
• Proficiency in Python, scikit-learn, PyTorch/TensorFlow, spaCy, HuggingFace Transformers.
• Strong knowledge of Azure Cloud, including Azure ML, Azure Functions, Azure DevOps.
• Experience with MLOps tools like MLflow, Azure ML pipelines, or Kubeflow.
• Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate).
• Experience deploying models into production-scale environments.
Nice to Have
• Understanding of US mortgage or lending workflows (1003, 1008, bank statements, etc.).
• Experience with OCR and document intelligence tools (Azure Form Recognizer, Amazon Textract).
• Exposure to agentic AI concepts such as autonomous agents, planning, and memory chaining.
• Knowledge of privacy and compliance frameworks (HIPAA, SOC2) in AI deployments.
What We Offer
• Opportunity to lead GenAI and agentic AI initiatives for mortgage automation.
• Access to top-tier tools, frameworks, and cloud platforms.
• Remote flexibility and career growth in an innovation-first team.
• Impactful work transforming legacy loan operations through AI.
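To illustrate the retrieval step behind the RAG workflows listed above, a minimal sketch using sentence-transformers for embeddings and FAISS for vector search. The embedding model, sample chunks, and wiring are assumptions for illustration, not this team's actual stack:

```python
# Minimal RAG retrieval sketch: embed document chunks, index them in FAISS,
# and fetch the most relevant ones for a query. The retrieved context would
# then be prepended to the LLM prompt.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

chunks = [
    "Form 1003 is the Uniform Residential Loan Application.",
    "Form 1008 summarizes underwriting transmittal data.",
    "Bank statements are used to verify borrower assets.",
]

# Embed and index the chunks (normalized so inner product = cosine similarity).
vecs = encoder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(vecs.shape[1])
index.add(np.asarray(vecs, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0]]

context = "\n".join(retrieve("What is form 1003?"))
print(context)
```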
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Data Scientist at Texas Instruments, you will be a key player on the Demand Analytics team, focusing on shaping and executing demand planning and inventory buffer strategies. Working alongside a team of technical professionals, including application developers, system architects, data scientists, and data engineers, you will be responsible for solving complex business problems through innovative solutions that drive tangible business value. Your role will involve portfolio management for demand forecasting algorithms, generation of inventory buffer targets, segmentation of products, simulation/validation frameworks, and ensuring security and interoperability between capabilities.
Key Responsibilities:
- Engage strategically with stakeholder groups to align with TI's business strategy and goals
- Communicate complex technical concepts effectively to influence final business outcomes
- Collaborate with cross-functional teams to identify and prioritize actionable insights
- Build scalable and modular technology stacks using modern technologies
- Conduct simulations with various models to determine the best fit of algorithms
- Research, experiment, and implement new approaches and models in line with business strategy
- Lead data acquisition and engineering efforts
- Develop and apply machine learning, AI, and data engineering frameworks
- Write and debug code for complex development projects
- Evaluate and determine the best modeling techniques for different scenarios
Qualifications:
Minimum requirements:
- MS or PhD in a quantitative field, or equivalent practical experience
- 8+ years of professional experience in data science or related roles
- 5+ years of hands-on experience developing and deploying time series forecasting models (a toy baseline is sketched after this posting)
- Deep understanding of supply chain concepts like demand forecasting and inventory management
- Proficiency in Python and core data science libraries
- Experience taking machine learning models from prototype to production
Preferred qualifications:
- Experience with MLOps tools and platforms
- Practical experience with cloud data science platforms
- Familiarity with advanced forecasting techniques and NLP
- Strong SQL skills and experience with large-scale data warehousing solutions
About Texas Instruments:
Texas Instruments is a global semiconductor company that designs, manufactures, and sells analog and embedded processing chips for various markets. At TI, we are passionate about creating a better world by making electronics more affordable through semiconductors. Our commitment to innovation drives us to build technology that is reliable, affordable, and energy-efficient, enabling semiconductors to be used in electronics everywhere. Join TI to engineer your future and collaborate with some of the brightest minds to shape the future of electronics. Embrace diversity and inclusion at TI, where every voice is valued and contributes to our strength as an organization. If you are ready to make an impact in the field of data science, apply to join our team at Texas Instruments.
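For context, a toy time-series forecasting baseline of the kind this role builds. The ARIMA order and the synthetic demand series are arbitrary example choices, not TI's models or data:

```python
# Illustrative demand-forecasting baseline: fit an ARIMA model on synthetic
# monthly demand and project six periods ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2022-01-01", periods=36, freq="MS")  # monthly starts
rng = np.random.default_rng(7)
demand = pd.Series(100 + np.arange(36) * 2 + rng.normal(0, 5, 36), index=idx)

model = ARIMA(demand, order=(1, 1, 1)).fit()  # assumed order for the example
print(model.forecast(steps=6))                # next 6 months of projected demand
```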
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Lead Software Engineer at our company, you will be responsible for designing, building, and scaling high-performance infrastructure, with expertise in MLOps, distributed systems, and platform engineering. Your key responsibilities will include:
- Designing and developing scalable, secure, and reliable microservices using Golang and Python.
- Building and maintaining containerized environments using Docker and orchestrating them with Kubernetes.
- Implementing CI/CD pipelines with Jenkins for automated testing, deployment, and monitoring.
- Managing ML workflows with MLflow to ensure reproducibility, versioning, and deployment of machine learning models (a minimal tracking sketch follows this posting).
- Leveraging Temporal for orchestrating complex workflows and ensuring fault-tolerant execution of distributed systems.
- Working with AWS cloud services (EC2, S3, IAM, basics of networking) to deploy and manage scalable infrastructure.
- Collaborating with data science and software teams to bridge the gap between ML research and production systems.
- Ensuring system reliability and observability through monitoring, logging, and performance optimization.
- Mentoring junior engineers and leading best practices for MLOps, DevOps, and system design.
To be successful in this role, you should have:
- A minimum of 5 years of experience.
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Strong programming skills in Golang and Python.
- Hands-on experience with Kubernetes and Docker in production environments.
- Proven experience in microservices architecture and distributed systems design.
- A good understanding of AWS fundamentals (EC2, S3, IAM, networking basics).
- Experience with MLflow for ML model tracking, management, and deployment.
- Proficiency in CI/CD tools (preferably Jenkins).
- Knowledge of Temporal or similar workflow orchestration tools.
- Strong problem-solving and debugging skills in distributed systems.
- Excellent communication and leadership skills, with experience mentoring engineers.
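For reference, a minimal sketch of the MLflow experiment-tracking workflow this role describes; the experiment name, model, and metric are example assumptions, not this team's code:

```python
# Minimal MLflow tracking sketch: log params, metrics, and the model artifact
# so every run is reproducible and the model can be versioned for deployment.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-experiment")  # assumed experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")  # versioned, re-deployable artifact
```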
Posted 1 day ago
0 years
0 Lacs
india
Remote
AI/ML Engineer Internship (6 Months) – India
About SvaraAI
SvaraAI is an AI-powered outbound and revenue workspace. We bring everything reps need to go from signal → outreach → meeting in one place:
• Leads & Enrichment: consolidate contacts from sheets, forms, LinkedIn, and CRM; de-dupe and score.
• AI Personalization: context-aware first lines and replies tuned to your ICP and tone.
• Sequencing: email + LinkedIn steps with automatic follow-ups and smart throttling.
• Inbox & Analytics: shared conversations, pipeline health, deliverability, and campaign insights.
• Connectors: HubSpot/Salesforce, Google Workspace, webhooks, and export APIs.
We're replacing 5+ tools with one clean workspace, built for small, focused B2B teams that care about speed, precision, and privacy.
Role Description
We are hiring AI/ML Engineer Interns for a 6-month internship program.
Duration: 6 months
Stipend: First 3 months unpaid, next 3 months paid (performance-based, after internal assessment)
Location: Remote (India)
As an AI/ML Engineer Intern at SvaraAI, you will work on building and improving core ML-powered features that directly impact how businesses do outbound sales. You'll help design, implement, and optimize models for personalization, intent detection, scoring, and conversational AI, working closely with our engineering and product teams.
Tech Stack Required
• Languages & Libraries: Python, PyTorch/TensorFlow, scikit-learn, LightGBM, Hugging Face Transformers
• Data Handling: Pandas, NumPy, SQL, feature engineering, dataset curation/labeling
• ML/AI Concepts: NLP (classification, summarization, embeddings, sentiment), supervised/unsupervised learning, anomaly detection, heuristics + ML hybrids
• Deployment Tools: FastAPI/Flask, Docker, REST APIs, Git/GitHub, CI/CD pipelines
• Good to Have: prompt engineering, vector databases (Pinecone/Weaviate/FAISS), experiment tracking (Weights & Biases, MLflow), LLM fine-tuning
What You'll Work On (Summary)
As an AI/ML Engineer Intern at SvaraAI, you'll get hands-on experience designing and deploying ML models for personalization, lead scoring, and conversational intelligence.
• Data Foundations: data audit & labeling plan, intent features, training set prep
• Early Models: scoring v0 (rule-based), reply classifier (fine-tune), baseline heuristics (a toy classifier baseline is sketched after this posting)
• Generative AI: first-line generator prompts v1, assistive reply drafts, summarization & sentiment analysis of threads
• ML Advancements: scoring v1 (LightGBM/logit), re-rank personalization, anomaly detection in delivery
• Lead Intelligence: lead routing model v1, cold-start heuristics v2
• Prompt & Safety Systems: prompt library evaluations, guardrails for reliability
You'll collaborate with the team to ship models and AI-driven features that power personalization, engagement, and automation for real-world SaaS users.
Coding & Research Principles We Follow
• KISS (Keep It Simple, Stupid): start simple, improve iteratively.
• YAGNI (You Aren't Gonna Need It): avoid premature complexity.
• DRY (Don't Repeat Yourself): reusable pipelines & utilities.
• Reproducibility: experiments tracked, results repeatable.
• Test Early, Test Often: validate assumptions with metrics & benchmarks.
Qualifications
• Strong understanding of machine learning and NLP fundamentals.
• Hands-on experience with the Python ML ecosystem (scikit-learn, PyTorch/TensorFlow, Hugging Face).
• Familiarity with classification, regression, embeddings, and text generation models.
• Knowledge of data preprocessing, labeling, and feature engineering.
• Good problem-solving and debugging skills.
• Strong teamwork and communication abilities.
• Currently pursuing or recently completed a Bachelor's/Master's in Computer Science, AI/ML, or a related field.
• Previous ML projects, internships, or GitHub contributions are a big plus.
Hiring Process
• Application: selected candidates will be contacted via LinkedIn message with a Google Form.
• Task Round: a time-limited ML/NLP coding task will be assigned; submissions must be on time.
• Interview: shortlisted candidates will be invited to an interview with our engineering team.
• Onboarding: successful candidates will begin the internship program.
What Success Looks Like
• Building scalable, well-documented ML pipelines.
• Delivering production-ready models that improve personalization & reply rates.
• Writing clean, efficient, and maintainable code.
• Collaborating effectively with engineers & product teams.
• Learning advanced ML techniques while shipping real impact.
If you're passionate about AI/ML, NLP, and LLM-powered applications, excited to apply your skills to real-world SaaS challenges, and want to sharpen your expertise in building production ML systems, we'd love to hear from you!
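As a concrete (and deliberately tiny) baseline of the "reply classifier" milestone above: TF-IDF plus logistic regression on hand-labeled replies. The labels and example texts are invented for illustration:

```python
# Toy reply-classifier baseline: TF-IDF features + logistic regression.
# Real training data would be labeled outbound-reply threads.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

replies = [
    "Sounds interesting, can we book a call next week?",
    "Please remove me from your list.",
    "Not right now, maybe next quarter.",
    "Yes, send over the details!",
]
labels = ["positive", "unsubscribe", "deferral", "positive"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(replies, labels)

print(clf.predict(["Happy to chat, what times work?"]))  # e.g. ['positive']
```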
Posted 1 day ago
4.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Title: Data Scientist – Agentic AI & MLOps
Location: Bangalore - Hybrid (3 days work from office, 2 days from home)
About Us:
Our client delivers next-generation security analytics and operations management. They secure organisations worldwide by staying ahead of cyber threats, leveraging AI-reinforced capabilities for unparalleled protection.
Job Overview:
We're seeking a Senior Data Scientist to architect agentic AI solutions and own the full ML lifecycle, from proof-of-concept to production. You'll operationalise LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.
Key Responsibilities:
• Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
• Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response (a toy anomaly-detection sketch follows this posting).
• Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
• Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
• Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
• Automate cloud infrastructure provisioning with Terraform, and develop REST APIs and microservices containerised with Docker and Kubernetes.
• Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyse performance and optimise for cost and SLA compliance.
• Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.
Qualifications:
• Bachelor's or Master's in Computer Science, Data Science, AI or a related quantitative discipline.
• 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
• In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
• Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI or equivalent agentic frameworks.
• Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
• Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
• Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
• Infrastructure-as-code skills with Terraform.
• Experience building REST APIs and microservices, and containerization with Docker and Kubernetes.
• Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
• Understanding of cybersecurity principles, SIEM data, and incident response.
• Excellent communication skills for both technical and non-technical audiences.
Preferred Qualifications:
• AWS certifications (Solutions Architect, Developer Associate).
• Experience with Model Context Protocol (MCP) and RAG integrations.
• Familiarity with workflow orchestration tools (Apache Airflow).
• Experience with time-series analysis, anomaly detection, and machine learning.
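To illustrate the anomaly-detection piece of such a pipeline, a toy sketch flagging unusual log-derived feature vectors with an IsolationForest. The features, data, and threshold are assumptions for the example:

```python
# Toy log-anomaly detection: train an IsolationForest on "normal" traffic
# features (e.g. requests/min and error rate), then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 0.2], scale=[5, 0.05], size=(500, 2))
spikes = np.array([[200, 0.9], [180, 0.8]])  # injected anomalies

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(spikes))  # -1 marks a point the model considers anomalous
```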
Posted 1 day ago
0.0 - 3.0 years
0 - 0 Lacs
hyderabad
Work from Office
Seeking an MLOps Engineer to design, deploy, and monitor ML systems. You'll ensure models are reliable, scalable, and easy to manage, while building tools that support teams and improve workflows.
Required Candidate profile
Looking for 3+ yrs experience in DevOps/MLOps/ML/Data Engineering; strong Python, Git, CI/CD, Docker, K8s, and cloud (AWS/GCP/Azure). Plus MLflow, Kubeflow, Airflow, PySpark; bonus: Kafka, ArgoCD, Helm, Java, GPU.
Posted 1 day ago
3.0 years
30 - 40 Lacs
pune/pimpri-chinchwad area
Remote
Experience: 3+ years
Salary: INR 3000000-4000000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by DRIMCO GmbH)
(Note: this is a requirement for one of Uplers' clients, an AI-powered Industrial Bid Automation Company.)
What do you need for this opportunity?
Must-have skills: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLflow, Python programming, PyTorch, Ray, scikit-learn, TensorFlow, Apache Spark, AWS, Azure, or GCP, Docker, Kafka, Kubernetes, machine learning
AI-powered Industrial Bid Automation Company is looking for:
We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you'll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder's deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence.
🔍 Role Description
• Design, build, and optimize ML models for intelligent requirement understanding and automation.
• Develop scalable, production-grade AI pipelines and APIs.
• Own the deployment lifecycle, including model serving, monitoring, and continuous delivery.
• Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability.
• Work on large-scale data processing and real-time pipelines.
• Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments.
• Analyze and improve the efficiency and scalability of ML systems in production.
• Stay current with the latest AI/ML research and translate innovations into product enhancements.
🧠 What are we looking for
• 3+ years of experience in ML/AI engineering with shipped products.
• Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn).
• Strong software engineering practices: version control, testing, documentation.
• Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques.
• Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP).
• Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.).
• Good understanding of microservices, distributed systems and model performance optimization.
• Comfortable in a fast-paced startup environment; proactive and curious mindset.
🎯 Bonus Points:
• Experience with natural language processing, document understanding, or large language models (LLMs).
• Experience with knowledge graph technologies.
• Experience with logging/monitoring tools (e.g., Prometheus, Grafana).
• Knowledge of requirement engineering or PLM systems.
✨ What we offer:
• Attractive compensation.
• Work on impactful AI products solving real industrial challenges.
• A collaborative, agile, and supportive team culture.
• Flexible work hours and location (hybrid/remote).
How to apply for this opportunity?
Step 1: Click on Apply, and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
1.0 - 2.0 years
0 Lacs
gurugram, haryana, india
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space with over 16,700 stores. It has footprints across 31 countries and territories. Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Associate ML Ops Analyst will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.
Location: Cyber Hub, Gurugram, Haryana (5 days in office)
Job Type: Permanent, Full-Time (40 Hours)
Reports To: Senior Manager Data Science
About The Role
The incumbent will be responsible for implementing Azure data services to deliver scalable and sustainable solutions, and for building model deployment and monitoring pipelines to meet business needs.
Roles & Responsibilities
Development and Integration
• Collaborate with data scientists to deploy ML models into production environments
• Implement and maintain CI/CD pipelines for machine learning workflows
• Use version control tools (e.g., Git) and ML lifecycle management tools (e.g., MLflow) for model tracking, versioning, and management
• Design, build, and optimize application containerization and orchestration with Docker and Kubernetes and cloud platforms like AWS or Azure
Automation & Monitoring
• Automate pipelines using Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets, and Apache Airflow
• Implement model monitoring and alerting systems to track model performance, accuracy, and data drift in production environments (a minimal drift-check sketch follows this posting)
Collaboration and Communication
• Work closely with data scientists to ensure that models are production-ready
• Collaborate with Data Engineering and Tech teams to ensure infrastructure is optimized for scaling ML applications
Optimization and Scaling
• Optimize ML pipelines for performance and cost-effectiveness
Operational Excellence
• Help the Data teams leverage best practices to implement enterprise-level solutions
• Follow industry coding standards and the programming life cycle to ensure standard practices across the project
• Help define common coding standards and model-monitoring performance best practices
• Continuously evaluate the latest packages and frameworks in the ML ecosystem
• Build automated model deployment and data engineering pipelines from plain Python/PySpark code
Stakeholder Engagement
• Collaborate with data scientists, data engineers, cloud platform and application engineers to create and implement cloud policies and governance for the ML model life cycle
Job Requirements
Education & Relevant Experience
• Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
• Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)
• 1-2 years of relevant working experience in MLOps
Behavioural Skills
• Delivery excellence
• Business disposition
• Social intelligence
• Innovation and agility
Knowledge
• Core computer science concepts such as common data structures and algorithms, and OOP
• Programming languages (R, Python, PySpark, etc.)
• Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
• Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools
• Exposure to ETL tools and version control
• Experience building and maintaining CI/CD pipelines for ML models
• Understanding of machine learning, information retrieval or recommendation systems
• Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, GitLab)
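For reference, the "data drift" monitoring mentioned above can be as simple as comparing a feature's production distribution against its training distribution. A minimal sketch using a two-sample Kolmogorov-Smirnov test; the data and alerting threshold are assumed example values:

```python
# Minimal data-drift check: KS test between training-time and live feature
# distributions; a small p-value suggests the distributions have shifted.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, size=5_000)  # distribution at training time
live_feature = rng.normal(0.4, 1.0, size=5_000)   # shifted production data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:                                # assumed alerting threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}) - consider retraining")
```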
Posted 1 day ago
3.0 - 4.0 years
0 Lacs
gurugram, haryana, india
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space, it has a footprint across 31 countries and territories. Circle K India Data & Analytics team is an integral part of ACT's Global Data & Analytics Team, and the Data Scientist/Senior Data Scientist will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units.
Department: Data & Analytics
Location: Cyber Hub, Gurugram, Haryana (5 days in office)
Job Type: Permanent, Full-Time (40 Hours)
Reports To: Senior Manager Data Science & Analytics
About The Role
The incumbent will be responsible for delivering advanced analytics projects that drive business results, including interpreting business needs, selecting the appropriate methodology, data cleaning, exploratory data analysis, model building, and creation of polished deliverables.
Roles & Responsibilities
Analytics & Strategy
• Analyse large-scale structured and unstructured data; develop deep-dive analyses and machine learning models in retail, marketing, merchandising, and other areas of the business
• Utilize data mining, statistical and machine learning techniques to derive business value from store, product, operations, financial, and customer transactional data
• Apply multiple algorithms or architectures and recommend the best model with an in-depth description to evangelize data-driven business decisions
• Utilize cloud setup to extract processed data for statistical modelling and big data analysis, and visualization tools to represent large sets of time series/cross-sectional data
Operational Excellence
• Follow industry coding standards and the programming life cycle to ensure standard practices across the project
• Structure hypotheses, build thoughtful analyses, develop underlying data models and bring clarity to previously undefined problems
• Partner with Data Engineering to build, design and maintain core data infrastructure, pipelines and data workflows to automate dashboards and analyses
Stakeholder Engagement
• Work collaboratively across multiple sets of stakeholders – business functions, data engineers, data visualization experts – to deliver on project deliverables
• Articulate complex data science models to business teams and present the insights in easily understandable and innovative formats
Job Requirements
Education
• Bachelor's degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.)
• Master's degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.)
Relevant Experience
• 3-4 years for Data Scientist, with relevant working experience in a data science/advanced analytics role
Behavioural Skills
• Delivery excellence
• Business disposition
• Social intelligence
• Innovation and agility
Knowledge
• Functional analytics (supply chain analytics, marketing analytics, customer analytics, etc.)
• Statistical modelling using analytical tools (R, Python, KNIME, etc.)
• Statistics and experimental design (A/B testing, hypothesis testing, causal inference); a worked A/B example follows this posting
• Practical experience building scalable ML models, feature engineering, model evaluation metrics, and statistical inference
• Practical experience deploying models using MLOps tools and practices (e.g., MLflow, DVC, Docker, etc.)
• Strong coding proficiency in Python (Pandas, Scikit-learn, PyTorch/TensorFlow, etc.)
• Big data technologies & frameworks (AWS, Azure, GCP, Hadoop, Spark, etc.)
• Enterprise reporting systems, relational (MySQL, Microsoft SQL Server, etc.) and non-relational (MongoDB, DynamoDB) database management systems, and data engineering tools
• Business intelligence & reporting (Power BI, Tableau, Alteryx, etc.)
• Microsoft Office applications (MS Excel, etc.)
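As a worked example of the A/B testing mentioned above, a two-proportion z-test comparing conversion rates of two promotions; the counts are made up for illustration:

```python
# Toy A/B test: did variant A convert at a different rate than variant B?
from statsmodels.stats.proportion import proportions_ztest

conversions = [340, 295]   # conversions for variant A, variant B
exposures = [5_000, 5_000] # customers exposed to each variant

stat, p_value = proportions_ztest(conversions, exposures)
print(f"z={stat:.2f}, p={p_value:.3f}")  # a small p suggests the rates differ
```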
Posted 1 day ago
5.0 years
0 Lacs
gurgaon
On-site
As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity—driving our business priorities alongside meaningful social, community, and societal impact.
Are you ready to innovate and create transformative AI solutions? Blue Planet, a division of Ciena, is seeking a dynamic AI Engineer to drive advancements in AI technologies and frameworks, contributing to cutting-edge developments across diverse domains.
How You Will Contribute
• Design and develop AI agents using frameworks like LangChain, LangGraph, Langflow, and MCP servers.
• Fine-tune and optimize large language models (LLMs) such as GPT models, Llama, and others for diverse applications.
• Implement Retrieval-Augmented Generation (RAG) techniques and integrate vector databases like Qdrant and ChromaDB.
• Enhance AI agent operations with tools like Langfuse and LiteLLM, ensuring robust security and guardrails.
• Leverage cloud platforms such as AWS, Google Cloud, and Azure for scalable AI solution deployment.
• Build and manage databases using Postgres, Neo4j, and ClickHouse for efficient data handling.
• Utilize technologies like Apache Airflow, Redis, MLflow, MinIO, Apache Kedro, and PySpark for workflow optimization and data processing.
The Must-Haves
• 5+ years of experience in AI engineering with proficiency in Python.
• Expertise in AI frameworks such as LangChain and LangGraph, or similar agentic AI frameworks.
• Proven experience in fine-tuning large language models (LLMs) and implementing Retrieval-Augmented Generation (RAG) techniques.
• Strong knowledge of vector databases and AI agent security protocols.
• Familiarity with cloud platforms and database technologies.
• Experience with prompt engineering techniques and CI/CD processes.
• Background in leveraging tools like Apache Airflow, Redis, MLflow, MinIO, and Apache Kedro.
Nice-to-Haves
• Experience with PySpark for large-scale data processing.
• Knowledge of emerging AI frameworks and technologies.
• Advanced understanding of AI agent optimization and collaborative systems.
• Exposure to cloud-native tools and services for AI deployment.
• Familiarity with advanced database architectures and tools.
• Interest in contributing to open-source AI initiatives.
• Passion for driving innovation in AI technologies.
#LI-FA
Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox.
At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require.
Posted 1 day ago
3.0 years
0 Lacs
mumbai, maharashtra, india
On-site
Position Overview:
We are seeking a skilled and motivated Databricks Data Engineer to design, build, and optimize data pipelines and analytics solutions on the Databricks Lakehouse Platform. This role is ideal for candidates who have experience with big data technologies and a passion for building robust data systems.
Key Responsibilities:
• Develop and maintain scalable data pipelines using Databricks and Apache Spark (primarily with PySpark); a minimal pipeline sketch follows this posting.
• Build and manage Delta Lake tables and optimize them for analytics and machine learning workloads.
• Integrate data from diverse sources including APIs, cloud storage, databases, and streaming platforms like Kafka.
• Transform raw data into clean, structured formats using best practices in ETL/ELT.
• Implement data quality checks and ensure consistency, completeness, and accuracy of data sets.
• Collaborate with data scientists, analysts, and business stakeholders to meet data requirements for reporting and analytics.
• Automate workflows using Databricks Workflows, job clusters, and scheduling tools.
• Monitor and optimize Spark jobs for performance and cost efficiency.
• Document and maintain data models, pipeline designs, and technical workflows.
Qualifications:
Education: Bachelor's degree in Computer Science, Engineering, or a related field.
Experience:
• 3+ years of experience in data engineering or software development.
• 1+ years of hands-on experience working with Databricks and Apache Spark in production environments.
Skills:
• Proficient in PySpark and SQL.
• Solid understanding of data warehousing, ETL/ELT design, and big data best practices.
• Experience with cloud platforms such as Azure, AWS, or GCP.
• Experience with Delta Lake, Databricks Workflows, and MLflow.
• Exposure to data governance tools like Unity Catalog or Purview.
• Experience with real-time data processing (e.g., using Kafka, Structured Streaming).
• Familiarity with version control systems (e.g., Git) and CI/CD practices.
• Familiarity with data visualization tools (e.g., Power BI, Tableau) and BI integration.
Certifications: Databricks Certified Data Engineer Associate or Professional (a plus).
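For context, a minimal sketch of the kind of PySpark/Delta pipeline step this posting describes: read raw files, apply basic data-quality transforms, and write a partitioned Delta table. Paths and column names are assumptions, and it presumes a Delta-enabled Spark session (e.g., on Databricks):

```python
# Bronze-to-silver sketch: dedupe and clean raw JSON events, then persist
# them as a partitioned Delta table for downstream analytics.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

raw = spark.read.format("json").load("/mnt/raw/events/")  # hypothetical path

clean = (raw.dropDuplicates(["event_id"])                 # data-quality step
            .filter(F.col("event_id").isNotNull())
            .withColumn("event_date", F.to_date("event_ts")))

(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("/mnt/silver/events/"))                       # optimized Delta table
```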
Posted 1 day ago
5.0 years
0 Lacs
hyderabad, telangana, india
On-site
Position: AI/ML Python Engineer
Location: Kothapet, Hyderabad (hybrid; 4 days a week onsite)
Contract-to-hire, converting to full-time with the client. We're mainly looking for someone who can work onsite 4 days a week in Hyderabad.
Role Description:
• 5+ years of Python experience scripting ML workflows to deploy ML pipelines as real-time, batch, event-triggered, and edge deployments
• 4+ years of experience using AWS SageMaker to deploy ML pipelines and ML models with SageMaker Pipelines, SageMaker MLflow, SageMaker Feature Store, etc.
• 3+ years of experience developing APIs using FastAPI, Flask, Django (a minimal serving sketch follows this posting)
• 3+ years of experience with ML frameworks & tools like scikit-learn, PyTorch, XGBoost, LightGBM, MLflow
• Solid understanding of the ML lifecycle: model development, training, validation, deployment and monitoring
• Solid understanding of CI/CD pipelines specifically for ML workflows, using Bitbucket, Jenkins, Nexus, and AUTOSYS for scheduling
• Experience with ETL processes for ML pipelines with PySpark, Kafka, AWS EMR Serverless
• Good to have: experience with H2O.ai
• Good to have: experience with containerization using Docker and orchestration using Kubernetes
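For reference, a minimal sketch of the real-time model-serving API work this role covers: a FastAPI endpoint wrapping a trained model. The model file and feature schema are assumptions for illustration:

```python
# Minimal model-serving endpoint: load a persisted model and expose a
# /predict route. "model.joblib" and the flat feature vector are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical trained model artifact

class Features(BaseModel):
    values: list[float]              # assumed flat numeric feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    pred = model.predict([features.values])[0]
    return {"prediction": float(pred)}

# Run locally with: uvicorn main:app --reload   (then POST JSON to /predict)
```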
Posted 2 days ago
8.0 years
0 Lacs
india
Remote
Job Title: Data Engineer
Location: Remote
Job Type: Full-Time
Experience Required: 8+ Years
Company: BitsAtom Technologies
How to Apply: resourcing@bitsatom.com
Job Summary:
BitsAtom Technologies is seeking a skilled and experienced Data Engineer to join our growing team. The ideal candidate will bring 7+ years of experience in data engineering, with strong expertise in Azure, SSIS, Databricks, Python, and data pipeline architecture. The role involves leading technical initiatives, building scalable and reliable data systems, and working closely with cross-functional teams to support business analytics and insights.
Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines using Azure Data Factory, Databricks, PySpark, and SQL.
• Engineer data workflows integrating structured and unstructured data from Azure Data Lake, Synapse, SQL Server, and external APIs.
• Optimize data storage and compute performance in Databricks and Delta Lake environments.
• Implement data modeling and transformation logic aligned with analytics, reporting, and machine learning requirements.
• Lead end-to-end data solutioning on Azure with a focus on performance and availability.
• Collaborate with data scientists, business analysts, and stakeholders to gather requirements and deliver validated datasets.
• Develop CI/CD pipelines using Azure DevOps or GitHub Actions.
• Ensure compliance with data governance, security, and regulatory standards.
• Monitor, troubleshoot, and resolve issues in data pipelines.
• Mentor junior engineers and provide technical leadership.
Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
• 7+ years of professional experience in data engineering.
• Deep expertise in Databricks (Delta Lake, orchestration, notebooks).
• Strong experience with Azure Cloud Services (ADF, ADLS Gen2, Key Vault).
• Proficiency in Python, PySpark, and SQL.
• Strong understanding of data lakehouses, ETL/ELT architecture, and orchestration.
• Excellent communication and collaboration skills.
• Ability to translate complex technical data concepts to business language.
Preferred Qualifications:
• Experience with data mesh architecture.
• Familiarity with event-driven systems (Azure Event Hubs, Kafka).
• Exposure to MLflow, Unity Catalog, or Databricks Workflows.
• Experience with Power BI or similar BI tools.
• Familiarity with Terraform or Infrastructure-as-Code tools.
Certifications (Preferred):
• Azure Data Engineer Associate (DP-203)
• Databricks Certified Data Engineer
Posted 2 days ago
5.0 - 10.0 years
15 - 30 Lacs
hyderabad
Work from Office
We are looking for a skilled Python Architect cum Developer to lead the design of real-time, event-driven microservices and AI-integrated backend systems (a small Kafka sketch follows this posting). Must have strong expertise in Python, FastAPI, Kafka, and cloud-native AWS deployments.
Required Candidate profile
Expert in Python, FastAPI, Kafka, and AWS, with experience in AI/ML integration and event-driven microservices.
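To illustrate the event-driven pattern this role describes, a small sketch with the kafka-python client: one service publishes events to a topic and another consumes them. The topic name, broker address, and payload are assumptions for the example:

```python
# Toy event-driven flow: a producer publishes inference requests to Kafka,
# and a consumer microservice reacts to them asynchronously.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("inference-requests", {"request_id": 1, "payload": [0.2, 0.7]})
producer.flush()

consumer = KafkaConsumer(
    "inference-requests",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:        # each microservice owns its topic(s)
    print("handling event:", message.value)
    break                       # stop after one message in this demo
```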
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Scrum Master at Rabbit Digital Branding Solutions, you will play a crucial role in orchestrating end-to-end sprints across LLM development, OS architecture, and AI product delivery. Your responsibilities will include aligning timelines between model training, GPU resource allocation, chip design milestones, and app releases. You will facilitate Agile rituals with precision and energy, ensuring that a diverse team is firing on all cylinders. Additionally, you will serve as the central node between product vision, engineering velocity, and research experimentation. It will be essential for you to maintain deep visibility using tools like JIRA, Notion, or ClickUp and lead from the front on delivery. Partnering with technical and product leadership to push the boundaries of what's possible will also be a key aspect of your role. Moreover, you will inspire a team culture of speed, accountability, and high-bandwidth learning.
Key Responsibilities:
- Orchestrate end-to-end sprints across LLM development, OS architecture, and AI product delivery
- Align timelines between model training, GPU resource allocation, chip design milestones, and app releases
- Facilitate Agile rituals with precision and energy, keeping a diverse team firing on all cylinders
- Serve as the central node between product vision, engineering velocity, and research experimentation
- Maintain deep visibility using tools like JIRA, Notion, or ClickUp and lead from the front on delivery
- Partner with technical and product leadership to push the boundaries of what's possible
- Inspire a team culture of speed, accountability, and high-bandwidth learning
Qualifications Required:
- 5+ years of experience leading Agile/Scrum teams in tech-first environments
- Deep understanding of software engineering workflows (Python, Flutter) and ML pipelines
- Bonus: familiarity with hardware cycles, embedded systems, or OS-level architecture
- Strong coordination across cross-disciplinary teams, from AI researchers to hardware designers
- Exposure to LLM tools like Hugging Face, MLflow, or model tuning platforms is a plus
- Excellent communication and roadmap ownership skills
- Scrum Master certifications (CSM, PSM) preferred
- Background in CS, ECE, or an equivalent technical domain
Posted 2 days ago
7.0 - 10.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Description and Requirements
Key Responsibilities:
• Lead end-to-end transitions of AI PoCs into production environments, managing the entire process from testing to final deployment.
• Configure, install, and validate AI systems using key platforms, including VMware ESXi and vSphere for server virtualization, Linux (Ubuntu/RHEL) and Windows Server for operating system integration, and Docker and Kubernetes for containerization and orchestration of AI workloads.
• Conduct comprehensive performance benchmarking and AI inferencing tests to validate system performance in production.
• Optimize deployed AI models for accuracy, performance, and scalability to ensure they meet production-level requirements and customer expectations.
• Serve as the primary technical lead/SME for AI PoC deployment in enterprise environments, focusing on AI solutions powered by Nvidia GPUs.
• Work hands-on with Nvidia AI Enterprise and GPU-accelerated workloads, ensuring efficient deployment and model performance using frameworks such as PyTorch and TensorFlow.
• Lead technical optimizations aimed at resource efficiency, ensuring that models are deployed effectively within the customer's infrastructure.
• Ensure the readiness of customer environments to handle, maintain, and scale AI solutions post-deployment.
• Take ownership of AI project deployments, overseeing all phases from planning to final deployment and ensuring that timelines and deliverables are met.
• Collaborate with stakeholders, including cross-functional teams (e.g., Lenovo AI Application, solution architects), customers, and internal resources to coordinate deployments and deliver results on schedule.
• Implement risk management strategies and develop contingency plans to mitigate potential issues such as hardware failures, network bottlenecks, and software incompatibilities.
• Maintain ongoing, transparent communication with all relevant stakeholders, providing updates on project status and addressing any issues or changes in scope.
Experience:
• Overall experience of 7-10 years, with 2-4 years of relevant experience deploying AI/ML models and AI solutions using Nvidia GPUs in enterprise production environments.
• Demonstrated success in leading and managing complex AI infrastructure projects, including PoC transitions to production at scale.
Technical Expertise:
• Experience in Retrieval-Augmented Generation (RAG), NVIDIA AI Enterprise, NVIDIA Inference Microservices (NIMs), model management, and Kubernetes.
• Extensive experience with Nvidia AI Enterprise, GPU-accelerated workloads, and AI/ML frameworks such as PyTorch and TensorFlow.
• Proficient in deploying AI solutions across enterprise platforms, including VMware ESXi, Docker, Kubernetes, and Linux (Ubuntu/RHEL) and Windows Server environments.
• MLOps proficiency with hands-on experience using tools such as Kubeflow, MLflow, or AWS SageMaker for managing the AI model lifecycle in production.
• Strong understanding of virtualization and containerization technologies to ensure robust and scalable deployments.
Posted 2 days ago
0.0 - 4.0 years
0 Lacs
kolkata, west bengal
On-site
Role Overview:
As an AI & ML Engineer at GEOGO, you will be crafting software with a human touch. You will work in a dynamic environment where innovation and creativity are valued, shaping digital ideas and products that reach thousands of users and touch people's lives.

Key Responsibilities:
- Develop AI and machine learning solutions using Python, PyTorch, TensorFlow, and MLflow.
- Collaborate with the team to design and implement cutting-edge algorithms and models.
- Work on projects that require a deep understanding of AI and machine learning concepts.
- Stay updated on the latest trends and technologies in the AI and ML field to drive innovation.

Qualifications Required:
- Freshers and mid-senior level candidates are welcome to apply.
- Proficiency in Python, PyTorch, TensorFlow, and MLflow.
- Strong knowledge of AI and machine learning concepts and algorithms.
- Ability to work in a team environment and communicate effectively.
- Prior experience in AI and ML projects is a plus.

Join us at GEOGO and be part of a culture that values craftsmanship, open discussions, ownership, teamwork, and the "AND life" philosophy. We offer a workspace where you can be yourself and unleash your creativity to create products that people love. GEOGO is a startup recognized by the Department for Promotion of Industry and Internal Trade, Govt. of India, and we are committed to fostering innovation and excellence in the IT services and application development sector. Apply now and be part of our team of passionate makers dedicated to reaching new levels of excellence every day.
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
karnataka
On-site
As an MLOps Engineer, you will own the end-to-end lifecycle of ML models on Azure, from pipeline design through production monitoring, working closely with Data Scientists, Data Engineers, and DevOps Engineers to streamline ML delivery.

**Key Responsibilities:**
- Design, build, and maintain end-to-end MLOps pipelines for ML model training, testing, and deployment.
- Collaborate with Data Scientists to productionize ML models in Azure ML and Azure Databricks.
- Implement CI/CD pipelines for ML workflows using Azure DevOps, GitHub Actions, or Jenkins.
- Automate infrastructure provisioning using IaC tools (Terraform, ARM templates, or Bicep).
- Monitor and manage deployed models using Azure Monitor, Application Insights, and MLflow.
- Implement best practices in model versioning, model registry, experiment tracking, and artifact management.
- Ensure security, compliance, and cost optimization of ML solutions deployed on Azure.
- Work with cross-functional teams (Data Engineers, DevOps Engineers, Data Scientists) to streamline ML delivery.
- Develop monitoring and alerting for ML model drift, data drift, and performance degradation.

**Qualifications Required:**
- 5-10 years of experience in programming: Python, SQL.
- Experience with MLOps/DevOps tools: MLflow, Azure DevOps, GitHub Actions, Docker, Kubernetes (AKS).
- Proficiency in Azure services: Azure ML, Azure Databricks, Azure Data Factory, Azure Storage, Azure Functions, Azure Event Hubs.
- Experience with CI/CD pipelines for ML workflows and IaC using Terraform, ARM templates, or Bicep.
- Familiarity with data handling tools such as Azure Data Lake, Blob Storage, and Synapse Analytics.
- Strong knowledge of monitoring and logging tools like Azure Monitor, Prometheus/Grafana, and Application Insights.
- Understanding of the ML lifecycle, including data preprocessing, model training, deployment, and monitoring.

Preferred: experience with Azure Kubernetes Service (AKS) for scalable model deployment, knowledge of feature stores and distributed training frameworks, familiarity with RAG (Retrieval Augmented Generation) pipelines and LLMOps, and Azure certifications such as Azure AI Engineer Associate, Azure Data Scientist Associate, or Azure DevOps Engineer Expert.
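As a small illustration of the model-versioning and registry practices listed above, here is a sketch using MLflow's tracking API with scikit-learn; the tracking URI, experiment name, and model name are assumptions for the example (in the Azure ML setup this posting describes, the tracking URI would point at the workspace instead).

```python
# Minimal MLflow experiment-tracking and model-registry sketch.
# The tracking URI, experiment name, and model name are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumed tracking server
mlflow.set_experiment("churn-model")

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(C=0.5, max_iter=200).fit(X, y)
    mlflow.log_param("C", 0.5)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging with registered_model_name creates a new registry version on
    # every run, which is what makes rollbacks and audits tractable.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="churn-model"
    )
```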
Posted 2 days ago
10.0 years
0 Lacs
mumbai metropolitan region
On-site
CSQ426R270

At Databricks, we are on a mission to empower our customers to solve the world's toughest data problems by utilizing the Databricks Data Intelligence Platform. As a Delivery Solutions Architect (DSA), you will play an important role in this journey. You will collaborate with our sales and field engineering teams to accelerate the adoption and growth of the Databricks Platform in your customers. You will also help ensure customer success by increasing focus and technical accountability for our most complex customers, who need guidance to accelerate usage of the Databricks workloads they have already selected, helping them maximise the value they get from our platform and their return on investment.

This is a hybrid technical and commercial role. It is commercial in the sense that you will drive growth in your assigned customers and use cases by leading your customers' stakeholders, building executive relationships, orchestrating other focused/specialized teams within Databricks, and creating and driving plans and strategies for Databricks colleagues to build upon. In parallel, it is technical: you are expected to become the post-sale technical lead across all Databricks products. This requires you to use your skills and technical credibility to engage and communicate at all levels within an organisation. You will report directly to a DSA Manager within the Field Engineering organization.

The Impact You Will Have
- Engage with Solutions Architects to understand the full use case demand plan for prioritised customers
- Lead the post-technical-win account strategy and execution plan for the majority of Databricks use cases within our most strategic accounts
- Be the accountable technical leader assigned to specific use cases and customers across multiple selling teams and internal stakeholders, creating certainty from uncertainty and driving onboarding, enablement, success, go-live, and healthy consumption of the workloads where the customer has decided to consume Databricks
- Be the first contact for any technical issues or questions related to the production/go-live status of agreed-upon use cases within an account, often serving multiple use cases within the largest and most complex organizations
- Leverage Shared Services, User Education, Onboarding/Technical Services, and Support resources, escalating to expert-level technical specialists for tasks beyond your scope of activities or expertise
- Create, own, and execute a point of view on how key use cases can be accelerated into production, coordinating with Professional Services (PS) resources on the delivery of PS engagement proposals
- Navigate Databricks Product and Engineering teams for new product innovations, private previews, and upgrade needs
- Develop an execution plan that covers the activities of all customer-facing technical roles and teams across the following work streams: main use cases moving from 'win' to production; enablement and user growth; product adoption (strategy and activities to increase adoption of Databricks' Lakehouse vision); organic needs of the current investment (e.g., cloud cost control, tuning, and optimization); and executive and operational governance
- Provide internal and external updates to your Technical GM: KPI reporting on the status of usage and customer health, covering investment status, important risks, product adoption, and use case progression

What We Look For
- 10+ years of experience in which you have been accountable for technical project/program delivery within the domain of Data and AI, and can contribute to technical debate and design choices with customers
- Programming experience in Python, SQL, or Scala
- Experience in a customer-facing pre-sales, technical architecture, customer success, or consulting role
- Understanding of solution architecture related to distributed data systems
- Understanding of how to attribute business value and outcomes to specific project deliverables
- Technical program or project management, including account, stakeholder, and resource management accountability
- Experience resolving complex and important escalations with senior customer executives
- Experience conducting open-ended discovery workshops, creating strategic roadmaps, conducting business analysis, and managing the delivery of complex programmes/projects
- Track record of overachievement against quota, goals, or similar objective targets
- Bachelor's degree in Computer Science, Information Systems, Engineering, or equivalent experience through work experience
- Ability to travel up to 30% when needed

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for the performance of job duties, it is within the Employer's discretion whether to apply for a U.S. government license for such positions, and the Employer may decline to proceed with an applicant on this basis alone.
Posted 2 days ago
4.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Title: Data Scientist – Agentic AI & MLOps
Location: Bangalore - Hybrid (3 days work from office, 2 days from home)

About Us:
Our client delivers next-generation security analytics and operations management. They secure organisations worldwide by staying ahead of cyber threats, leveraging AI-reinforced capabilities for unparalleled protection.

Job Overview:
We're seeking a Senior Data Scientist to architect agentic AI solutions and own the full ML lifecycle—from proof-of-concept to production. You'll operationalise LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.

Key Responsibilities:
- Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
- Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
- Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
- Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
- Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
- Automate cloud infrastructure provisioning with Terraform; develop REST APIs and microservices containerised with Docker and Kubernetes.
- Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyse performance and optimise for cost and SLA compliance.
- Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.

Qualifications:
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related quantitative discipline.
- 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
- In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
- Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI, or equivalent agentic frameworks.
- Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
- Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
- Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
- Infrastructure as Code skills with Terraform.
- Experience building REST APIs and microservices, and containerization with Docker and Kubernetes.
- Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
- Understanding of cybersecurity principles, SIEM data, and incident response.
- Excellent communication skills for both technical and non-technical audiences.

Preferred Qualifications:
- AWS certifications (Solutions Architect, Developer Associate).
- Experience with Model Context Protocol (MCP) and RAG integrations.
- Familiarity with workflow orchestration tools (Apache Airflow).
- Experience with time series analysis, anomaly detection, and machine learning.
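Since the role calls for REST APIs and microservices around models, here is a minimal serving sketch; FastAPI is one common choice rather than anything the posting mandates, and the endpoint, payload shape, and scoring logic are illustrative placeholders.

```python
# Minimal model-serving microservice sketch (FastAPI is an assumption here;
# the endpoint, payload, and scoring logic are illustrative placeholders).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    features: list[float]

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder logic; a real service would call a model loaded at startup.
    risk = sum(req.features) / max(len(req.features), 1)
    return {"risk_score": risk}

# Run locally with: uvicorn service:app --port 8000
# then POST {"features": [0.2, 0.9, 0.4]} to http://localhost:8000/score
```

A container image wrapping an app like this is what would then be deployed to EKS/ECS and fronted by the CI/CD pipeline the posting describes.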
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
bengaluru
Work from Office
Develop and implement machine learning and deep learning models. Integrate AI solutions into applications and systems. Optimize model performance, scalability, and accuracy. Research and apply the latest AI tools and frameworks.
Posted 2 days ago
The MLflow job market in India is growing rapidly as companies across industries adopt machine learning and data science technologies. MLflow, an open-source platform for managing the machine learning lifecycle, is in high demand, and job seekers with MLflow expertise have a wealth of opportunities to build a rewarding career in this field.
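For readers new to the tool, a minimal, self-contained sketch of what MLflow experiment tracking looks like in practice follows; the experiment name, parameter, and metric values are invented for illustration, and runs land in a local ./mlruns directory by default.

```python
# Minimal MLflow experiment-tracking sketch; logs to ./mlruns by default.
# The experiment name, parameter, and metric values are illustrative.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    for epoch in range(3):
        # In real training these values would come from the training loop.
        mlflow.log_metric("loss", 1.0 / (epoch + 1), step=epoch)

# Browse the logged runs afterwards with the CLI: mlflow ui
```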
Major hubs such as Bengaluru, Pune, Chennai, Mumbai, and Kolkata are known for their thriving tech industries and have a high demand for MLflow professionals.
The average salary range for MLflow professionals in India varies based on experience:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
Salaries may vary based on factors such as location, company size, and specific job requirements.
A typical career path in MLflow may include roles such as:
1. Junior Machine Learning Engineer
2. Machine Learning Engineer
3. Senior Machine Learning Engineer
4. Tech Lead
5. Machine Learning Manager
With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.
In addition to MLflow, professionals in this field are often expected to have skills in:
- Python programming
- Data visualization
- Statistical modeling
- Deep learning frameworks (e.g., TensorFlow, PyTorch)
- Cloud computing platforms (e.g., AWS, Azure)
Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.
As you explore opportunities in the MLflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!