
722 MLflow Jobs - Page 13

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

50 Lacs

Ghaziabad, Uttar Pradesh, India

Remote


Experience: 5+ years
Salary: INR 50,00,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance managed by Precanto)
(Note: This is a requirement for one of Uplers' clients, a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

Must-have skills: async workflows, MLOps, Ray Tune, data engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

The client is looking for: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.
Job Description (Full-time)
Team: Data & ML Engineering
We're looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do:
- Build and optimize machine learning models, from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills:
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills:
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

Tech Stack:
- Languages: Python, SQL
- Frameworks & tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meet the client for the interview.
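The "async workflows for high-throughput data processing" skill named above can be illustrated with a minimal, self-contained sketch. This is not the client's actual pipeline; the record source and `transform` step are hypothetical stand-ins (a real pipeline would read from S3 and do I/O-bound work), and only the Python standard library is used:

```python
import asyncio

async def transform(record: dict) -> dict:
    # Placeholder for I/O-bound work (e.g. an S3 fetch or API call).
    await asyncio.sleep(0)
    return {**record, "processed": True}

async def process_batch(records: list[dict], concurrency: int = 8) -> list[dict]:
    # Cap the number of in-flight tasks with a semaphore so a large
    # batch does not open unbounded concurrent connections.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(rec: dict) -> dict:
        async with sem:
            return await transform(rec)

    # gather preserves input order in its result list.
    return await asyncio.gather(*(bounded(r) for r in records))

results = asyncio.run(process_batch([{"id": i} for i in range(100)]))
```

The same fan-out-with-a-semaphore shape applies whether the per-record work is an HTTP call, an object-store read, or a model-inference request.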
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

50 Lacs

Noida, Uttar Pradesh, India

Remote


Posted 1 week ago

Apply

5.0 years

50 Lacs

Noida, Uttar Pradesh, India

Remote


Posted 1 week ago

Apply

5.0 years

50 Lacs

Surat, Gujarat, India

Remote


Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

Remote


As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, with 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward.

We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team sits within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale: our processing covers various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure derived from additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse built and owned by the Data Platform team.
The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists and threat analysts to design, implement, and maintain scalable ML pipelines for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do:
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability for building ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need:
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches and with how, why, and when labeled data is created and used
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended
- Ability to write Python interfaces that give data scientists standardized, simplified access to internal CrowdStrike tools
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes
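The "write Python interfaces for data scientists" requirement can be sketched with a minimal example. Everything here is hypothetical illustration, not a real CrowdStrike or MLflow API: `ExperimentConfig` and `run_experiment` are invented names showing how a thin, standardized wrapper can hide an internal tool behind one typed entry point:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ExperimentConfig:
    # A single typed config object keeps the interface uniform for all users.
    name: str
    params: dict = field(default_factory=dict)
    tags: dict = field(default_factory=dict)

def run_experiment(config: ExperimentConfig, train_fn: Callable[..., dict]) -> dict:
    """Run train_fn with the configured params and return a result record.

    In a real platform this wrapper would also handle tracking, retries,
    and resource allocation so data scientists never touch that plumbing.
    """
    metrics = train_fn(**config.params)
    return {"experiment": config.name, "tags": config.tags, "metrics": metrics}

result = run_experiment(
    ExperimentConfig(name="baseline", params={"lr": 0.1}),
    train_fn=lambda lr: {"loss": 1.0 / (1.0 + lr)},  # toy training stub
)
```

The value of such wrappers is that platform concerns (tracking, scheduling, credentials) evolve behind a stable signature, so notebooks written against it keep working.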
Experience With The Following Is Desirable:
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike:
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions (including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs) on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.

Posted 1 week ago

Apply

5.0 years

50 Lacs

Jaipur, Rajasthan, India

Remote


Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


You will be an integral part of the Talent Acquisition team and manage all interview scheduling and related logistics. You'll work with hiring teams and the larger Talent Acquisition function to help our candidates navigate our process and provide exemplary interview experiences for all. Most importantly, you will have fun while doing it.

The Impact You Will Have:
- Coordinate phone and virtual interviews across Asia-Pacific (primarily India)
- Maintain our Applicant Tracking System (Greenhouse) and scheduling automation platform (GoodTime), ensuring data accuracy
- Partner with teams (TA Partners, Sourcers, Hiring Teams, Candidate Experience Team) to learn about, prioritize, and fulfill hiring needs
- Help implement and update recruiting processes by identifying opportunities for efficiencies
- Onboard new team members with TA leadership
- Establish relationships with candidates, hiring teams, and the greater TA organization to initiate impactful projects that refine our processes and improve our delivery

What We Look For:
- 6+ months of recruiting or campus coordination experience
- Ability to navigate internal relationships to achieve positive outcomes
- Working knowledge of applying data to make decisions
- Success with managing or supporting projects end-to-end

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


SLSQ426R161

As an Account Executive, your mission will be to help further build our India business, which is one of our fastest-growing markets in APJ. The Databricks Sales Team is driving growth through strategic and innovative partnerships with our customers, helping businesses thrive by solving the world's toughest problems with our solutions. You will be inspiring and guiding customers on their data journey, making organisations more collaborative and productive than ever before. You will play an important role in the business in India, with the opportunity to strategically build your territory in close partnership with the business leaders. Using your passion for technology and drive to build, you will help businesses all across India reach their full potential through the power of Databricks. You know how to sell innovation and change and can guide deals forward to compress decision cycles. You love understanding a product in depth and are passionate about communicating its value to customers and partners. Always prospecting for new opportunities, you will close new accounts while growing our business in existing accounts.
The Impact You Will Have:
- Prospect for new customers
- Assess your existing customers and develop a strategy to identify and engage all buying centres
- Use a solution approach to selling and create value for customers
- Identify the most viable use cases in each account to maximise Databricks' impact
- Orchestrate and work with teams to maximise the impact of the Databricks ecosystem on your territory
- Build value in all engagements to promote successful negotiations and closes
- Promote the Databricks enterprise cloud data platform
- Be customer-focused by delivering technical and business results using the Databricks Platform
- Promote teamwork

What We Look For:
- You have previously worked in an early-stage company, and you know how to navigate and be successful in a fast-growing organisation
- 5+ years of sales experience in SaaS/PaaS or big data companies
- Prior customer relationships with CIOs and important decision-makers
- Ability to simply articulate intricate cloud technologies and big data concepts
- 3+ years of experience exceeding sales quotas
- Success in closing new accounts while upselling existing accounts
- Bachelor's degree
- Job location: Mumbai

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide, including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500, rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


SLSQ325R67

As part of our rapidly expanding Enterprise business, we are looking for a BFSI Leader to scale the business in the India region. You will lead a team of professionals and be responsible for multiplying consumption, new customer acquisition, and developing the ecosystem. You will inherit a team of seasoned campaigners who are passionate about building a data ecosystem in the India region, technically knowledgeable, and eager to help customers and partners succeed. You will report to the Head of Enterprise Business, India Region.

The Impact You Will Have
- Scale a team of motivated Enterprise Account Executives to increase growth in the BFSI domain
- Inspire a culture of teamwork, leading with value, and achieving desired customer outcomes
- Develop trust-based relationships with customers and partners to ensure long-term success
- Encourage learning and an ongoing understanding of technical product details and our future product roadmap
- Lead our BFSI Enterprise growth plans, ensuring forecast accuracy and a predictable, high-growth business

What We Look For
- The desire to build a collaborative, inspired team culture
- You live our core values: customer obsession, teamwork makes the dream work, own it, and let the data decide!
- Experience (15 or more years) building a high-growth sales team serving BFSI customers
- Experience in the big data, cloud, or SaaS sales industry
- History of exceeding sales quota in similar high-growth enterprise software companies
- Understanding of value selling and structured methodologies, e.g. MEDDPICC, Challenger, Command of the Message
- Knowledge of developing the partner ecosystem to help grow strategic enterprise territories
- Success implementing strategies for usage- and booking-based sales revenue models
- Enterprise BFSI experience coupled with cloud Data & AI is most desirable

Posted 1 week ago

Apply

20.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Description
A Machine Learning (ML) Developer is an expert in applying advanced AI/ML algorithms and techniques to solve complex problems, including building, training, and deploying machine learning models. This role focuses on creating and optimizing systems to automate processes such as image classification, speech recognition, market forecasting, and large language model (LLM) fine-tuning.

Roles & Responsibilities
- Machine Learning Development: Design, build, train, and fine-tune machine learning and deep learning models, particularly in the context of large language models (LLMs).
- Data Pipeline Creation: Develop and manage efficient data pipelines for data preprocessing and feature engineering.
- MLOps Implementation: Create CI/CD pipelines to automate the deployment, monitoring, and updating of ML models in production environments.
- LLM Fine-Tuning: Fine-tune LLMs for specific applications and domains, leveraging frameworks like Hugging Face and open-source LLMs.
- Model Evaluation: Regularly evaluate model performance, accuracy, and reliability using statistical and computational techniques.
- Collaboration: Work in an Agile environment with cross-functional teams to integrate ML solutions into larger systems.
- Framework Utilization: Utilize machine learning frameworks such as TensorFlow, PyTorch, or Keras to develop scalable solutions.
- Data Management: Manage large datasets, ensure data quality, and design robust preprocessing pipelines.
- AI/ML Research: Stay updated on the latest advancements in AI/ML algorithms, tools, and techniques to implement cutting-edge solutions.
Requirements

Programming and Frameworks
- Fundamentals of SQL
- FastAPI framework
- PyTorch framework
- MMDetection framework

Parallel Processing and Optimization
- Techniques for parallel execution and data processing

Image and Data Processing
- Optical Character Recognition (OCR)
- Data processing and image manipulation

Deep Learning Concepts
- Basics of neural networks and optimizers
- Convolutional Neural Networks (CNN)
- Region-based Convolutional Neural Networks (R-CNN)

Advanced AI and LLMs
- Prompt engineering principles
- Retrieval-Augmented Generation (RAG) using the LangChain framework
- Open-source Large Language Models (LLMs)

MLOps and Deployment
- MLOps practices, including MLflow, Docker, CI/CD pipelines, and GitLab

Benefits
- 5-day work week.
- Quarterly rewards based on roadmap achievements and customer success.
- 20 yearly leaves.
- 14 national holidays off.
- Cross-team work culture.
- Career development & training programs.
- Employee referral benefits.
- Birthday/anniversary/festival celebrations.
- Compensatory off benefits.
- Paid half-day leaves on the special occasions of birthdays & anniversaries.
- Meals when working extra hours.
- Yearly day-outing activities.
- Yearly achievement awards.
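For candidates unfamiliar with the term, the Retrieval-Augmented Generation (RAG) requirement above can be sketched in a few lines of plain Python. This is a minimal illustration only: the corpus, the word-overlap retriever, and the prompt template are made-up stand-ins for what the LangChain framework and an open-source LLM would provide in practice.

```python
# Minimal RAG loop, sketched without any framework. Retrieval here is naive
# word overlap and "generation" just assembles the prompt, so the shape of
# the pipeline (retrieve -> augment -> generate) is visible end to end.

CORPUS = {
    "leave-policy": "Employees receive 20 yearly leaves and 14 national holidays.",
    "mlops": "Models are tracked with MLflow and deployed via Docker CI/CD pipelines.",
    "ocr": "Scanned documents pass through OCR before text extraction.",
}

def retrieve(query: str, corpus: dict, k: int = 1) -> list:
    """Rank documents by shared-word count with the query (stand-in for a vector store)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, corpus: dict) -> str:
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(corpus[d] for d in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How are models deployed?", CORPUS)
```

A real implementation swaps the retriever for an embedding search over a vector store and passes the built prompt to an LLM call.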

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Role: AI Engineer
Experience: 3 to 5 Years
Location: Client Office – Pune, India
Job Type: Full-Time
Department: Artificial Intelligence / Engineering
Work Mode: On-site at client location

About the Role
We are seeking a highly skilled and versatile Senior AI Engineer with 3 to 5 years of hands-on experience to join our client's team in Pune. This role focuses on designing, developing, and deploying cutting-edge AI and machine learning solutions for high-scale, high-concurrency applications where security, scalability, and performance are paramount. You will work closely with cross-functional teams, including data scientists, DevOps engineers, security specialists, and business stakeholders, to deliver robust AI solutions that drive measurable business impact in dynamic, large-scale environments.

Key Responsibilities
- Architect, develop, and deploy advanced machine learning and deep learning models across domains like NLP, computer vision, predictive analytics, or reinforcement learning, ensuring scalability and performance under high-traffic conditions.
- Preprocess, clean, and analyze large-scale structured and unstructured datasets using advanced statistical, ML, and big data techniques.
- Collaborate with data engineering and DevOps teams to integrate AI/ML models into production-grade pipelines, ensuring seamless operation under high concurrency.
- Optimize models for latency, throughput, accuracy, and resource efficiency, leveraging distributed computing and parallel processing where necessary.
- Implement robust security measures, including data encryption, secure model deployment, and adherence to compliance standards (e.g., GDPR, CCPA).
- Partner with client-side technical teams to translate complex business requirements into scalable, secure AI-driven solutions.
- Stay at the forefront of AI/ML advancements, experimenting with emerging tools, frameworks, and techniques (e.g., generative AI, federated learning, or AutoML).
- Write clean, modular, and maintainable code, along with comprehensive documentation and reports for model explainability, reproducibility, and auditability.
- Proactively monitor and maintain deployed models, ensuring reliability and performance in production environments with millions of concurrent users.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related technical field.
- 5+ years of experience building and deploying AI/ML models in production environments with high-scale traffic and concurrency.
- Advanced proficiency in Python and modern AI/ML frameworks, including TensorFlow, PyTorch, Scikit-learn, and JAX.
- Hands-on expertise in at least two of the following domains: NLP, computer vision, time-series forecasting, or generative AI.
- Deep understanding of the end-to-end ML lifecycle, including data preprocessing, feature engineering, hyperparameter tuning, model evaluation, and deployment.
- Proven experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, or Azure ML).
- Strong knowledge of containerization (Docker, Kubernetes) and RESTful API development for secure and scalable model deployment.
- Familiarity with secure coding practices, data privacy regulations, and techniques for safeguarding AI systems against adversarial attacks.
Preferred Skills
- Expertise in MLOps frameworks and tools such as MLflow, Kubeflow, or SageMaker for streamlined model lifecycle management.
- Hands-on experience with large language models (LLMs) or generative AI frameworks (e.g., Hugging Face Transformers, LangChain, or Llama).
- Proficiency in big data technologies and orchestration tools (e.g., Apache Spark, Airflow, or Kafka) for handling massive datasets and real-time pipelines.
- Experience with distributed training techniques (e.g., Horovod, Ray, or TensorFlow Distributed) for large-scale model development.
- Knowledge of CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible) for scalable and automated deployments.
- Familiarity with security frameworks and tools for AI systems, such as model hardening, differential privacy, or encrypted computation.
- Proven ability to work in global, client-facing roles, with strong communication skills to bridge technical and business teams.

Share your CV at hr.mobilefirst@gmail.com / 6355560672

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position: Technical Lead
Experience: 5+ Years
Location: Pune
Employment Type: Full-Time

We are seeking a Technical Lead with deep technical acumen, strategic thinking, and a proven ability to design, architect, and guide the implementation of secure, scalable, and innovative systems. The ideal candidate will have a strong foundation in cloud-native development, quality, data architectures, and CI/CD. You will work closely with stakeholders, engineering teams, and product managers to define architecture roadmaps and ensure technical alignment across projects.

Key Requirements

Must Have:
- 5+ years of experience in software architecture or equivalent senior technical leadership roles.
- Deep experience in cloud-native architecture with at least one major cloud (AWS, Azure, or GCP), ideally holding an Associate- or Professional-level cloud certification.
- Strong hands-on background in Java/Python, Node.js, and modern microservice frameworks (Flask, FastAPI, Celery).
- Proven ability to architect solutions involving structured and unstructured data.
- Solid knowledge of relational and non-relational databases such as PostgreSQL and MongoDB; advanced data modelling and query optimization experience.
- Familiarity with Redis for caching.
- Knowledge of Kubernetes, containerization, and CI/CD pipelines, with an emphasis on DevOps best practices.
- Exposure to infrastructure as code (Terraform, CloudFormation) and experience designing reproducible, scalable environments.
- Excellent communication and stakeholder management skills, with the ability to articulate technical vision and mentor across teams.

Optional:
- Experience architecting MLOps pipelines and monitoring stacks (ELK, Prometheus/Grafana), including tools like MLflow and Langfuse.
- Experience in GenAI frameworks (LangChain, LlamaIndex), vector databases (Milvus, ChromaDB), agentic AI, Python libraries (pandas, numpy, pyspark, etc.), and multi-component pipelines (MCP).
Preferred Qualifications:
- Experience with event-driven and serverless architecture patterns.
- Strong understanding of security, compliance, and cost optimization in large-scale cloud environments.

Benefits
- Work on cutting-edge technologies and impactful projects.
- Opportunities for career growth and development.
- Collaborative and inclusive work environment.
- Competitive salary and benefits package.
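The Redis caching requirement above boils down to the pattern sketched below: an in-process TTL cache decorator. This is an illustrative simplification only; a production service would back the same decorator with a Redis client so entries survive restarts and are shared across replicas, and the 60-second TTL is an assumed default.

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float = 60.0):
    """Cache a function's results in-process, expiring entries after ttl_seconds."""
    def decorator(fn):
        store = {}  # key -> (expiry_timestamp, value); Redis would replace this dict
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:      # fresh entry: skip the expensive call
                return hit[1]
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(ttl_seconds=60)
def expensive_lookup(key: str) -> str:
    calls.append(key)  # track how often the backing store is actually hit
    return key.upper()

expensive_lookup("a"); expensive_lookup("a"); expensive_lookup("b")
# the backing store was hit only twice: the repeat "a" was served from cache
```
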

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Working as an AI/ML Engineer at Navtech, you will:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
- Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
- Work closely with data engineers to develop scalable and reliable data pipelines.
- Experiment with different algorithms and techniques to improve model performance.
- Monitor and maintain production ML models, including retraining and model drift detection.
- Collaborate with software engineers to integrate ML models into applications and services.
- Document processes, experiments, and decisions for reproducibility and transparency.
- Stay current with the latest research and trends in machine learning and AI.

Who Are We Looking for Exactly?
- 2–4 years of hands-on experience in building and deploying ML models in real-world applications.
- Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
- Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
- Familiarity with version control (Git) and basic software engineering practices.
- Strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
- A master's (MS) or bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education in English medium throughout.

We'll REALLY love you if you:
- Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
- Have knowledge of GenAI prompting and hosting of LLMs.
- Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
- Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
- Have exposure to deep learning and neural network architectures.
- Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
- Performance review and appraisal twice a year.
- Competitive pay package with additional bonus & benefits.
- Work with US, UK & Europe based industry-renowned clients for exponential technical growth.
- Medical insurance cover for self & immediate family.
- Work with a culturally diverse team from different geographies.

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.
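As a concrete note on the "model evaluation techniques" this role asks for: precision and recall reduce to a few lines of Python. The sketch below hand-rolls them on made-up labels; in practice scikit-learn's classification_report supplies the same numbers.

```python
# Hand-rolled evaluation metrics for a binary classifier. The label vectors
# below are illustrative example data, not drawn from any real model.

def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), with 0.0 when undefined."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 3 true positives, 1 false positive, 1 false negative -> 0.75 / 0.75
p, r = precision_recall([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```
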

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site


Ready to shape the future of work? At Genpact, we don't just adapt to change — we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant - Senior Data Engineer - Databricks, Azure & Mosaic AI

Role Summary:
We are seeking a Senior Data Engineer with extensive expertise in Data & Analytics platform modernization using Databricks, Azure, and Mosaic AI. This role will focus on designing and optimizing cloud-based data architectures, leveraging AI-driven automation to enhance data pipelines, governance, and processing at scale.

Key Responsibilities:
- Architect & modernize Data & Analytics platforms using Databricks on Azure.
- Design and optimize Lakehouse architectures integrating Azure Data Lake, Databricks Delta Lake, and Synapse Analytics.
- Implement Mosaic AI for AI-driven automation, predictive analytics, and intelligent data engineering solutions.
- Lead the migration of legacy data platforms to a modern cloud-native Data & AI ecosystem.
- Develop high-performance ETL pipelines, integrating Databricks with Azure services such as Data Factory, Synapse, and Purview.
- Utilize MLflow & Mosaic AI for AI-enhanced data processing and decision-making.
- Establish data governance, security, lineage tracking, and metadata management across modern data platforms.
- Work collaboratively with business leaders, data scientists, and engineers to drive innovation.
- Stay at the forefront of emerging trends in AI-powered data engineering and modernization strategies.

Qualifications we seek in you!

Minimum Qualifications
- Experience in Data Engineering, Cloud Platforms, and AI-driven automation.
- Expertise in Databricks (Apache Spark, Delta Lake, MLflow) and Azure (Data Lake, Synapse, ADF, Purview).
- Strong experience with Mosaic AI for AI-powered data engineering and automation.
- Advanced proficiency in SQL, Python, and Scala for big data processing.
- Experience in modernizing Data & Analytics platforms, migrating from on-prem to cloud.
- Knowledge of Data Lineage, Observability, and AI-driven Data Governance frameworks.
- Familiarity with Vector Databases & Retrieval-Augmented Generation (RAG) architectures for AI-powered data analytics.
- Strong leadership, problem-solving, and stakeholder management skills.

Preferred Skills:
- Experience with Knowledge Graphs (Neo4j, TigerGraph) for data structuring.
- Exposure to Kubernetes, Terraform, and CI/CD for scalable cloud deployments.
- Background in streaming technologies (Kafka, Spark Streaming, Kinesis).

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Summary
As a Data Scientist specializing in NLP, Generative AI, and Cloud technologies, you will be responsible for designing, developing, and deploying data extraction pipelines from documents. You will use state-of-the-art machine learning models, NLP techniques, and cloud platforms to improve automation, data quality, and overall decision-making processes. This role requires strong technical expertise, creative problem-solving, and hands-on experience with cloud technologies.

Key Responsibilities
- Design and implement advanced NLP models to extract structured data from unstructured document formats (e.g., PDFs, Word, scanned images, emails, etc.).
- Leverage Generative AI techniques for data enhancement, content summarization, and document generation where necessary.
- Develop, fine-tune, and deploy machine learning models to enhance document understanding and automate data extraction processes.
- Collaborate with engineering teams to integrate NLP models into cloud-based data pipelines and workflows (AWS, Azure, or GCP).
- Build scalable and efficient data extraction workflows, ensuring high accuracy and performance of models.
- Conduct end-to-end data science activities, from data collection, cleaning, and exploration to feature engineering and model deployment.
- Ensure the security, scalability, and compliance of data processing solutions deployed in the cloud.
- Evaluate and improve existing document extraction tools and processes, suggesting innovative solutions.
- Stay updated on the latest trends in NLP, Generative AI, and Cloud technologies, applying these advancements to enhance model performance and operational efficiency.

Required Skills & Qualifications
- Minimum of 5 years of hands-on experience in Data Science, with a focus on NLP, machine learning, and AI.
- Strong proficiency in Python and libraries like SpaCy, NLTK, Hugging Face Transformers, and TensorFlow/PyTorch.
- Deep knowledge of document processing techniques, including OCR, text extraction, and document classification.
- Experience with Generative AI models (e.g., GPT, BERT) and their application in data extraction or document processing tasks.
- Expertise in cloud technologies (AWS, Azure, GCP) for building and deploying data-driven solutions.
- Proficiency in data manipulation and analysis using libraries like Pandas, NumPy, and SQL.
- Hands-on experience with model deployment frameworks and tools like Docker, Kubernetes, or MLflow.
- Familiarity with version control (Git), CI/CD processes, and Agile development practices.
- Strong problem-solving skills, with the ability to design innovative solutions for complex document extraction challenges.
- Excellent communication skills and ability to work in cross-functional teams.

Preferred Qualifications
- Master's or PhD in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Experience with Large Language Models (LLMs) and advanced NLP techniques such as transfer learning and few-shot learning.
- Familiarity with document management systems (DMS) or enterprise content management (ECM) platforms.
- Experience with deploying and scaling machine learning models in production environments.
- Understanding of data privacy regulations and secure processing of sensitive information.
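The document-extraction work this role describes has a simple rule-based baseline worth knowing. The sketch below uses regular expressions with a made-up schema (invoice number, date, total); the NLP models the posting calls for replace or augment exactly this step, handling the layouts a fixed pattern cannot.

```python
import re

# Rule-based sketch of structured-field extraction from unstructured document
# text. Field names and patterns are illustrative assumptions, not a real
# schema; an ML pipeline would generalize beyond these fixed formats.

PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*#?\s*[:\-]?\s*(\w+)", re.I),
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
    "total": re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text: str) -> dict:
    """Return the first match per field, or None if the field is absent."""
    out = {}
    for name, pat in PATTERNS.items():
        m = pat.search(text)
        out[name] = m.group(1) if m else None
    return out

doc = "Invoice #INV42 issued 2024-05-01. Amount due Total: $1,250.00"
fields = extract_fields(doc)
```

OCR output from scanned pages would feed the same function, which is why extraction accuracy depends heavily on the OCR step upstream.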

Posted 1 week ago

Apply

9.0 - 12.0 years

0 - 3 Lacs

Hyderabad

Work from Office


About the Role:
Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it's the essential code you'll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What's in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Solve problems, analyze and isolate issues.
- Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- As a team member, continuously improve the architecture.

Basic Qualifications:
- 9-12 years of experience designing/building data-intensive solutions using distributed computing.
- Proven experience in implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages: Java, Scala, Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years of experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years of experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions.
- Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
- Search technologies: querying and indexing content for Apache Solr, Elasticsearch, etc.
- Proficiency in search query languages (e.g., Lucene Query Syntax) and experience with data indexing and retrieval.
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred languages: Python, Java.
- Database technologies: MS SQL Server platform; stored procedure programming experience using Transact-SQL.
- Ability to lead, train, and mentor.
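Behind the search technologies this role names (Solr, Elasticsearch, Lucene) sits one core data structure: the inverted index. The toy version below keeps only indexing and term-frequency ranking; real engines add analyzers, BM25 weighting, and distributed shards, so treat this purely as an illustration of the idea.

```python
from collections import defaultdict

class TinyIndex:
    """Toy inverted index: term -> {doc_id: term_frequency}."""

    def __init__(self):
        self.postings = defaultdict(dict)

    def add(self, doc_id: str, text: str) -> None:
        # Naive whitespace "analyzer"; real engines tokenize, stem, and filter.
        for term in text.lower().split():
            self.postings[term][doc_id] = self.postings[term].get(doc_id, 0) + 1

    def search(self, query: str):
        """Sum per-term frequencies per document and rank descending."""
        scores = defaultdict(int)
        for term in query.lower().split():
            for doc_id, tf in self.postings.get(term, {}).items():
                scores[doc_id] += tf
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

idx = TinyIndex()
idx.add("d1", "spark streaming analytics")
idx.add("d2", "search relevance ranking with spark spark")
results = idx.search("spark ranking")  # d2 matches both terms, d1 only one
```

Relevance work in the role (LTR, embedding models) then reranks candidate lists like this one rather than replacing the index itself.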

Posted 1 week ago

Apply

8.0 - 13.0 years

30 - 40 Lacs

Bengaluru

Work from Office

Naukri logo

Senior Data Scientist
Location: Onsite, Bangalore
Experience: 8+ years

Role Overview
We are seeking a Senior Data Scientist with a strong foundation in machine learning, deep learning, and statistical modeling, and the ability to translate complex operational problems into scalable AI/ML solutions. In addition to core data science responsibilities, the role involves building production-ready backends in Python and contributing to end-to-end model lifecycle management. Exposure to computer vision is a plus, especially for industrial use cases such as identification, intrusion detection, and anomaly detection.

Key Responsibilities
- Develop, validate, and deploy machine learning and deep learning models for forecasting, classification, anomaly detection, and operational optimization
- Build backend APIs using Python (FastAPI, Flask) to serve ML/DL models in production environments
- Apply advanced computer vision models (e.g., YOLO, Faster R-CNN) to object detection, intrusion detection, and visual monitoring tasks
- Translate business problems into analytical frameworks and data science solutions
- Work with data engineering and DevOps teams to operationalize and monitor models at scale
- Collaborate with product, domain experts, and engineering teams to iterate on solution design
- Contribute to technical documentation, model explainability, and reproducibility practices

Required Skills
- Strong proficiency in Python for data science and backend development
- Experience with ML/DL libraries such as scikit-learn, TensorFlow, or PyTorch
- Solid knowledge of time-series modeling, forecasting techniques, and anomaly detection
- Experience building and deploying APIs for model serving (FastAPI, Flask)
- Familiarity with real-time data pipelines using Kafka, Spark, or similar tools
- Strong understanding of model validation, feature engineering, and performance tuning
- Ability to work with SQL and NoSQL databases, and large-scale datasets
- Good communication skills and stakeholder engagement experience

Good to Have
- Experience with ML model deployment tools (MLflow, Docker, Airflow)
- Understanding of MLOps and continuous model delivery practices
- Background in aviation, logistics, manufacturing, or other industrial domains
- Familiarity with edge deployment and optimization of vision models

Qualifications
- Master's or PhD in Data Science, Computer Science, Applied Mathematics, or a related field
- 7+ years of experience in machine learning and data science, including end-to-end deployment of models in production
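The anomaly detection skills this role asks for can be illustrated with a minimal baseline: a trailing-window z-score detector, sketched here with only the standard library (the sensor readings, window size, and threshold are illustrative assumptions, not part of this posting's actual stack).

```python
import math

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = sum(past) / window
        var = sum((x - mean) ** 2 for x in past) / window
        std = math.sqrt(var)
        if std > 0 and abs(series[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies

# Toy sensor trace: the spike at index 6 is the only anomaly
readings = [10, 10, 11, 10, 10, 10, 50, 10, 11, 10]
print(rolling_zscore_anomalies(readings))  # [6]
```

A production system would typically replace this with a seasonal or learned model, but the same fit-then-score shape carries over.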

Posted 1 week ago

Apply

100.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we’ve expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally driven technology solutions and meet the needs of today’s global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
- Package models quickly and ensure high quality at every step using model profiling and validation tools.
- Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
- Use advanced data-drift analysis to improve model performance over time.
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
- Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in the organization; promote, share, and discover models, environments, components, and datasets across teams; and reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
- Builds knowledge of the organization, processes, and customers.
- Requires knowledge and experience in own discipline; still acquiring higher-level knowledge and skills.
- Receives a moderate level of guidance and direction.
- Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
- Strong grounding in ML pipelines and a modern tech stack.
- Proven experience with MLOps on Azure, MLflow, etc.
- Experience with scripting and coding using Python.
- Working experience with container technologies (Docker, Kubernetes).
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
- Experience with relational databases (e.g., MS SQL Server) and NoSQL databases (e.g., MongoDB).
- Python and strong math skills (e.g., statistics).
- Problem-solving aptitude and excellent communication and presentation skills.
- Automating and streamlining infrastructure, build, test, and deployment processes.
- Monitoring and troubleshooting production issues and providing support to development and operations teams.
- Managing and maintaining tools and infrastructure for continuous integration and delivery.
- Managing and maintaining source control systems and branching strategies.
- Strong knowledge of Linux/Unix administration.
- Experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of networking, security, and storage.
- Understanding and practice of Agile methodologies.
- Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).
- Above-average verbal, written, and presentation skills.
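The data-drift analysis this role calls for can be sketched with the Population Stability Index (PSI), a common drift metric. This is a stdlib-only illustration of the idea, not Azure ML's built-in drift detector; the bin count and the toy samples are assumptions for the example.

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a baseline sample and a
    production sample; values above ~0.2 are often read as significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        # Share of the sample falling in bin b (hi is folded into the last bin)
        n = sum(1 for x in sample
                if lo + b * width <= x < lo + (b + 1) * width
                or (b == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # avoid log(0) for empty bins

    return sum((frac(actual, b) - frac(expected, b))
               * math.log(frac(actual, b) / frac(expected, b))
               for b in range(bins))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
same = psi(baseline, baseline)                     # no drift
shifted = psi(baseline, [x + 0.5 for x in baseline])  # distribution moved
print(same < 0.01 < shifted)  # True
```

In a managed workflow, a metric like this would be computed on a schedule against the training baseline and used to trigger retraining.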

Posted 1 week ago

Apply

100.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we’ve expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally driven technology solutions and meet the needs of today’s global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Designation: MLOps Engineer
Location: Kochi, India
Experience: 5-8 years
Qualification: B.Tech / MCA / BCA
Timings: 10 AM to 7 PM (IST)
Work Mode: Hybrid

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence.
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
- Package models quickly and ensure high quality at every step using model profiling and validation tools.
- Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes.
- Use advanced data-drift analysis to improve model performance over time.
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge.
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
- Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness.
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in the organization; promote, share, and discover models, environments, components, and datasets across teams; and reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
- Builds knowledge of the organization, processes, and customers.
- Requires knowledge and experience in own discipline; still acquiring higher-level knowledge and skills.
- Receives a moderate level of guidance and direction.
- Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
- Strong grounding in ML pipelines and a modern tech stack.
- Proven experience with MLOps on Azure, MLflow, etc.
- Experience with scripting and coding using Python and shell scripts.
- Working experience with container technologies (Docker, Kubernetes).
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
- Experience in SQL and Python, and strong math skills (e.g., statistics).
- Problem-solving aptitude and excellent communication and presentation skills.
- Automating and streamlining infrastructure, build, test, and deployment processes.
- Monitoring and troubleshooting production issues and providing support to development and operations teams.
- Managing and maintaining tools and infrastructure for continuous integration and delivery.
- Managing and maintaining source control systems and branching strategies.
- Strong skills in scripting languages like Python, Bash, or PowerShell.
- Strong knowledge of Linux/Unix administration.
- Experience with configuration management tools like Ansible, Puppet, or Chef.
- Strong understanding of networking, security, and storage.
- Understanding and practice of Agile methodologies.
- Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).

Required:
- Proficiency and experience working with relational databases and SQL scripting (MS SQL Server).
- Above-average verbal, written, and presentation skills.
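The container skills named above usually start with packaging an app into an image. A minimal Dockerfile for a Python model-serving app might look like the sketch below; the file names (`requirements.txt`, `serve.py`, `model.pkl`) and port are illustrative assumptions, not part of this posting.

```dockerfile
# Minimal image for a Python model-serving app (illustrative sketch)
FROM python:3.11-slim
WORKDIR /app

# Install pinned dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the serialized model
COPY serve.py model.pkl ./

EXPOSE 8000
CMD ["python", "serve.py"]
```

Copying dependencies before source code is the standard layer-caching trick: rebuilding after a code-only change skips the slow `pip install` step.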

Posted 1 week ago

Apply

2.0 - 7.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Naukri logo

Roles and Responsibilities
- Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation.
- Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training.
- Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs.
- Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization.
- Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production.
- Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability.
- Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices.

Qualifications Required
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant PhD or research experience in generative AI is a plus.
- Experience: 2-11 years of experience in machine learning, with 2+ years designing and implementing generative AI models or working specifically with transformer-based models.

Skills and Experience Required
- Generative AI: transformer models, GANs, VAEs, text generation, image generation
- Machine Learning: algorithms, deep learning, neural networks
- Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow
- MLOps: Docker, Kubernetes, MLflow, cloud platforms (AWS, GCP, Azure)
- Data Engineering: data preprocessing, feature engineering, data cleaning

Why you'll love working with us:
- BRING YOUR PASSION AND FUN. Corporate culture woven from highly diverse perspectives and insights.
- BALANCE WORK AND PERSONAL TIME LIKE A BOSS. Resources and flexibility to more easily integrate your work and your life.
- BECOME A CERTIFIED SMARTY PANTS. Ongoing training and development opportunities for even the most insatiable learner.
- START-UP SPIRIT (a good ten-plus years in, and we still maintain it)
- FLEXIBLE WORKING HOURS
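The transformer architectures named above are built around scaled dot-product attention. A stdlib-only sketch of that core operation is below; the toy dimensions and values are illustrative, and real implementations (PyTorch, TensorFlow) vectorize this over batches and heads.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Two identical keys -> equal weights -> the output is the mean of the values
print(attention([[1.0, 0.0]],
                [[1.0, 0.0], [1.0, 0.0]],
                [[1.0, 2.0], [3.0, 4.0]]))  # [[2.0, 3.0]]
```

Everything else in a transformer block (multiple heads, projections, feed-forward layers) wraps this one weighted-average operation.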

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Linkedin logo

What You'll Do
We are looking for experienced Machine Learning Engineers with a background in software development and a deep enthusiasm for solving complex problems. You will lead a dynamic team dedicated to designing and implementing a large language model framework to power diverse applications across Avalara. Your responsibilities will span the entire development lifecycle, including conceptualization, prototyping, and delivery of the LLM platform features. You will build core agent infrastructure, including A2A orchestration and MCP-driven tool discovery, so teams can launch secure, scalable agent workflows. You will report to the Senior Manager, Machine Learning.

What Your Responsibilities Will Be
We are looking for engineers who can think quickly and have a background in implementation. Your responsibilities will include:
- Build on top of the foundational framework for supporting Large Language Model applications at Avalara.
- Work with LLMs such as GPT, Claude, Llama, and other Bedrock models.
- Leverage best practices in software development, including Continuous Integration/Continuous Deployment (CI/CD), with appropriate functional and unit testing in place.
- Promote innovation by researching and applying the latest technologies and methodologies in machine learning and software development.
- Write, review, and maintain high-quality code that meets industry standards, contributing to the project's success.
- Lead code review sessions, ensuring good code quality and documentation.
- Mentor junior engineers, encouraging a culture of collaboration.
- Develop and debug software, preferably in Python, though familiarity with additional programming languages is valued and encouraged.

What You'll Need To Be Successful
- 6+ years of experience building Machine Learning models and deploying them in production environments as part of creating solutions to complex customer problems.
- Proficiency working in cloud computing environments (AWS, Azure, GCP), Machine Learning frameworks, and software development best practices.
- Experience working with technological innovations in AI & ML (especially GenAI) and applying them.
- Experience with design patterns and data structures.
- Good analytical, design, and debugging skills.

Technologies You Will Work With
Python, LLMs, Agents, A2A, MCP, MLflow, Docker, Kubernetes, Terraform, AWS, GitLab, Postgres, Prometheus, and Grafana.

We are the AI & ML enablement group at Avalara. This is a remote position.

How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara
We're Avalara. We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we're not slowing down until we've achieved our mission: to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, one that empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company; we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
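The MCP-style tool discovery described in this role boils down to a registry that agents can query and then invoke. A toy sketch in plain Python follows; the registry API, the `get_tax_rate` tool, and its rate table are illustrative assumptions, not MCP itself or Avalara's actual interfaces.

```python
TOOLS = {}

def tool(name, description):
    """Decorator that registers a callable so an agent can discover and invoke it."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("get_tax_rate", "Look up a sales tax rate for a region code")
def get_tax_rate(region: str) -> float:
    rates = {"CA": 0.0725, "TX": 0.0625}  # toy data, not real rates
    return rates.get(region, 0.0)

def discover():
    """What an agent sees when it lists available tools: names and descriptions."""
    return {name: meta["description"] for name, meta in TOOLS.items()}

def invoke(name, **kwargs):
    """Dispatch a tool call by name, as an agent runtime would."""
    return TOOLS[name]["fn"](**kwargs)

print(discover())
print(invoke("get_tax_rate", region="CA"))  # 0.0725
```

Real MCP servers add typed schemas, transport, and authentication around this same discover-then-invoke loop.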

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (ML Engineer)
- 3-5 years of experience as an AI/ML engineer or in a similar role.
- Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with model development and deployment processes.
- Proficiency in programming languages such as Python.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Familiarity with cloud platforms (e.g., AWS) and containerization (e.g., Docker, Kubernetes).
- Familiarity with version control systems (e.g., GitHub).
- Proficiency in data manipulation and analysis using libraries such as NumPy and Pandas.
- Good to have: knowledge of deep learning and MLOps tools (Kubeflow, MLflow, Nextflow).
- Knowledge of text analytics, NLP, and Gen AI.

Mandatory Skill Sets: Gen AI, Python
Preferred Skill Sets: Gen AI, Python
Years of Experience Required: 3-5
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Certifications: not specified
Required Skills: Python (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more}
Desired Languages: not specified
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
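The preprocessing and model-evaluation techniques this role lists can be shown without any framework. The sketch below implements a z-score feature scaler and an accuracy metric with only the standard library; scikit-learn's production equivalents are `StandardScaler` and `accuracy_score`, and the toy numbers here are illustrative.

```python
import math

def fit_standardizer(column):
    """Return (mean, std) fitted on training data, so the identical scaling
    can be reused on validation and inference data (avoiding leakage)."""
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column)) or 1.0
    return mean, std

def transform(column, mean, std):
    return [(x - mean) / std for x in column]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

train = [10.0, 12.0, 14.0, 16.0]
mean, std = fit_standardizer(train)
scaled = transform(train, mean, std)
print(round(sum(scaled), 10))                # 0.0 -> centered on zero
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

The key discipline is the fit/transform split: statistics come from the training set only, then get applied unchanged everywhere else.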

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Key Responsibilities
- Build and fine-tune Generative AI models (LLMs, diffusion models, etc.) for various applications.
- Work with agent and multi-agent frameworks to build task-specific or collaborative AI systems.
- Develop and deploy ML pipelines for training, inference, and evaluation.
- Collaborate with cross-functional teams (Product, Data Engineering, DevOps) to integrate ML models into products.
- Conduct data preprocessing, exploratory analysis, and feature engineering.
- Stay updated with state-of-the-art research in ML/GenAI and apply it to practical problems.
- Optimize models for performance, scalability, and efficiency.
- Work with APIs like OpenAI, Azure OpenAI, and others for rapid prototyping and deployment.
- Contribute to internal tools and frameworks to support ML experimentation and monitoring.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field.
- 3 to 5 years of hands-on experience in Machine Learning and/or NLP projects.
- Proficiency in Python and popular ML libraries (e.g., PyTorch, TensorFlow, Hugging Face Transformers).
- Practical experience with agent and/or multi-agent frameworks (e.g., LangGraph, CrewAI, AutoGen, AutoGPT, BabyAGI) is highly desirable.
- Experience working with LLMs (GPT, Claude, etc.).
- Familiarity with prompt engineering, RAG (Retrieval-Augmented Generation), and fine-tuning techniques.
- Strong understanding of data structures, algorithms, and ML concepts.
- Experience deploying models using tools like Docker, FastAPI, Flask, or MLflow.
- Knowledge of cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience with vector databases (e.g., pgvector, Pinecone, Weaviate).
- Knowledge of MLOps tools (e.g., MLflow, Kubeflow, Airflow).
- Publications or contributions to open-source projects in ML/GenAI.
- Familiarity with ethical AI principles and responsible AI practices.
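The RAG technique named in the requirements pairs retrieval with generation. Its retrieval half can be sketched with cosine similarity over toy embedding vectors; a real pipeline would get vectors from an embedding model and store them in a vector database like those listed, and the documents and vectors below are illustrative assumptions.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, doc["vec"]),
                    reverse=True)
    return [doc["text"] for doc in ranked[:k]]

# Toy 3-d "embeddings" standing in for model-produced vectors
corpus = [
    {"text": "Refund policy: 30 days", "vec": [0.9, 0.1, 0.0]},
    {"text": "Shipping takes 5 days",  "vec": [0.0, 0.2, 0.9]},
]
query = [1.0, 0.0, 0.1]  # embedding for "how do refunds work?"
context = retrieve(query, corpus)
prompt = f"Answer using this context: {context[0]}\nQuestion: how do refunds work?"
print(context)  # ['Refund policy: 30 days']
```

The assembled `prompt` is what gets handed to the LLM, which is the "generation" half of RAG.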

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Job Title: Python Developer
Location: Bangalore, India
Experience Level: 2-3 years
Budget: 6-7 LPA

About Us
We are a fast-growing startup revolutionizing warehouse inventory scanning. Our team is agile, curious, and deeply passionate about building reliable and scalable systems across diverse domains, from firmware and test automation to MLOps and configuration infrastructure.

Role Overview
We’re looking for a Python Engineer who thrives in dynamic environments and enjoys working across functional boundaries. You’ll contribute to mission-critical components of our platform, from developing test automation frameworks to enabling scalable MLOps pipelines and improving firmware tooling. This role offers exposure to a wide breadth of technical challenges and the opportunity to learn and grow rapidly.

What You'll Do
- Build and maintain Python tools and services across MLOps, configuration management, firmware interaction, and test automation.
- Collaborate with ML engineers, firmware developers, DevOps, and QA to streamline development and deployment workflows.
- Write clean, modular, and well-documented code with a focus on scalability and reliability.
- Own initiatives end-to-end, from problem definition to deployment and monitoring.
- Be part of architectural discussions and help shape engineering best practices in a fast-paced environment.

What We're Looking For
- 2-5 years of hands-on Python experience in production environments.
- Strong CS fundamentals and experience with Git, CI/CD, and containerization tools (e.g., Docker).
- Exposure to any of the following areas:
  - MLOps tooling (e.g., MLflow, DVC, Airflow, FastAPI)
  - Firmware scripting and diagnostics tools
  - Configuration and infrastructure-as-code (e.g., YAML, JSON, Ansible)
  - Test automation frameworks (e.g., pytest, unittest, Selenium, or hardware-in-the-loop systems)

You're a Great Fit If You Are
- Adaptable: comfortable switching between tasks and learning new domains quickly.
- A fast learner: able to pick up new tools, languages, and frameworks with minimal guidance.
- A team player: eager to collaborate with people across software, hardware, and data teams.
- Detail-oriented: careful about edge cases, logs, and testing, especially when writing infrastructure-facing code.

Interested candidates, kindly share your updated resume at anushka@adeptglobal.com
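Test automation against hardware-in-the-loop systems, as mentioned above, often needs retry logic for transiently flaky steps such as scanner timeouts. A small stdlib sketch follows; the `flaky_scan` failure simulation is purely illustrative, and frameworks like pytest offer plugins for the same purpose.

```python
import functools
import time

def retry(attempts=3, delay=0.0):
    """Re-run a flaky step up to `attempts` times before letting it fail."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of retries: surface the real failure
                    time.sleep(delay)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_scan():
    calls["n"] += 1
    if calls["n"] < 3:          # simulate two transient failures
        raise RuntimeError("scanner timeout")
    return "ok"

print(flaky_scan(), calls["n"])  # ok 3
```

Keeping the retry policy in a decorator means test bodies stay clean and the retry budget is declared in one obvious place.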

Posted 1 week ago

Apply

10.0 years

0 Lacs

India

On-site

GlassDoor logo

About Kinaxis
Elevate your career journey by embracing a new challenge with Kinaxis. We are experts in tech, but it’s really our people who give us passion to always seek ways to do things better. As such, we’re serious about your career growth and professional development, because People matter at Kinaxis. In 1984, we started out as a team of three engineers based in Ottawa, Canada. Today, we have grown to become a global organization with over 2000 employees around the world, and support 40,000+ users in over 100 countries. As a global leader in end-to-end supply chain management, we enable supply chain excellence for all industries. We are expanding our team in Chennai and around the world as we continue to innovate and revolutionize how we support our customers. Our journey in India began in 2020 and we have been growing steadily since then! Building a high-trust and high-performance culture is important to us and we are proud to be Great Place to Work® Certified™. Our state-of-the-art office, located in the World Trade Centre in Chennai, offers our growing team space for expansion and collaboration.

About the team
Location: Chennai, India
The team is responsible for applying machine learning algorithms to develop intelligent supply chains. What makes the team unique is that it performs at the intersection of technology and real business problems. You will contribute to a product that delights customers worldwide!

About the role: What you will do
If you love solving complex problems, analyzing complex datasets, finding insights from data, creating data models, and learning new technologies, this role is for you. As a senior software developer, you are passionate about shipping large-scale software systems in a fast-paced environment but can balance longer-term issues such as maintainability, scalability, and quality.
You are an experienced software engineer who is passionate about delivering software that supports and facilitates business operations of ML & AI solutions. You have a strong understanding of Cloud technologies and cloud-agnostic software architecture, and have experience troubleshooting high-scale solutions that are deployed and upgraded on a regular cadence. You have a passion for software reliability and know how to ensure user needs are met through cross-functional stakeholder understanding and engagement. You enjoy understanding both the details of the use cases that end users are performing with the solution and the architecture and implementation of the system end to end. You have a strong interest in resolving issues as well as designing effective methods for troubleshooting, preventing, and debugging problems in software systems, getting to the root cause of issues, meeting users’ needs, and influencing the product development roadmap. You are excited about finding ways to develop product capabilities and tools that increase the robustness of the user experience, reduce the cost of troubleshooting, or reduce the time required to address issues. You are fluent in Python, have experience working with distributed computing and big data frameworks, and are very knowledgeable about Kubernetes and Docker. You also have experience working with and building Machine Learning pipelines and models. You have the ability and enthusiasm to learn new technologies, whether infrastructure, language, or platform, and easily adapt to change. You excel as a team player, a quick starter, and a problem solver. You thrive in cross-functional teams, actively listening and contributing to discussions. Your expertise lies in engineering solutions for complex machine learning challenges, developing Python-based applications, containerizing apps with Docker, orchestrating container swarms in Kubernetes, and building Argo Workflows.
These efforts play a key role in creating ML software systems that deliver critical value to the business and its customers.

What we are looking for
We're looking for a highly experienced and self-driven Senior Machine Learning or Data Science Developer to join our growing team. This role requires a strong mix of data science expertise, software engineering skills, and problem-solving ability to build scalable, production-grade solutions.

Education & Experience
- Bachelor's or Master's degree in Computer Science, Software Engineering, Data Science, or a related field, or equivalent practical experience.
- 10-15 years of relevant industry experience.
- At least 5 years of hands-on experience in building and deploying Machine Learning or Data Science solutions in a production environment.

Technologies we use
- Strong programming skills in Python, with deep expertise in Pandas, NumPy, and major ML libraries (scikit-learn, XGBoost, LightGBM, etc.).
- Experience developing REST APIs using frameworks such as Flask or FastAPI.
- Solid experience working with Docker, Kubernetes, Helm, and Argo Workflows in production environments.
- Experience with CI/CD pipelines, preferably using GitHub Actions or similar tools.
- Proficiency in version control systems like Git/GitHub.
- Hands-on experience working with cloud platforms such as Azure, GCP, or AWS.
- Experience with distributed computing frameworks like Spark, and platforms like Azure Databricks or AWS EMR.

Machine Learning Expertise
- Solid understanding of the end-to-end ML lifecycle: data preparation, feature engineering, model training, hyperparameter tuning, model evaluation, and deployment.
- Experience working with structured and unstructured data, large-scale datasets, and optimizing pipelines for performance and scalability.
- Ability to write modular, testable, and maintainable code for ML workflows.
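The time-series forecasting experience mentioned in this posting can be illustrated with the simplest baseline, a trailing moving-average forecast. Production work would reach for proper forecasting models, but a baseline like this is useful for sanity-checking them; the demand numbers and window size are illustrative assumptions.

```python
def moving_average_forecast(history, window=3, horizon=2):
    """Forecast `horizon` future points; each step appends its own
    prediction so later steps can use it (a recursive forecast)."""
    series = list(history)
    forecasts = []
    for _ in range(horizon):
        pred = sum(series[-window:]) / window
        forecasts.append(pred)
        series.append(pred)
    return forecasts

demand = [100, 102, 104, 106, 108, 110]
# First step averages the last three observations: (106 + 108 + 110) / 3
print(moving_average_forecast(demand))
```

Any learned model that cannot beat a baseline of this kind on held-out data is not ready to ship, which is why such baselines belong in evaluation pipelines.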
Soft Skills:

  • Excellent written and verbal communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
  • Proven problem-solving skills and a passion for debugging and troubleshooting complex issues.
  • Strong organizational skills with an ability to manage multiple tasks/projects simultaneously.

Nice to Have:

  • Domain knowledge in Supply Chain, especially in Demand Planning, CPG, or Manufacturing industries.
  • Understanding of how business drivers (e.g., pricing, promotions, seasonality, weather patterns) influence demand.
  • Experience with time-series forecasting or predictive modeling in operational contexts.
  • Familiarity with MLflow, DVC, or other ML experiment tracking and versioning tools.
  • Exposure to ML model monitoring or model explainability frameworks.

#Intermediate #Full-time

Why join Kinaxis?

Work With Impact: Our platform directly helps companies power the world's supply chains. We see the results of what we do out in the world every day: when we see store shelves stocked, when medications are available for our loved ones, and so much more.

Work with Fortune 500 Brands: Companies across industries trust us to help them take control of their integrated business planning and digital supply chain. Some of our customers include Ford, Unilever, Yamaha, P&G, Lockheed Martin, and more.

Social Responsibility at Kinaxis: Our Diversity, Equity, and Inclusion Committee weighs in on hiring practices, talent assessment training materials, and mandatory training on unconscious bias and inclusion fundamentals. Sustainability is key to what we do, and we're committed to a net-zero operations strategy for the long term. We are involved in our communities and support causes where we can make the most impact.
People matter at Kinaxis, and these are some of the perks and benefits we created for our team:

  • Flexible vacation and Kinaxis Days (company-wide day off on the last Friday of every month)
  • Flexible work options
  • Physical and mental well-being programs
  • Regularly scheduled virtual fitness classes
  • Mentorship programs and training and career development
  • Recognition programs and referral rewards
  • Hackathons

For more information, visit the Kinaxis web site at www.kinaxis.com or the company's blog at http://blog.kinaxis.com.

Kinaxis welcomes candidates to apply to our inclusive community. We provide accommodations upon request to ensure fairness and accessibility throughout our recruitment process for all candidates, including those with specific needs or disabilities. If you require an accommodation, please reach out to us at recruitmentprograms@kinaxis.com. Please note that this contact information is strictly for accessibility requests and cannot be used to inquire about application statuses.

Kinaxis is committed to ensuring a fair and transparent recruitment process. We use artificial intelligence (AI) tools in the initial step of the recruitment process to compare submitted resumes against the job description, to identify candidates whose education, experience, and skills most closely match the requirements of the role. After the initial screening, all subsequent decisions regarding your application, including final selection, are made by our human recruitment team. AI does not make any final hiring decisions.

Posted 1 week ago


Exploring MLflow Jobs in India

The MLflow job market in India is growing rapidly as companies across industries adopt machine learning and data science technologies. MLflow, an open-source platform for managing the machine learning lifecycle (experiment tracking, model packaging, a model registry, and deployment), is in high demand in the Indian job market. Job seekers with MLflow expertise have plenty of opportunities to explore and build a rewarding career in this field.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

These cities are known for their thriving tech industries and have a high demand for MLflow professionals.

Average Salary Range

The average salary range for MLflow professionals in India varies with experience:

  • Entry-level: INR 6–8 lakhs per annum
  • Mid-level: INR 10–15 lakhs per annum
  • Experienced: INR 18–25 lakhs per annum

Salaries may vary based on factors such as location, company size, and specific job requirements.

Career Path

A typical career path in MLflow may include roles such as:

  1. Junior Machine Learning Engineer
  2. Machine Learning Engineer
  3. Senior Machine Learning Engineer
  4. Tech Lead
  5. Machine Learning Manager

With experience and expertise, professionals can progress to higher roles and take on more challenging projects in the field of machine learning.

Related Skills

In addition to MLflow, professionals in this field are often expected to have skills in:

  • Python programming
  • Data visualization
  • Statistical modeling
  • Deep learning frameworks (e.g., TensorFlow, PyTorch)
  • Cloud computing platforms (e.g., AWS, Azure)

Having a strong foundation in these related skills can further enhance a candidate's profile and career prospects.

Interview Questions

  • What is MLflow and how does it help in the machine learning lifecycle? (basic)
  • Explain the difference between MLflow Tracking, Projects, and Models. (medium)
  • How do you deploy a machine learning model using MLflow? (medium)
  • Can you explain the concept of the Model Registry in MLflow? (advanced)
  • What are the benefits of using MLflow in a machine learning project? (basic)
  • How do you manage experiments in MLflow? (medium)
  • What are some common challenges faced when using MLflow in a production environment? (advanced)
  • How can you scale MLflow for large-scale machine learning projects? (advanced)
  • Explain the concept of artifact storage in MLflow. (medium)
  • How do you compare different machine learning models using MLflow? (medium)
  • Describe a project where you successfully used MLflow to streamline the machine learning process. (advanced)
  • What are some best practices for versioning machine learning models in MLflow? (advanced)
  • How does MLflow support hyperparameter tuning in machine learning models? (medium)
  • Can you explain the role of the MLflow tracking server in a machine learning project? (medium)
  • What are some limitations of MLflow that you have encountered in your projects? (advanced)
  • How do you ensure reproducibility in machine learning experiments using MLflow? (medium)
  • Describe a situation where you had to troubleshoot an issue with MLflow and how you resolved it. (advanced)
  • How do you manage dependencies in an MLflow project? (medium)
  • What are some key metrics to track when using MLflow for machine learning experiments? (medium)
  • Explain the concept of model serving in the context of MLflow. (advanced)
  • How do you handle data drift in machine learning models deployed using MLflow? (advanced)
  • What are some security considerations to keep in mind when using MLflow in a production environment? (advanced)
  • How do you integrate MLflow with other tools in the machine learning ecosystem? (medium)
  • Describe a situation where you had to optimize a machine learning model using MLflow. (advanced)

Closing Remark

As you explore opportunities in the MLflow job market in India, remember to continuously upskill, stay updated with the latest trends in machine learning, and showcase your expertise confidently during interviews. With dedication and perseverance, you can build a successful career in this dynamic and rapidly evolving field. Good luck!
