
8 LightGBM Jobs

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 5.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


AI/ML Models: Experience with Automated Valuation Models (AVM) and real-world deployment of machine learning models
LangChain: Proficient in using LangChain for building LLM-powered applications
Data Science Toolkit: Hands-on with Pandas, NumPy, Scikit-learn, XGBoost, LightGBM, and Jupyter
Feature Engineering: Strong background in feature engineering and data intelligence extraction
Data Handling: Comfortable with structured, semi-structured, and unstructured data
Production Integration: Experience integrating models into APIs and production environments using Python-based frameworks
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
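For context, a minimal sketch of the kind of LightGBM workflow this toolkit implies: training a regressor as a simple automated-valuation-style model. The feature names and synthetic data are illustrative assumptions, not from the listing.

# Illustrative sketch only: a LightGBM regressor standing in for an
# automated-valuation-style model. Features and data are made up.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "area_sqft": rng.uniform(400, 3000, n),
    "bedrooms": rng.integers(1, 5, n),
    "age_years": rng.uniform(0, 40, n),
})
# Synthetic target: roughly proportional to area, reduced by age.
df["price"] = 5_000 * df["area_sqft"] - 20_000 * df["age_years"] + rng.normal(0, 1e5, n)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="price"), df["price"], test_size=0.2, random_state=0
)
model = LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))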

Posted 2 weeks ago

Apply

3.0 - 5.0 years

15 - 20 Lacs

Hyderabad

Work from Office


Job Summary: We are looking for a highly skilled and experienced AI/ML Developer with 3-4 years of hands-on experience to join our technology team. You will be responsible for designing, developing, and optimizing machine learning models that drive intelligent business solutions. The role involves close collaboration with cross-functional teams to deploy scalable AI systems and stay abreast of evolving trends in artificial intelligence and machine learning.

Key Responsibilities:
1. Develop and Implement AI/ML Models: Design, build, and implement AI/ML models tailored to solve specific business challenges, including but not limited to natural language processing (NLP), image recognition, recommendation systems, and predictive analytics.
2. Model Optimisation and Evaluation: Continuously improve existing models for performance, accuracy, and scalability.
3. Data Preprocessing and Feature Engineering: Collect, clean, and preprocess structured and unstructured data from various sources. Engineer relevant features to improve model performance and interpretability.
4. Collaboration and Communication: Collaborate closely with data scientists, back-end engineers, product managers, and stakeholders to align model development with business goals. Communicate technical insights clearly to both technical and non-technical stakeholders.
5. Model Deployment and Monitoring: Deploy models to production using MLOps practices and tools (e.g., MLflow, Docker, Kubernetes). Monitor live model performance, diagnose issues, and implement improvements as needed.
6. Staying Current with AI/ML Advancements: Stay informed of current research, tools, and trends in AI and machine learning. Evaluate and recommend emerging technologies to maintain innovation within the team.
7. Code Reviews and Best Practices: Participate in code reviews to ensure code quality, scalability, and adherence to best practices. Promote knowledge sharing and mentoring within the development team.

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
3-4 years of experience in machine learning, artificial intelligence, or applied data science roles.
Strong programming skills in Python (preferred) and/or R.
Proficiency in ML libraries and frameworks, including scikit-learn, XGBoost, LightGBM, TensorFlow or Keras, and PyTorch.
Skilled in data preprocessing and feature engineering using pandas, numpy, and sklearn.preprocessing.
Practical experience in deploying ML models into production environments using REST APIs and containers.
Familiarity with version control systems (e.g., Git) and containerization tools (e.g., Docker).
Experience working with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure.
Understanding of software development methodologies, especially Agile/Scrum.
Strong analytical thinking, debugging, and problem-solving skills in real-world AI/ML applications.
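For context, a minimal sketch of MLflow-based experiment tracking of the sort named under Model Deployment and Monitoring; the model, parameters, and synthetic data are illustrative assumptions, not from the listing.

# Illustrative sketch: logging a training run with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1_000, n_features=10, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

with mlflow.start_run(run_name="gbr-baseline"):
    params = {"n_estimators": 200, "learning_rate": 0.05, "max_depth": 3}
    model = GradientBoostingRegressor(**params).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("test_mse", mean_squared_error(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # stores the model artifact with the run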

Posted 3 weeks ago

Apply

1 - 3 years

4 - 8 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Key Responsibilities:
Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation.
Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL.
Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems.
Develop tools and scripts to support model monitoring, logging, and retraining workflows.
Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption.
Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring.
Contribute to the development of a feature store and standardized processes to support reproducible data science.

Required Skills & Experience:
1-3 years of hands-on experience in Python programming for data science or ML engineering tasks.
Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring.
Proficient in SQL and working with structured data from sources like Redshift, RDS, etc.
Familiarity with ETL pipelines and data transformation best practices.
Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins.
Strong analytical mindset with the ability to interpret and debug data/model issues.

Preferred Qualifications:
Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar.
Knowledge of ML lifecycle tools (e.g., MLflow, Ray).
Familiarity with cloud platforms (AWS preferred) and scalable infrastructure.
Experience with data or model versioning tools and feature engineering frameworks.
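For context, a minimal sketch of an ingest-score-persist step like the pipelines described here, using an in-memory SQLite database so it runs standalone; the table names, features, and toy model are illustrative assumptions, not from the listing.

# Illustrative sketch of a batch "ingest -> score -> persist" step.
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
from sklearn.linear_model import LogisticRegression

engine = create_engine("sqlite://")  # stand-in for an RDS/Redshift connection

# Seed a feature table (in a real pipeline this data already lives in the warehouse).
rng = np.random.default_rng(3)
pd.DataFrame({
    "customer_id": np.arange(200),
    "tenure_months": rng.integers(1, 60, 200),
    "monthly_spend": rng.uniform(10, 500, 200),
}).to_sql("customer_features", engine, index=False)

# Ingest features with SQL, score, and persist the results.
features = pd.read_sql("SELECT customer_id, tenure_months, monthly_spend FROM customer_features", engine)
X = features[["tenure_months", "monthly_spend"]]
y = (features["monthly_spend"] > 250).astype(int)   # toy target for the sketch
model = LogisticRegression().fit(X, y)               # in practice: load a pre-trained model

features["score"] = model.predict_proba(X)[:, 1]
features[["customer_id", "score"]].to_sql("model_scores", engine, index=False, if_exists="replace")
print(pd.read_sql("SELECT * FROM model_scores LIMIT 5", engine))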

Posted 1 month ago

Apply

7 - 10 years

20 - 35 Lacs

Hyderabad, Bengaluru

Hybrid


Role & responsibilities We are seeking an experienced and technically strong Machine Learning Engineer to design, implement, and operationalize ML models across Google Cloud Platform (GCP) and Microsoft Azure. The ideal candidate will have a robust foundation in machine learning algorithms, MLOps practices, and experience deploying models into scalable cloud environments. Responsibilities: Design, develop, and deploy machine learning solutions for use cases in prediction, classification, recommendation, NLP, and time series forecasting. Translate data science prototypes into production-grade, scalable models and pipelines. Implement and manage end-to-end ML pipelines using: Azure ML (Designer, SDK, Pipelines), Data Factory, and Azure Databricks Vertex AI (Pipelines, Workbench), BigQuery ML, and Dataflow Build and maintain robust MLOps workflows for versioning, retraining, monitoring, and CI/CD using tools like MLflow, Azure DevOps, and GCP Cloud Build. Optimize model performance and inference using techniques like hyperparameter tuning, feature selection, model ensembling, and model distillation. Use and maintain model registries, feature stores, and ensure reproducibility and governance. Collaborate with cloud architects, and software engineers to deliver ML-based solutions. Maintain and monitor model performance in production using Azure Monitor, Prometheus, Vertex AI Model Monitoring, etc. Document ML workflows, APIs, and system design for reusability and scalability. Primary Skills required (Must Have Expereince): 5 -7 years of experience in machine learning engineering or applied ML roles. Advanced proficiency in Python, with strong knowledge of libraries such as Scikit-learn, Pandas, NumPy, XGBoost, LightGBM, TensorFlow, PyTorch. Solid understanding of core ML concepts: supervised/unsupervised learning, cross-validation, bias-variance tradeoff, evaluation metrics (ROC-AUC, F1, MSE, etc.). Hands-on experience deploying ML models using: Azure ML (Endpoints, SDK), AKS, ACI Vertex AI (Endpoints, Workbench), Cloud Run, GKE Familiarity with cloud-native tools for storage, compute, and orchestration: Azure Blob Storage, ADLS Gen2, Azure Functions GCP Storage, BigQuery, Cloud Functions Experience with containerization and orchestration (Docker, Kubernetes, Helm). Strong understanding of CI/CD for ML, model testing, reproducibility, and rollback strategies. Experience implementing drift detection, model explainability (SHAP, LIME), and responsible AI practices.

Posted 1 month ago

Apply

3 - 7 years

20 - 35 Lacs

Pune

Hybrid


Roles and Responsibility:
As a Data and Applied Scientist, you will work with Pattern's Data Science team to curate and analyze data and apply machine learning models and statistical techniques to optimize advertising spend on ecommerce platforms.

What you'll do:
Design, build, and maintain machine learning and statistical models to optimize advertising campaigns and improve search visibility and conversion rates on ecommerce platforms.
Continuously optimize the quality of our machine learning models, especially for key metrics like search ranking, keyword bidding, CTR, and conversion rate estimation.
Conduct research to integrate new data sources, innovate in feature engineering, fine-tune algorithms, and enhance data pipelines for robust model performance.
Analyze large datasets to extract actionable insights that guide advertising decisions.
Work closely with teams across different regions (US and India), ensuring seamless collaboration and knowledge sharing.
Dedicate 20% of time to MLOps for efficient, reliable model deployment and operations.

What we're looking for:
Bachelor's or Master's in Data Science, Computer Science, Statistics, or a related field.
3-7 years of industry experience in building and deploying machine learning solutions.
Strong data manipulation and programming skills in Python and SQL, and hands-on experience with libraries such as Pandas, NumPy, Scikit-Learn, XGBoost.
Strong problem-solving skills and an ability to analyze complex data.
In-depth expertise in a range of machine learning and statistical techniques such as linear and tree-based models, along with an understanding of model evaluation metrics.
Experience with Git, AWS, Docker, and MLflow is advantageous.

Additional Pluses:
Portfolio: An active Kaggle or GitHub profile showcasing relevant projects.
Domain Knowledge: Familiarity with advertising and ecommerce concepts, which would help in tailoring models to business needs.
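For context, a minimal sketch of click-through-rate (CTR) estimation with a tree-based classifier and the usual evaluation metrics; the features and synthetic labels are illustrative assumptions, not from the listing.

# Illustrative sketch: CTR estimation with XGBoost on synthetic data.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, log_loss

rng = np.random.default_rng(7)
n = 5_000
X = pd.DataFrame({
    "bid_amount": rng.uniform(0.1, 5.0, n),
    "ad_position": rng.integers(1, 10, n),
    "keyword_relevance": rng.uniform(0, 1, n),
})
# Synthetic click labels: higher relevance and better ad position -> more clicks.
p_click = 1 / (1 + np.exp(-(2 * X["keyword_relevance"] - 0.3 * X["ad_position"])))
y = rng.binomial(1, p_click)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("ROC-AUC:", roc_auc_score(y_test, proba), "log loss:", log_loss(y_test, proba))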

Posted 3 months ago

Apply

5 - 7 years

14 - 16 Lacs

Pune, Bengaluru, Gurgaon

Work from Office


Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (Work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake, and other data file formats.
Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.
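For context, a minimal sketch of the Spark-plus-Parquet feature-building work this role describes; it writes and reads a local Parquet dataset so it runs standalone, and the paths and column names are made-up examples, not from the listing.

# Illustrative sketch: build a small user-level feature table with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-build").getOrCreate()

events = spark.createDataFrame(
    [(1, "view", "2024-01-01"), (1, "view", "2024-01-02"), (2, "click", "2024-01-01")],
    ["user_id", "event_type", "event_date"],
)
events.write.mode("overwrite").parquet("/tmp/raw_events")

features = (
    spark.read.parquet("/tmp/raw_events")
    .filter(F.col("event_type") == "view")
    .groupBy("user_id")
    .agg(F.count("*").alias("view_count"), F.max("event_date").alias("last_view_date"))
)
features.write.mode("overwrite").parquet("/tmp/user_view_features")
features.show()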

Posted 3 months ago

Apply

1 - 3 years

15 - 25 Lacs

Noida

Hybrid


Job Description

Purpose of the role
To design, develop, implement, and support mathematical, statistical, and machine learning models and analytics used in business decision-making.

Accountabilities
Design analytics and modelling solutions to complex business problems using domain expertise.
Collaborate with technology to specify any dependencies required for analytical solutions, such as data, development environments and tools.
Develop high performing, comprehensively documented analytics and modelling solutions, demonstrating their efficacy to business users and independent validation teams.
Implement analytics and models in accurate, stable, well-tested software and work with technology to operationalise them.
Provide ongoing support for the continued effectiveness of analytics and modelling solutions to users.
Demonstrate conformance to all Barclays Enterprise Risk Management Policies, particularly Model Risk Policy.
Ensure all development activities are undertaken within the defined control environment.

Analyst Expectations
Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
Requires in-depth technical knowledge and experience in their assigned area of expertise.
Thorough understanding of the underlying principles and concepts within the area of expertise.
They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
OR, for an individual contributor, they develop technical expertise in the work area, acting as an advisor where appropriate.
Will have an impact on the work of related teams within the area.
Partner with other functions and business areas.
Takes responsibility for end results of a team's operational processing and activities.
Escalate breaches of policies/procedures appropriately.
Take responsibility for embedding new policies/procedures adopted due to risk mitigation.
Advise and influence decision making within own area of expertise.
Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to.
Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct.
Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function.
Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
Make evaluative judgements based on the analysis of factual information, paying attention to detail.
Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Guide and persuade team members and communicate complex/sensitive information.
Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Additional Job Description
Join us as a Data Scientist in the Group Control Quantitative Analytics team at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.

Group Control Quantitative Analytics (GC QA) is a global organization of highly specialized data scientists working on Machine Learning model development and model management, including governance and monitoring. GC QA is led by Lee Gregory, Chief Data and Analytics Officer (CDAO) in Group Control. GC QA is responsible for developing and managing machine learning models (including governance and regular model monitoring) and providing analytical support across different areas including Fraud, Financial Crime, Controls, Security, etc. within Barclays.

The Data Scientist position provides project-specific leadership in building targeting solutions that integrate effectively into existing systems and processes while delivering strong and consistent performance. Working with the GC CDAO team, the Quantitative Analytics Data Scientist role provides expertise in project design, predictive model development, validation, monitoring, tracking and implementation.

To be successful as a Data Scientist in the Group Control Quantitative Analytics team, you should have experience with:
Coding using Python.
Machine Learning algorithms.
SQL.
Distributed computing using Spark/PySpark.
Predictive model development.
Model lifecycle and model management, including monitoring.
DevOps tools like Git/Bitbucket etc.
Project management using JIRA.

Some other highly valued skills may include:
DevOps tools such as TeamCity, Jenkins etc.
Knowledge in the Fraud and Financial Crime domain.
Knowledge of GenAI tools and how they work.
Databricks.

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills.

Location: Noida.
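For context, a minimal sketch of predictive model development with cross-validated evaluation on a synthetic, imbalanced (fraud-style) binary target; all data, parameters, and the choice of LightGBM are illustrative assumptions, not from the listing.

# Illustrative sketch: cross-validated ROC-AUC for a gradient boosting classifier.
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2_000, n_features=20, weights=[0.95, 0.05], random_state=1)

model = LGBMClassifier(n_estimators=300, learning_rate=0.05, class_weight="balanced")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print("Cross-validated ROC-AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))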

Posted 3 months ago

Apply