
629 XGBoost Jobs - Page 4

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the job portal.

7.0 years

24 Lacs

Bharūch

On-site

Role: Sr Data Scientist – Digital & Analytics
Experience: 7+ Years | Industry: Exposure to manufacturing, energy, supply chain or similar
Location: On-site @ Bharuch, Gujarat (6 days/week, Mon-Sat)
Perks: Work directly with the client; monthly remuneration for lodging
Mandatory Skills: Experience in full-scale implementation, from requirements gathering through project delivery (end to end); EDA; ML techniques (supervised and unsupervised); Python (Pandas, scikit-learn, Pyomo, XGBoost, etc.); cloud ML tooling (Azure ML, AWS SageMaker, etc.); plant control systems (DCS, SCADA, OPC UA); historian databases (PI, Aspen IP.21) and time-series data; optimization models (LP, MILP, MINLP).

We are seeking a highly capable, hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact.

Responsibilities:
1. Data Science Solution Development
• Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization.
• Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised).
• Translate physical and chemical process knowledge into mathematical features or constraints in models.
• Deploy models into production environments (on-prem or cloud) with high robustness and monitoring.
2. Team Leadership & Management
• Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols.
3. Stakeholder Engagement & Collaboration
• Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use cases.
• Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable.
• Act as a translator between manufacturing business units and technical teams to ensure alignment and impact.
4. Solution Ownership & Documentation
• Independently manage and maintain use cases through versioned model management, robust documentation, and logging.
• Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts.

Required Skills:
1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use cases in manufacturing, energy, or process industries.
2. Proven track record of end-to-end model delivery (from data prep to business value realization).
3. Master’s or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline.
4. Expertise in Python (Pandas, scikit-learn, Pyomo, XGBoost, etc.), and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.).
5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data.
6. Experience developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 6-12 months
Pay: Up to ₹200,000.00 per month
Work Location: In person
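The optimization requirement above (LP/MILP/MINLP, with Pyomo among the named tools) can be illustrated with a minimal linear program. The sketch below uses SciPy's `linprog` instead of Pyomo so it stays self-contained; the two products, their margins, and the capacity limits are invented for illustration and are not from the posting:

```python
# Toy production-planning LP: choose tonnages of two hypothetical products
# to maximize profit under shared capacity limits. linprog minimizes, so
# the profit objective is negated.
from scipy.optimize import linprog

c = [-40, -30]              # profit per tonne of products A and B (negated)
A_ub = [[1, 1],             # reactor hours:  x_A +   x_B <= 100
        [2, 1]]             # feedstock:    2*x_A +   x_B <= 150
b_ub = [100, 150]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x_a, x_b = res.x
print(f"plan: A={x_a:.0f} t, B={x_b:.0f} t, profit={-res.fun:.0f}")
# -> plan: A=50 t, B=50 t, profit=3500
```

Pyomo expresses the same model declaratively and extends to the mixed-integer (MILP) and nonlinear (MINLP) formulations the posting mentions.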

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
The IBM Infrastructure division builds Servers, Storage, Systems and Cloud Software - the building blocks for the next-generation IT infrastructure of enterprise customers and data centers. IBM Servers provide best-in-class reliability, scalability, performance, and end-to-end security to handle mission-critical workloads, and provide a seamless extension to hybrid multicloud environments. India Systems Development Lab (ISDL) is part of the worldwide IBM Infrastructure division. Established in 1996, ISDL is headquartered in Bengaluru, with a presence in Pune and Hyderabad as well. ISDL teams work across the IBM Systems stack, including processor development (Power and IBM Z), ASICs, firmware, operating systems, systems software, storage software, cloud software, performance & security engineering, system test, etc. The lab also focuses on innovation, thanks to the creative energies of its teams, and has contributed more than 400 patents in cutting-edge technologies and inventions so far. ISDL teams also ushered in new development models such as Agile, Design Thinking and DevOps.

Your Role And Responsibilities
As a Software Engineer at IBM India Systems Development Lab (IBM ISDL), you will get an opportunity to work on all phases of product development (design/development, test and support) across core Systems technologies, including operating systems, firmware, systems software, storage software & cloud software.

As a Software Developer at ISDL: You will focus on the development of IBM Systems products, interfacing with development & product management teams and end users across geos. You will analyze product requirements, determine the best course of design, implement/code the solution, and test across the entire product development life cycle. You could also work on validation and support of IBM Systems products.
You get to work with vibrant, culture-driven and technically accomplished teams working to create world-class products and deployment environments, delivering an industry-leading user experience for our customers. You will be valued for your contributions in a growing organization with broader opportunities. At ISDL, work is more than a job - it's a calling: To build. To design. To code. To invent. To collaborate. To think along with clients. To make new products/markets. Not just to do something better, but to attempt things you never thought were possible. Are you ready to lead in this new era of technology and solve some of the most challenging problems in Systems Software technologies? If so, let’s talk.

Required Technical And Professional Expertise
Required Technical Expertise: Knowledge of operating systems, OpenStack, Kubernetes, container technologies, cloud concepts, security, virtualization management, REST APIs, DevOps (Continuous Integration) and microservice architecture. Strong programming skills in C, C++, Go, Python, Ansible and shell scripting. Comfortable working with GitHub and leveraging open source tools.

AI Software Engineer: As a Software Engineer with the IBM AI on Z Solutions teams, you will get the opportunity to deliver best-in-class enterprise AI solutions on IBM Z and support IBM customers as they adopt AI technologies and solutions into their businesses, by building ethical, secure, trustworthy and sustainable AI solutions on IBM Z. You will be part of end-to-end solutions, working along with technically accomplished teams. You will work as a full stack developer, from understanding client challenges to providing solutions using AI. Required Technical Expertise: Knowledge of AI/ML/DL, Jupyter Notebooks, Linux systems, Kubernetes, container technologies, REST APIs and UI skills; strong programming skills in C, C++, R, Python and Go; well versed with the Linux platform.
Strong understanding of data science and the modern tools and techniques used to derive meaningful insights. Understanding of machine learning (ML) frameworks like scikit-learn and XGBoost. Understanding of deep learning (DL) frameworks like TensorFlow and PyTorch. Understanding of deep learning compilers (DLC). Natural language processing (NLP) skills. Understanding of different CPU architectures (little endian, big endian). Familiarity with open source databases (PostgreSQL, MongoDB, CouchDB, CockroachDB, Redis), data sources, connectors, data preparation and data flows; able to integrate, cleanse and shape data.

IBM Storage Engineer: As a Storage Engineer Intern in a Storage Development Lab, you would support the design, testing, and validation of storage solutions used in enterprise or consumer products. This role involves working closely with hardware and software development teams to evaluate storage performance, ensure data integrity, and assist in building prototypes and test environments. The engineer contributes to the development lifecycle by configuring storage systems, automating test setups, and analyzing system behavior under various workloads. This position is ideal for individuals with a foundational understanding of storage technologies and a passion for hands-on experimentation and product innovation. Preferred Technical Expertise: Practical working experience with Java, Python, Go and ReactJS; knowledge of AI/ML/DL, Jupyter Notebooks, storage systems, Kubernetes, container technologies, REST APIs and UI skills; exposure to cloud computing technologies such as Red Hat OpenShift, microservices architecture, and Kubernetes/Docker deployment.
Basic understanding of storage technologies: SAN, NAS, DAS. Familiarity with RAID levels and disk configurations. Knowledge of file systems (e.g., NTFS, ext4, ZFS). Experience with operating systems: Windows Server, Linux/Unix. Basic networking concepts: TCP/IP, DNS, DHCP. Scripting skills: Bash, PowerShell, or Python (for automation). Understanding of backup and recovery tools (e.g., Veeam, Commvault). Exposure to cloud storage: AWS S3, Azure Blob, or Google Cloud Storage.

Linux Developer: As a Linux developer, you would be involved in the design and development of advanced features in the Linux OS for the next generation of server platforms from IBM, in collaboration with the Linux community. You will collaborate with teams across hardware, firmware, and the upstream Linux kernel community to deliver these capabilities. Preferred Technical Expertise: Excellent knowledge of the C programming language. Knowledge of Linux kernel internals and implementation principles. In-depth understanding of operating systems concepts, data structures, processor architecture, and virtualization. Experience working on open-source software using tools such as git, along with the associated community participation processes.

Hardware Management Console (HMC) / Novalink Software Developer: As a Software Developer on the HMC / Novalink team, you will work on the design, development, and test of the Management Console for IBM Power Servers. You will be involved in user-centric graphical user interface development and backend development for the server and virtualization management solution, in an Agile environment.
Preferred Technical Expertise: Strong programming skills in Core Java 8 and C/C++. Web development skills in JavaScript (frameworks such as Angular.js, React.js, etc.), HTML, CSS and related technologies. Experience in developing rich HTML applications. Web UI frameworks: Vaadin, React JS, and UI styling libraries like Bootstrap/Material. Knowledge of J2EE, JSP, RESTful web services and GraphQL APIs.

AIX Developer: AIX is a proprietary Unix operating system which runs on IBM Power Servers. It is a secure, scalable, and robust open-standards-based UNIX operating system designed to meet the needs of enterprise-class infrastructure. As an AIX developer, you would be involved in the development, test or support of AIX OS features, or in open source software porting/development for the AIX OS. Preferred Technical Expertise: Strong expertise in systems programming (C, C++). Strong knowledge of operating systems concepts, data structures and algorithms. Strong knowledge of Unix/Linux internals (signals, IPC, shared memory, etc.). Expertise in developing/handling multi-threaded applications. Good knowledge in any of the following areas: user-space applications; file systems and volume management; device drivers; Unix networking and security; container technologies; linkers/loaders; virtualization; high availability & clustering products. Strong debugging and problem-solving skills.

Performance Engineer: As a Performance Engineer, you will get an opportunity to conduct experiments and analysis to identify performance aspects of operating systems and enterprise servers. You will be responsible for advancing the product roadmap by using your expertise in the Linux operating system, kernel builds, patch application, performance characterization, optimization and hardware architecture to analyze the performance of software/hardware combinations.
You will be involved in conducting experiments and analysis to identify performance challenges and uncover optimization opportunities for IBM Power virtualization and cloud management software built on OpenStack. The areas of work will be the characterization, analysis and fine-tuning of application software to help deliver optimal performance on IBM Power. Preferred Technical Expertise: Experience in C/C++ programming. Knowledge of hypervisor and virtualization concepts. Good understanding of system hardware, operating systems and systems architecture. Strong scripting skills. Good problem-solving, strong analytical and logical reasoning skills. Familiarity with server performance management and capacity planning. Familiarity with performance diagnostic methods and techniques.

Firmware Engineer: As a Firmware developer, you will be responsible for independently designing and developing components and features in IBM India Systems Development Lab. ISDL works on end-to-end design and development across the Power, Z and Storage portfolio. You would be part of the worldwide firmware development organization, designing & developing cutting-edge features on the open source OpenBMC stack (https://github.com/openbmc/) and developing the open source embedded firmware code for bringing up the next generation of enterprise Power, Z and LinuxONE Servers. You will get an opportunity to work alongside some of the best minds in the industry, forums and communities while contributing to the portfolio. Preferred Technical Expertise: Strong system architecture knowledge. Hands-on programming skills with C and C++ on Linux distros.
Experience/exposure in firmware/embedded software design & development. Strong knowledge of the Linux OS and open source development. Experience with open source tools & scripting languages: Git, Gerrit, Jenkins, Perl/Python.

Other Skills (Common For All The Positions): Strong communication, analytical, interpersonal & problem-solving skills. Ability to deliver on agreed goals and to coordinate activities in the team / collaborate with others to deliver on the team vision. Ability to work effectively in a global team environment.

Enterprise System Design Software Engineer: The Enterprise Systems Design team is keen on hiring passionate computer science and engineering graduates / Masters students who can blend their architectural knowledge and programming skills to build the complex infrastructure geared to work for hybrid cloud and AI workloads. We have several opportunities in the following areas of the system & chip development team: Processor verification engineer - develops the test infrastructure to verify the architecture and functionality of IBM server processors/SoCs or ASICs; responsible for creatively thinking through all the scenarios to test and for reporting coverage; works with design as well as other key stakeholders to identify, debug & resolve logic design issues and deliver a quality design. Processor pre-/post-silicon validation engineer - as a validation engineer, you would design and develop algorithms for post-silicon validation of next-generation IBM server processors, SoCs and ASICs. Electronic design automation - front-end & back-end tool development: the EDA tools development team is responsible for developing state-of-the-art front-end verification, simulation and formal verification tools, as well as place & route and synthesis tools and flows critical for designing & verifying high-performance hardware for IBM's next-generation Systems (IBM P and Z Systems), used in cognitive, ML, DL, and data center applications.
Required Professional And Technical Skills: Functional verification/validation of processors or ASICs. Computer architecture knowledge: processor core design specifications, instruction set architecture and logic verification. Multi-processor cache coherency, memory subsystem and IO subsystem knowledge; any of the protocols such as PCIe/CXL, DDR, Flash, Ethernet, etc. Strong C/C++ programming skills in a Unix/Linux environment required. Great scripting skills - Perl/Python/Shell. Development experience in Linux/Unix environments and with Git repositories, and a basic understanding of Continuous Integration and DevOps workflows. Understanding of Verilog/VHDL and verification coverage closure. Proven problem-solving skills and the ability to work in a team environment are a must.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Introduction
The IBM Infrastructure division builds Servers, Storage, Systems and Cloud Software - the building blocks for the next-generation IT infrastructure of enterprise customers and data centers. IBM Servers provide best-in-class reliability, scalability, performance, and end-to-end security to handle mission-critical workloads, and provide a seamless extension to hybrid multicloud environments. India Systems Development Lab (ISDL) is part of the worldwide IBM Infrastructure division. Established in 1996, ISDL is headquartered in Bengaluru, with a presence in Pune and Hyderabad as well. ISDL teams work across the IBM Systems stack, including processor development (Power and IBM Z), ASICs, firmware, operating systems, systems software, storage software, cloud software, performance & security engineering, system test, etc. The lab also focuses on innovation, thanks to the creative energies of its teams, and has contributed more than 400 patents in cutting-edge technologies and inventions so far. ISDL teams also ushered in new development models such as Agile, Design Thinking and DevOps.

Your Role And Responsibilities
As a Software Engineer at IBM India Systems Development Lab (IBM ISDL), you will get an opportunity to work on all phases of product development (design/development, test and support) across core Systems technologies, including operating systems, firmware, systems software, storage software & cloud software.

As a Software Developer at ISDL: You will focus on the development of IBM Systems products, interfacing with development & product management teams and end users across geos. You will analyze product requirements, determine the best course of design, implement/code the solution, and test across the entire product development life cycle. You could also work on validation and support of IBM Systems products.
You get to work with vibrant, culture-driven and technically accomplished teams working to create world-class products and deployment environments, delivering an industry-leading user experience for our customers. You will be valued for your contributions in a growing organization with broader opportunities. At ISDL, work is more than a job - it's a calling: To build. To design. To code. To invent. To collaborate. To think along with clients. To make new products/markets. Not just to do something better, but to attempt things you never thought were possible. Are you ready to lead in this new era of technology and solve some of the most challenging problems in Systems Software technologies? If so, let’s talk.

Required Technical And Professional Expertise
Required Technical Expertise: Knowledge of operating systems, OpenStack, Kubernetes, container technologies, cloud concepts, security, virtualization management, REST APIs, DevOps (Continuous Integration) and microservice architecture. Strong programming skills in C, C++, Go, Python, Ansible and shell scripting. Comfortable working with GitHub and leveraging open source tools.

AI Software Engineer: As a Software Engineer with the IBM AI on Z Solutions teams, you will get the opportunity to deliver best-in-class enterprise AI solutions on IBM Z and support IBM customers as they adopt AI technologies and solutions into their businesses, by building ethical, secure, trustworthy and sustainable AI solutions on IBM Z. You will be part of end-to-end solutions, working along with technically accomplished teams. You will work as a full stack developer, from understanding client challenges to providing solutions using AI. Required Technical Expertise: Knowledge of AI/ML/DL, Jupyter Notebooks, Linux systems, Kubernetes, container technologies, REST APIs and UI skills; strong programming skills in C, C++, R, Python and Go; well versed with the Linux platform.
Strong understanding of data science and the modern tools and techniques used to derive meaningful insights. Understanding of machine learning (ML) frameworks like scikit-learn and XGBoost. Understanding of deep learning (DL) frameworks like TensorFlow and PyTorch. Understanding of deep learning compilers (DLC). Natural language processing (NLP) skills. Understanding of different CPU architectures (little endian, big endian). Familiarity with open source databases (PostgreSQL, MongoDB, CouchDB, CockroachDB, Redis), data sources, connectors, data preparation and data flows; able to integrate, cleanse and shape data.

IBM Storage Engineer: As a Storage Engineer Intern in a Storage Development Lab, you would support the design, testing, and validation of storage solutions used in enterprise or consumer products. This role involves working closely with hardware and software development teams to evaluate storage performance, ensure data integrity, and assist in building prototypes and test environments. The engineer contributes to the development lifecycle by configuring storage systems, automating test setups, and analyzing system behavior under various workloads. This position is ideal for individuals with a foundational understanding of storage technologies and a passion for hands-on experimentation and product innovation. Preferred Technical Expertise: Practical working experience with Java, Python, Go and ReactJS; knowledge of AI/ML/DL, Jupyter Notebooks, storage systems, Kubernetes, container technologies, REST APIs and UI skills; exposure to cloud computing technologies such as Red Hat OpenShift, microservices architecture, and Kubernetes/Docker deployment.
Basic understanding of storage technologies: SAN, NAS, DAS. Familiarity with RAID levels and disk configurations. Knowledge of file systems (e.g., NTFS, ext4, ZFS). Experience with operating systems: Windows Server, Linux/Unix. Basic networking concepts: TCP/IP, DNS, DHCP. Scripting skills: Bash, PowerShell, or Python (for automation). Understanding of backup and recovery tools (e.g., Veeam, Commvault). Exposure to cloud storage: AWS S3, Azure Blob, or Google Cloud Storage.

Linux Developer: As a Linux developer, you would be involved in the design and development of advanced features in the Linux OS for the next generation of server platforms from IBM, in collaboration with the Linux community. You will collaborate with teams across hardware, firmware, and the upstream Linux kernel community to deliver these capabilities. Preferred Technical Expertise: Excellent knowledge of the C programming language. Knowledge of Linux kernel internals and implementation principles. In-depth understanding of operating systems concepts, data structures, processor architecture, and virtualization. Experience working on open-source software using tools such as git, along with the associated community participation processes.

Hardware Management Console (HMC) / Novalink Software Developer: As a Software Developer on the HMC / Novalink team, you will work on the design, development, and test of the Management Console for IBM Power Servers. You will be involved in user-centric graphical user interface development and backend development for the server and virtualization management solution, in an Agile environment.
Preferred Technical Expertise: Strong programming skills in Core Java 8 and C/C++. Web development skills in JavaScript (frameworks such as Angular.js, React.js, etc.), HTML, CSS and related technologies. Experience in developing rich HTML applications. Web UI frameworks: Vaadin, React JS, and UI styling libraries like Bootstrap/Material. Knowledge of J2EE, JSP, RESTful web services and GraphQL APIs.

AIX Developer: AIX is a proprietary Unix operating system which runs on IBM Power Servers. It is a secure, scalable, and robust open-standards-based UNIX operating system designed to meet the needs of enterprise-class infrastructure. As an AIX developer, you would be involved in the development, test or support of AIX OS features, or in open source software porting/development for the AIX OS. Preferred Technical Expertise: Strong expertise in systems programming (C, C++). Strong knowledge of operating systems concepts, data structures and algorithms. Strong knowledge of Unix/Linux internals (signals, IPC, shared memory, etc.). Expertise in developing/handling multi-threaded applications. Good knowledge in any of the following areas: user-space applications; file systems and volume management; device drivers; Unix networking and security; container technologies; linkers/loaders; virtualization; high availability & clustering products. Strong debugging and problem-solving skills.

Performance Engineer: As a Performance Engineer, you will get an opportunity to conduct experiments and analysis to identify performance aspects of operating systems and enterprise servers. You will be responsible for advancing the product roadmap by using your expertise in the Linux operating system, kernel builds, patch application, performance characterization, optimization and hardware architecture to analyze the performance of software/hardware combinations.
You will be involved in conducting experiments and analysis to identify performance challenges and uncover optimization opportunities for IBM Power virtualization and cloud management software built on OpenStack. The areas of work will be the characterization, analysis and fine-tuning of application software to help deliver optimal performance on IBM Power. Preferred Technical Expertise: Experience in C/C++ programming. Knowledge of hypervisor and virtualization concepts. Good understanding of system hardware, operating systems and systems architecture. Strong scripting skills. Good problem-solving, strong analytical and logical reasoning skills. Familiarity with server performance management and capacity planning. Familiarity with performance diagnostic methods and techniques.

Firmware Engineer: As a Firmware developer, you will be responsible for independently designing and developing components and features in IBM India Systems Development Lab. ISDL works on end-to-end design and development across the Power, Z and Storage portfolio. You would be part of the worldwide firmware development organization, designing & developing cutting-edge features on the open source OpenBMC stack (https://github.com/openbmc/) and developing the open source embedded firmware code for bringing up the next generation of enterprise Power, Z and LinuxONE Servers. You will get an opportunity to work alongside some of the best minds in the industry, forums and communities while contributing to the portfolio. Preferred Technical Expertise: Strong system architecture knowledge. Hands-on programming skills with C and C++ on Linux distros.
Experience/exposure in firmware/embedded software design & development. Strong knowledge of the Linux OS and open source development. Experience with open source tools & scripting languages: Git, Gerrit, Jenkins, Perl/Python.

Other Skills (Common For All The Positions): Strong communication, analytical, interpersonal & problem-solving skills. Ability to deliver on agreed goals and to coordinate activities in the team / collaborate with others to deliver on the team vision. Ability to work effectively in a global team environment.

Enterprise System Design Software Engineer: The Enterprise Systems Design team is keen on hiring passionate computer science and engineering graduates / Masters students who can blend their architectural knowledge and programming skills to build the complex infrastructure geared to work for hybrid cloud and AI workloads. We have several opportunities in the following areas of the system & chip development team: Processor verification engineer - develops the test infrastructure to verify the architecture and functionality of IBM server processors/SoCs or ASICs; responsible for creatively thinking through all the scenarios to test and for reporting coverage; works with design as well as other key stakeholders to identify, debug & resolve logic design issues and deliver a quality design. Processor pre-/post-silicon validation engineer - as a validation engineer, you would design and develop algorithms for post-silicon validation of next-generation IBM server processors, SoCs and ASICs. Electronic design automation - front-end & back-end tool development: the EDA tools development team is responsible for developing state-of-the-art front-end verification, simulation and formal verification tools, as well as place & route and synthesis tools and flows critical for designing & verifying high-performance hardware for IBM's next-generation Systems (IBM P and Z Systems), used in cognitive, ML, DL, and data center applications.
Required Professional And Technical Skills: Functional verification/validation of processors or ASICs. Computer architecture knowledge: processor core design specifications, instruction set architecture and logic verification. Multi-processor cache coherency, memory subsystem and IO subsystem knowledge; any of the protocols such as PCIe/CXL, DDR, Flash, Ethernet, etc. Strong C/C++ programming skills in a Unix/Linux environment required. Great scripting skills - Perl/Python/Shell. Development experience in Linux/Unix environments and with Git repositories, and a basic understanding of Continuous Integration and DevOps workflows. Understanding of Verilog/VHDL and verification coverage closure. Proven problem-solving skills and the ability to work in a team environment are a must.

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Vodafone Idea Limited is an Aditya Birla Group and Vodafone Group partnership. It is India’s leading telecom service provider. The Company provides pan-India voice and data services across 2G, 3G and 4G platforms. With a large spectrum portfolio to support the growing demand for data and voice, the company is committed to delivering delightful customer experiences and contributing towards creating a truly ‘Digital India’ by enabling millions of citizens to connect and build a better tomorrow. The Company is developing infrastructure to introduce newer and smarter technologies, making both retail and enterprise customers future-ready with innovative offerings, conveniently accessible through an ecosystem of digital channels as well as an extensive on-ground presence. The Company is listed on the National Stock Exchange (NSE) and the Bombay Stock Exchange (BSE) in India. We're proud to be an equal opportunity employer. At VIL, we know that diversity makes us stronger. We are committed to a collaborative, inclusive environment that encourages authenticity and fosters a sense of belonging. We strive for everyone to feel valued, connected and empowered to reach their potential and contribute their best. VIL's goal is to build and maintain a workforce that is diverse in experience and background but uniform in reflecting our Values of Passion, Boldness, Trust, Speed and Digital. Consequently, our recruiting efforts are directed towards attracting and retaining the best and brightest talent. Our endeavour is to be the first choice for prospective employees. VIL ensures equal employment opportunity without discrimination or harassment based on race, colour, religion, creed, age, sex, sex stereotype, gender, gender identity or expression, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy, veteran or military service status, genetic information, or any other characteristic protected by law.
VIL is an equal opportunity employer committed to diversifying its workforce. Role: Senior Data Scientist Job Level/Designation: M3/GM – Senior Data Scientist Function/Department: BDAA and BI/Marketing Location: Mumbai Job Purpose: Create business models to impact the revenue and ARPU of customers using Predictive Modeling, Artificial Intelligence, Machine Learning and cloud technologies while managing a team of data scientists Key Result Areas/Accountabilities: Hands-on expertise in Predictive Modeling, Artificial Intelligence, Machine Learning and cloud technologies. Interacting with key business stakeholders and identifying pain points. Building a Data Science approach and roadmap for the identified business problem statement. Assessing model accuracy through both statistical and business evaluation. Statistical modelling of data, with a thorough understanding of basic statistics such as hypothesis testing, significance analysis, probabilistic estimation, ANOVA, and variance-covariance analysis. Expertise in writing efficient, modularized and standardized code in Python and R. Experience of working with AWS tools and technologies is preferred. In-depth understanding of feature engineering and selection approaches, segmentation or stratified analyses, anomaly detection, pattern detection and data transformation approaches. Classification; Machine/Deep Learning – algorithm evaluation, dataset preparation. Excellent understanding of version control and best practices. 
A people person and a great team member with a mentoring attitude. Well versed in writing efficient SQL scripts for data acquisition and data aggregation. Experience: 2–3 years managing and mentoring a team of Data Scientists, including task allocation. Core Competencies, Knowledge, Experience: Self-starter, motivated, organized, and an excellent communicator. Must be hands-on. Persistence and the ability to think logically and independently. Quick learner, adapting to changing business needs. Must-Have Technical/Professional Qualifications: Python. Cloud Computing – AWS components (EC2, S3, EMR, etc.). Machine/Deep Learning – Logistic Regression, XGBoost, SVM, K-means, ANN, CNN, LSTM, LLMs, etc. Deep Learning frameworks: Caffe, Keras, Theano, TensorFlow, or Torch. Experience working on Big Data technologies (e.g. Hadoop MR, Hive, NoSQL, Spark, Kafka, Graph Databases, etc.). Qualification: BE/BTech in Computer Science, a Post Graduate Degree in Computer Applications, or MCA. Any certification in Machine Learning/Deep Learning, Artificial Intelligence, or Data Science. MBA or an equivalent business analytics degree. Vodafone Idea Limited (formerly Idea Cellular Limited), an Aditya Birla Group & Vodafone partnership

Posted 1 week ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

Hyderabad, Chennai

Hybrid

Salary: 30 to 45 LPA Exp: 6 to 10 years Location: Hyderabad (Hybrid) Notice: Immediate to 30 days Roles & responsibilities: 5+ years of experience with Python, ML and banking model development. Interact with the client to understand their requirements and communicate/brainstorm solutions. Model development: design, build, and implement credit risk models. Contribute to how the analytical approach is structured for the specification of analysis. Contribute insights from the conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems. 5+ years of experience with ML/Python (predictive modelling). Design, implement, test, deploy and maintain innovative data and machine learning solutions to accelerate our business. Create experiments and prototype implementations of new learning algorithms and prediction techniques. Collaborate with product managers and stakeholders to design and implement software solutions for science problems. Use machine learning best practices to ensure a high standard of quality for all of the team's deliverables. Experience working on unstructured data (text): text cleaning, TF-IDF, text vectorization. Hands-on experience with IFRS 9 models and regulations. Data Analysis: analyze large datasets to identify trends and risk factors, ensuring data quality and integrity. Statistical Analysis: utilize advanced statistical methods to build robust models, leveraging expertise in R programming. Collaboration: work closely with data scientists, business analysts, and other stakeholders to align models with business needs. Continuous Improvement: stay updated with the latest methodologies and tools in credit risk modeling and R programming.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Snapmint: India’s booming consumer market has over 300 million credit-eligible consumers, yet only 35 million actively use credit cards. At Snapmint, we are building a better alternative to credit cards that lets consumers buy now and pay later for a wide variety of products, be it shoes, clothes, fashion accessories or mobile phones. We firmly believe that an enduring financial services business must be built on the bedrock of providing honest, transparent and fair terms. Founded in 2017, today we are the leading online zero-cost EMI provider in India. We have served over 10M consumers across 2,200 cities and are doubling year on year. Our founders are serial entrepreneurs and alumni of IIT Bombay and ISB with over two decades of experience across leading organizations like Swiggy, Oyo, Maruti Suzuki and ZS Associates before successfully scaling and exiting businesses in patent analytics, ad-tech and bank-tech software services. Role Overview: We are seeking a Data Scientist with 2–3 years of experience to build and deploy machine learning models in the fintech domain. The role involves working on credit risk, fraud detection, and customer analytics using advanced statistical techniques and scalable data platforms. The ideal candidate should demonstrate strong technical proficiency and the ability to translate data insights into actionable business outcomes. Key Responsibilities: Build, deploy, and maintain machine learning models for: credit underwriting & credit scoring; fraud detection & transaction monitoring; customer lifetime value prediction; pricing & offer personalization; collections optimization. Design experiments and A/B tests to measure the effectiveness of BNPL products and features. Conduct exploratory data analysis to identify business insights and opportunities. Collaborate with data engineering to ensure robust data pipelines, feature stores, and model monitoring systems. 
Translate complex data science solutions into actionable business recommendations. Continuously improve model accuracy, fairness, and interpretability. Requirements: Bachelor’s degree in Engineering. 2–3 years of experience in data science or machine learning (fintech or financial services is a plus). Strong programming skills in Python and SQL; experience in Java is a plus. Experience with ML libraries (scikit-learn, XGBoost, LightGBM, TensorFlow, PyTorch, etc.). Familiarity with risk modeling, credit scoring techniques, and time series forecasting. Understanding of statistical techniques like hypothesis testing, regression, clustering, survival analysis, etc. Experience working with large-scale data in cloud platforms (AWS, GCP, Azure). Strong communication skills with the ability to simplify complex concepts for business stakeholders. Location: Bangalore Working days: Monday - Friday
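The A/B-testing responsibility in this role can be illustrated with a minimal significance check for a conversion-rate experiment, using a standard two-proportion z-test; the counts below are made-up and only the Python standard library is used:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference of two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail
    return z, p_value

# Hypothetical experiment: control converts 120/1000, BNPL variant 150/1000
z, p = two_proportion_ztest(120, 1000, 150, 1000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

In practice one would fix the sample size and significance level before the experiment runs rather than peeking at interim results.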

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Competitive Salary, PF and Gratuity About Our Client Our client is an international professional services brand of firms, operating as partnerships under the brand. It is the second-largest professional services network in the world Job Description Position: ML Engineer Job type: Techno-Functional Preferred education qualifications: Bachelor's/Master's degree in Computer Science, Data Science, Machine Learning or a related technical degree Job location: India Geography: SAPMENA Required experience: 6-8 Years Preferred profile/skills: 5+ years in developing and deploying enterprise-scale ML solutions [Mandatory] Proven track record in data analysis (EDA, profiling, sampling) and data engineering (wrangling, storage, pipelines, orchestration) [Mandatory] Proficiency in Data Science/ML algorithms such as regression, classification, clustering, decision trees, random forest, gradient boosting, recommendation, dimensionality reduction [Mandatory] Experience in ML algorithms such as ARIMA, Prophet, Random Forests, and Gradient Boosting algorithms (XGBoost, LightGBM, CatBoost) [Mandatory] Prior experience of MLOps with Kubeflow or TFX [Mandatory] Experience in model explainability with Shapley plots and data drift detection metrics. 
[Mandatory] Advanced programming skills with Python and SQL [Mandatory] Prior experience on building scalable ML pipelines & deploying ML models on Google Cloud [Mandatory] Proven expertise in ML pipeline optimization and monitoring the model's performance over time [Mandatory] Proficiency in version control systems such as GitHub Experience with feature engineering optimization and ML model fine tuning is preferred Google Cloud Machine Learning certifications will be a big plus Experience in Beauty or Retail/FMCG industry is preferred Experience in training with large volume of data (>100 GB) Experience in delivering AI-ML projects using Agile methodologies is preferred Proven ability to effectively communicate technical concepts and results to technical & business audiences in a comprehensive manner Proven ability to work proactively and independently to address product requirements and design optimal solutions Fluency in English, strong communication and organizational capabilities; and ability to work in a matrix/ multidisciplinary team Job objectives: Design, develop, deploy, and maintain data science and machine learning solutions to meet enterprise goals. Collaborate with product managers, data scientists & analysts to identify innovative & optimal machine learning solutions that leverage data to meet business goals. Contribute to development, rollout and onboarding of data scientists and ML use-cases to enterprise wide MLOps framework. Scale the proven ML use-cases across the SAPMENA region. Be responsible for optimal ML costs. 
Job description: Deep understanding of business/functional needs, problem statements and objectives/success criteria Collaborate with internal and external stakeholders including business, data scientists, project and partner teams in translating business and functional needs into ML problem statements and specific deliverables Develop best-fit end-to-end ML solutions including but not limited to algorithms, models, pipelines, training, inference, testing, performance tuning, deployments Review MVP implementations, provide recommendations and ensure ML best practices and guidelines are followed Act as 'Owner' of end-to-end machine learning systems and their scaling Translate machine learning algorithms into production-level code with distributed training, custom containers and optimal model serving Industrialize end-to-end MLOps life cycle management activities including model registry, pipelines, experiments, feature store, CI-CD-CT-CE with Kubeflow/TFX Accountable for creating and monitoring drifts, leveraging continuous evaluation tools and optimizing performance and overall costs Evaluate, establish guidelines, and lead transformation with emerging technologies and practices for Data Science, ML, MLOps, DataOps What's on Offer Competitive compensation commensurate with role and skill set Medical Insurance Coverage worth 10 Lacs Social Benefits including PF & Gratuity A fast-paced, growth-oriented environment with the associated (challenges and) rewards Opportunity to grow and develop your own skills and create your future Contact: Anwesha Banerjee Quote job ref: JN-072025-6793565

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 15 Lacs

Nagpur

Remote

Overview: Be part of an innovative and forward-thinking environment! Join our dynamic IT team as a full-time AI/ML Engineer and take advantage of exciting growth opportunities. Requirement: Experience: 3+ Years Mandatory skills: ML & AI: Scikit-Learn, TensorFlow, PyTorch, XGBoost, SHAP, LIME, LSTM, Time-Series Forecasting NLP & LLMs: LangChain, Hugging Face, Transformers, FAISS, RAG, Gemini APIs, BLIP-2 Register for a global opportunity on the world's first & only global technology job portal: www.iitjobs.com Download our app on the Apple App Store and Google Play Store! Refer and earn 50,000!

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bharuch, Gujarat

On-site

Role: Sr Data Scientist – Digital & Analytics Experience: 7+ Years | Industry: Exposure to manufacturing, energy, supply chain or similar Location: On-Site @ Bharuch, Gujarat (6 days/week, Mon-Sat working) Perks: Work with the client directly & monthly remuneration for lodging Mandatory Skills: Experience in full-scale implementation from requirement gathering to project delivery (end to end). EDA, ML techniques (supervised and unsupervised), Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), cloud ML tooling (Azure ML, AWS SageMaker, etc.), plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), time-series data, and optimization models (LP, MILP, MINLP). We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact. Responsibilities: 1. Data Science Solution Development • Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization. • Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised). • Translate physical and chemical process knowledge into mathematical features or constraints in models. • Deploy models into production environments (on-prem or cloud) with high robustness and monitoring. 2. Team Leadership & Management • Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns. 
• Own the entire data science lifecycle: problem framing, model development, validation, deployment, monitoring, and retraining protocols. 3. Stakeholder Engagement & Collaboration • Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use-cases. • Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable. • Act as a translator between manufacturing business units and technical teams to ensure alignment and impact. 4. Solution Ownership & Documentation • Independently manage and maintain use-cases through versioned model management, robust documentation, and logging. • Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts. Required Skills: 1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use-cases in manufacturing, energy, or process industries. 2. Proven track record of end-to-end model delivery (from data prep to business value realization). 3. Master’s or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline. 4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.), and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.). 5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data. 6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus. Job Types: Full-time, Contractual / Temporary Contract length: 6-12 months Pay: Up to ₹200,000.00 per month Work Location: In person
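The LP/MILP optimization requirement in this posting can be illustrated with a toy production-planning linear program; the products, coefficients and constraints below are invented for the sketch, and it uses scipy's `linprog` rather than Pyomo since it needs no external solver:

```python
from scipy.optimize import linprog

# Hypothetical plant: products A and B earn 40 and 30 profit per tonne.
# Constraints: 2a + b <= 100 machine-hours, a + b <= 80 operator-hours.
# linprog minimizes, so the profit coefficients are negated.
c = [-40, -30]
A_ub = [[2, 1],   # machine-hours per tonne of A, B
        [1, 1]]   # operator-hours per tonne of A, B
b_ub = [100, 80]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
plan = res.x            # optimal tonnes of A and B
profit = -res.fun       # undo the sign flip
print(plan, profit)     # optimum: a=20, b=60, profit=2600
```

A Pyomo model of the same problem would declare the variables and constraints explicitly and hand them to a solver such as GLPK; the MILP variants add integrality constraints on some variables.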

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

As part of ACL Digital, an ALTEN Group Company, you will be contributing to digital product innovation and engineering as a key player. Our focus lies in assisting our clients in the design and development of cutting-edge products that are AI, Cloud, and Mobile ready, along with creating content and commerce-driven platforms. Through a design-led Digital Transformation framework, we facilitate the creation of connected, converged digital experiences tailored for the modern world. By leveraging our expertise in strategic design, engineering, and industry knowledge, you will play a crucial role in helping our clients navigate the digital landscape, thereby accelerating their growth trajectory. Headquartered in Silicon Valley, ACL Digital is a frontrunner in design-led digital experiences, innovation, enterprise modernization, and product engineering services, particularly within the Technology, Media & Telecom sectors. We are proud of our diverse and skilled workforce, which is part of the larger ALTEN Group comprising over 50,000 employees spread across 30+ countries, fostering a multicultural workplace and a collaborative knowledge-sharing environment. In India, our operations span Bangalore, Chennai, Pune, Panjim, Hyderabad, Noida, and Ahmedabad, while in the USA we have established offices in California, Atlanta, Philadelphia, and Washington. 
As a suitable candidate for this role, you are expected to possess the following technical skills and competencies: - A minimum of 4-5 years of relevant experience in the field - Preferably trained or certified in Data Science/Machine Learning - Capable of effectively collaborating with technical leads - Strong communication skills coupled with the ability to derive meaningful conclusions - Proficiency in Data Science concepts, Machine Learning algorithms & Libraries like Scikit-learn, Numpy, Pandas, Stattools, Tensorflow, PyTorch, XGBoost - Experience in Machine Learning Training and Deployment pipelines - Familiarity with FastAPI/Flask framework - Proficiency in Docker and Virtual Environment - Proficient in Database Operations - Strong analytical and problem-solving skills - Ability to excel in a dynamic environment with varying degrees of ambiguity Your role would involve the following responsibilities and competencies: - Applying data mining, quantitative analysis, statistical techniques, and conducting experiments to derive reliable insights from data - Understanding business use-cases and utilizing various sources to collect and annotate datasets for business problems - Possessing a strong academic background with excellent analytical skills and exposure to machine learning and information retrieval domain and technologies - Strong programming skills with the ability to work in languages such as Python, C/C++ - Acquiring data from primary or secondary sources and performing data tagging - Filtering and cleaning data based on business requirements and maintaining a well-defined, structured, and clean database - Working on data labeling tools and annotating data for machine learning models - Interpreting data, analyzing results using statistical techniques and models, and conducting exploratory analysis If you are someone who thrives in a challenging and dynamic environment and possesses the required technical skills and competencies, we look forward to having 
you join our team at ACL Digital.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for developing, training, and fine-tuning machine learning models for AI/ML applications. This includes designing and implementing data pipelines for data processing, model training, and inference. Additionally, you will be deploying models using MLOps practices and integrating them with cloud infrastructure. Collaboration with product managers and designers to conceptualize AI-driven features will also be a key part of your role. You will also be expected to research and implement various ML and AI techniques to improve performance. To excel in this role, you should have proficiency in Python and ML frameworks such as Scikit-learn, XGBoost, TensorFlow, and PyTorch. Experience with SQL and ETL data pipelines, including data processing and feature engineering, will be beneficial. Familiarity with Docker and container-based deployments to create cloud-agnostic products is required. A strong understanding of AI and machine learning concepts such as supervised learning, unsupervised learning, deep learning, and reinforcement learning is essential. Knowledge of at least one cloud platform (AWS, Azure, GCP) and its ML deployment strategies, preferably Azure, is expected. Exposure to LLMs (e.g., OpenAI, Hugging Face, Mistral) and foundation models will be an advantage, as is an understanding of various statistical models. If you have 5 to 7 years of experience in the relevant field and possess the mentioned skills and qualifications, we would like to hear from you.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Scientist Location: Gurugram Experience: 5–10 years (flexible based on expertise) Employment Type: Full-Time About the Role: We are looking for a highly skilled and innovative Data Scientist with deep expertise in Machine Learning, AI, and Cloud Technologies to join our dynamic analytics team. The ideal candidate will have hands-on experience in NLP, LLMs, Computer Vision, and advanced statistical techniques, along with the ability to lead cross-functional teams and drive data-driven strategies in a fast-paced environment. Key Responsibilities: Develop and deploy end-to-end machine learning pipelines including data preprocessing, modeling, evaluation, and production deployment. Work on cutting-edge AI/ML applications such as LLM fine-tuning, NLP, Computer Vision, Hybrid Recommendation Systems, and RAG/CAG techniques. Leverage platforms like AWS (SageMaker, EC2) and Databricks for scalable model development and deployment. Handle data at scale using Spark, Python, and SQL, and integrate with NoSQL and vector databases (Neo4j, Cassandra). Design interactive dashboards and visualizations using Tableau for actionable insights. Collaborate with cross-functional stakeholders to translate business problems into analytical solutions. Guide data curation efforts and ensure high-quality training datasets for supervised and unsupervised learning. Lead initiatives around AutoML, XGBoost, Topic Modeling (LDA/LSA), Doc2Vec, and Object Detection & Tracking. Drive agile practices including sprint planning, resource allocation, and change management. Communicate results and recommendations effectively to executive leadership and business teams. Mentor junior team members and foster a culture of continuous learning and innovation. 
Technical Skills Required: Programming: Python, SQL, Spark Machine Learning & AI: NLP, LLMs, Deep Learning, Computer Vision, Hybrid Recommenders Techniques: RAG, CAG, LLM Fine-tuning, Statistical Modeling, AutoML, Doc2Vec Data Platforms: AWS (SageMaker, EC2), Databricks Databases: SQL, NoSQL, Neo4j, Cassandra, Vector DBs Visualization Tools: Tableau Certifications (Preferred): IBM Data Science Specialization Deep Learning Nanodegree (Udacity) SAFe® DevOps Practitioner Certified Agile Scrum Master Professional Competencies: Proven experience in team leadership, stakeholder management, and strategic planning. Strong cross-functional collaboration and ability to drive alignment across product, engineering, and analytics teams. Excellent problem-solving, communication, and decision-making skills. Ability to manage conflict resolution, negotiation, and performance optimization within teams.
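The topic-modeling (LDA) item in this skill list can be sketched with scikit-learn's `LatentDirichletAllocation`; the tiny corpus and the choice of two topics are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical mix of plant-maintenance and marketing snippets
docs = [
    "yield prediction for the reactor improved after tuning",
    "reactor temperature and pressure logs from the historian",
    "quarterly marketing report on customer churn",
    "customer churn model lifted campaign conversion",
]

# LDA works on raw term counts, not TF-IDF weights
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Each row is a document's distribution over the 2 latent topics (sums to 1)
topic_dist = lda.transform(counts)
print(topic_dist.round(2))
```

On a real corpus one would inspect `lda.components_` to read off the top words per topic and tune `n_components` against perplexity or coherence.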

Posted 1 week ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Overview: We are looking for a curious, analytical, and technically skilled Data Science Engineer with 0–3 years of experience to join our growing data team. This role is ideal for recent graduates or junior professionals eager to work on real-world machine learning and data engineering challenges. You will help develop data-driven solutions, design models, and deploy scalable data pipelines that support business decisions and product innovation. Key Responsibilities: Assist in designing and deploying machine learning models and predictive analytics solutions. Build and maintain data pipelines using tools such as Airflow, Spark, or Pandas. Conduct data wrangling, cleansing, and feature engineering on large datasets. Collaborate with data scientists, analysts, and engineers to operationalize models in production. Develop dashboards, reports, or APIs to expose model insights to stakeholders. Continuously monitor model performance and data quality. Stay updated with new tools, technologies, and industry trends in AI and data science. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Engineering, or a related field. 0–3 years of hands-on experience in data science, machine learning, or data engineering (internships and academic projects welcome). Proficiency in Python and data science libraries (e.g., pandas, NumPy, scikit-learn, matplotlib). Familiarity with SQL and working with relational databases. Understanding of fundamental machine learning concepts and algorithms. Knowledge of version control systems (e.g., Git). Strong problem-solving skills and a willingness to learn. Nice-to-Have: Exposure to ML frameworks like TensorFlow, PyTorch, or XGBoost. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with MLOps tools like MLflow, Kubeflow, or SageMaker. Understanding of big data tools (e.g., Spark, Hadoop). Experience working on data science projects or contributions on GitHub/Kaggle. 
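The model-building and evaluation workflow described above can be sketched end to end with scikit-learn; the synthetic dataset, feature recipe, and hyperparameters are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table: 4 numeric features,
# label driven by a linear term plus a nonlinear (squared) term
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=n) > 0.5).astype(int)

# Hold out a test set, fit a gradient-boosted classifier, report accuracy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)
model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, max_depth=3, random_state=42
)
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"test accuracy: {acc:.3f}")
```

The same train/evaluate loop applies whether the model is scikit-learn, XGBoost, or PyTorch; only the estimator and its feature expectations change.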
What We Offer: Real-world experience with data science in production environments Mentorship and professional development support Access to modern tools, technologies, and cloud platforms Competitive salary with performance incentives A collaborative and learning-focused culture Flexible work options (remote/hybrid) How to Apply: Send your updated resume to careers@jasra.in

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: AI/ML Engineer Mandatory Skills: Python, ML libraries (PyTorch, TensorFlow), Gen AI, Kubernetes, NLP Good-to-have skill: MLOps Location: Hyderabad only Work Type: Work from Office (5 days a week) Experience: 4 to 8 Yrs. Skills Required - Strong programming skills in Python, Java, Spring Boot, or Scala. Experience with ML frameworks like TensorFlow, PyTorch, XGBoost, or LightGBM. Familiarity with information retrieval techniques (BM25, vector search, learning-to-rank). Knowledge of embedding models, user/item vectorization, or session-based personalization. Experience with large-scale distributed systems (e.g., Spark, Kafka, Kubernetes). Hands-on experience with real-time ML systems. Background in NLP, graph neural networks, or sequence modeling. Experience with A/B testing frameworks and metrics like NDCG, MAP, or CTR.
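The vector-search and embedding items in this skill list reduce, at their simplest, to a nearest-neighbor lookup over normalized vectors; the random "embeddings" below are stand-ins for what a trained model would produce:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 hypothetical item embeddings, L2-normalized so that a dot
# product equals cosine similarity
item_vecs = rng.normal(size=(1000, 64))
item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)

# Query: a slightly perturbed copy of item 42, renormalized
query = item_vecs[42] + 0.05 * rng.normal(size=64)
query /= np.linalg.norm(query)

# Brute-force cosine search; production systems swap this matmul
# for an ANN index (e.g., FAISS) at scale
scores = item_vecs @ query
top5 = np.argsort(-scores)[:5]
print(top5)
```

Learning-to-rank and BM25 layer on top of such candidate retrieval; the retrieval step itself is just this similarity ranking.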

Posted 1 week ago

Apply

1.0 years

3 - 4 Lacs

India

On-site

About Us: Red & White Education Pvt. Ltd., established in 2008, is Gujarat's top NSDC- & ISO-certified institute focused on skill-based education and global employability. Role Overview: We're hiring a full-time onsite AI, Machine Learning, and Data Science Faculty/Trainer with strong communication skills and a passion for teaching. Key Responsibilities: Deliver high-quality lectures on AI, Machine Learning, and Data Science. Design and update course materials, assignments, and projects. Guide students on hands-on projects, real-world applications, and research work. Provide mentorship and support for student learning and career development. Stay updated with the latest trends and advancements in AI/ML and Data Science. Conduct assessments, evaluate student progress, and provide feedback. Participate in curriculum development and improvements. Skills & Tools: Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis. Programming: Python, SQL (Must), Pandas, NumPy, Excel. ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face. Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly. NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2. Advanced AI: Transfer Learning, Generative AI, Business Case Studies. Education & Experience Requirements: Bachelor's/Master’s/Ph.D. in Computer Science, AI, Data Science, or a related field. Minimum 1+ years of teaching or industry experience in AI/ML and Data Science. Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools. Practical exposure to real-world AI applications, model deployment, and business analytics. 
For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹35,000.00 per month Benefits: Flexible schedule Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Experience: Teaching / Mentoring: 1 year (Required) AI: 1 year (Required) ML: 1 year (Required) Data science: 1 year (Required) Work Location: In person

Posted 1 week ago

Apply

8.0 years

0 Lacs

Delhi, India

Remote

About Us: HighLevel is an AI powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid 2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names. Our People With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home. Our Impact As of mid 2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen. About the Role: As a Senior Data Scientist, you will design and deploy AI-driven systems that support key business functions like Sales, Customer Success, and Product. You’ll own the end-to-end lifecycle from experimentation to production, applying techniques like predictive modeling, real-time scoring, and AI agent orchestration. Working cross-functionally, you’ll translate data into automation and decision-making tools that drive measurable business outcomes. 
Requirements: 8+ years in data science, ML, or applied AI roles, ideally within SaaS (B2B or PLG preferred) Expert in SQL, Python, and modeling frameworks (e.g. scikit-learn, XGBoost, LightGBM) Proven experience building and deploying predictive models in production (churn, conversion, LTV, usage drop-off) Experience fine-tuning models with full fine-tuning (FFT) or LoRA (or variants thereof) Strong hands-on experience with OpenAI models, LangChain, and agent orchestration tools Demonstrated prompt engineering capability: designing and refining system and task-specific prompts Experience implementing retrieval-augmented generation (RAG) using embeddings and vector DBs (Pinecone, FAISS, etc.) Experience testing, training, and deploying models/agents via Cloudflare Workers or equivalent serverless environments Familiarity with streaming usage data pipelines and real-time behavioral scoring Strong storytelling skills: you can articulate technical work to non-technical stakeholders clearly and persuasively Responsibilities: Develop and fine-tune machine learning models using advanced algorithms like gradient boosting (XGBoost, LightGBM) and lightweight neural networks to better grade customer churn, account health decline, upsell opportunities, and trial conversion rates Pull data from feature sets across CRM, product usage, support, and NPS. Cleanse and transform data to form a holistic view of account health.
Build production-grade models to predict churn, account health decline, usage slowness, upsell opportunity, and trial conversion Create real-time scoring mechanisms to alert GTM teams about at-risk customers and under-engaged segments Use OpenAI models, LangChain (or equivalent) or open source models to build intelligent assistants, auto-analysis agents, and retrieval-based matchers Design prompts and agent flows to answer RevOps questions, generate insight summaries, and automate interventions Implement retrieval-augmented generation (RAG) architectures using vector databases (e.g., Pinecone, FAISS) EEO Statement: At HighLevel, we value diversity. In fact, we understand it makes our organisation stronger. We are committed to inclusive hiring/promotion practices that evaluate skill sets, abilities, and qualifications without regard to any characteristic unrelated to performing the job at the highest level. Our objective is to foster an environment where really talented employees from all walks of life can be their true and whole selves, cherished and welcomed for their differences while providing excellent service to our clients and learning from one another along the way! Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.
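The real-time churn scoring this role describes can be sketched in miniature. The feature names, weights, and alert threshold below are invented for illustration; in practice the score would come from a trained gradient-boosting model (XGBoost/LightGBM), not hand-set coefficients:

```python
import math

# Illustrative, hand-set coefficients standing in for a trained model.
WEIGHTS = {"logins_per_week": -0.4, "support_tickets": 0.6, "nps": -0.05}
BIAS = 1.0

def churn_risk(features: dict) -> float:
    """Return a 0-1 churn-risk score via a logistic link over the features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def alert_gtm_team(features: dict, threshold: float = 0.7) -> bool:
    """Flag at-risk accounts for the go-to-market team when risk is high."""
    return churn_risk(features) >= threshold

healthy = {"logins_per_week": 10, "support_tickets": 0, "nps": 60}
at_risk = {"logins_per_week": 0, "support_tickets": 4, "nps": -20}
print(alert_gtm_team(healthy), alert_gtm_team(at_risk))  # → False True
```

In a production scoring service the `churn_risk` call would sit behind a low-latency endpoint fed by the streaming usage pipeline, with alerts routed to GTM tooling.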

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI’s modular, automated, plug-and-play production solution. By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges. We’re a venture-backed biotech startup located in Singapore where some of the world’s smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career. Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another) Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models Build and maintain pipelines for training, testing, and deploying multiple models across different data domains Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules) Qualifications Bachelor’s or Master’s degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain Exposure to hybrid modeling approaches and/or reinforcement learning strategies Experience Experience with multi-model systems Worked with numerical/scientific datasets (multi-modal datasets) Hybrid modelling and/or RL (AI systems) Core Technical Skills Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost Model Orchestration: MLflow, Prefect, Airflow Multi-model Systems: Ensemble learning, model stacking, conditional pipelines 
Reinforcement Learning: RLlib, Stable-Baselines3 Optimization Libraries: Optuna, Hyperopt, GPyOpt Numerical & Scientific Computing: NumPy, SciPy, pandas Containerization & Deployment: Docker, FastAPI Workflow Management: Snakemake, Nextflow ETL & Data Pipelines: pandas pipelines, PySpark Data Versioning: Git API Design for modular ML blocks You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
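The conditional routing and fallback behaviour this role describes can be sketched with plain Python. The sub-models, confidence values, and threshold below are hypothetical stand-ins; real sub-models (media prediction, bioprocess optimization, sensory profile) would sit behind the same callable interface:

```python
from typing import Callable, Dict, List, Tuple

# Each "model" is a stand-in callable returning a (prediction, confidence) pair.
Model = Callable[[Dict[str, float]], Tuple[float, float]]

def primary_model(x: Dict[str, float]) -> Tuple[float, float]:
    # Pretend the primary model is only confident on reasonably rich inputs.
    conf = 0.9 if len(x) >= 3 else 0.2
    return sum(x.values()) / max(len(x), 1), conf

def fallback_model(x: Dict[str, float]) -> Tuple[float, float]:
    # Prior-based fallback that is always moderately confident.
    return 0.5, 0.6

def orchestrate(x: Dict[str, float], chain: List[Model], min_conf: float = 0.5) -> float:
    """Route through the chain, falling back when a model's confidence is low."""
    for model in chain:
        pred, conf = model(x)
        if conf >= min_conf:
            return pred
    raise RuntimeError("no model met the confidence threshold")

rich = {"temp": 0.3, "ph": 0.7, "feed": 0.2}   # served by the primary model
sparse = {"temp": 0.3}                          # routed to the fallback
print(orchestrate(rich, [primary_model, fallback_model]))
print(orchestrate(sparse, [primary_model, fallback_model]))
```

In the real system the same routing logic would be expressed in an orchestration layer (e.g. Prefect or Airflow tasks), with sub-model outputs feeding downstream models in the decision pipeline.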

Posted 1 week ago

Apply

4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role: Data Engineer (2–4 Years Experience) 📍 Location: Jaipur / Pune (Work from Office) 📧 hr@cognitivestars.com | 📞 99291-89819 We're looking for a Data Engineer (2–4 years) who’s excited about building scalable ETL pipelines , working with Azure Data Lake and Databricks , and supporting AI/ML readiness across real-world datasets. What You'll Do: Design robust, reusable Python-based ETL pipelines from systems like SAP & OCPLM Clean & transform large-scale datasets for analytics & ML Work with Azure Data Lake , Databricks , and modern cloud tools Collaborate with analytics teams to support predictive and prescriptive models Drive data automation and ensure data quality & traceability What You’ll Bring: 2–4 years of experience in data engineering or analytics programming Strong skills in Python & SQL Experience with Azure , Databricks , or similar cloud platforms Familiarity with ML concepts (hands-on not mandatory) Ability to understand complex enterprise data even without direct system access Tools You'll Use: Python | Pandas | NumPy | SQL Azure Data Lake | Databricks scikit-learn | XGBoost (as needed)
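The extract-clean-transform work this role describes can be sketched with the standard library alone. The column names, cleaning rules, and sample data are invented for illustration; a real pipeline would read SAP/OCPLM extracts into Azure Data Lake and run the transforms on Databricks:

```python
import csv
import io

# Fabricated raw extract: one row has a missing quantity, another has stray whitespace.
RAW = "material,qty\nM-100, 5\nM-101,\nM-100,3\n"

def extract(text: str) -> list:
    """Parse a CSV extract into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list) -> list:
    """Drop rows with missing qty, strip whitespace, cast types."""
    out = []
    for r in rows:
        qty = (r["qty"] or "").strip()
        if qty:
            out.append({"material": r["material"].strip(), "qty": int(qty)})
    return out

def load(rows: list) -> dict:
    # Stand-in for a real sink (e.g. a Delta table); here we just aggregate.
    totals = {}
    for r in rows:
        totals[r["material"]] = totals.get(r["material"], 0) + r["qty"]
    return totals

print(load(transform(extract(RAW))))  # → {'M-100': 8}
```

Keeping each stage a pure function is what makes the pipeline reusable and testable, which is the "robust, reusable" property the posting asks for.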

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Chennai Area

On-site

Job ID: 39582 Position Summary A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. HID Global is a trusted source for innovative products, solutions and services that help millions of customers around the globe create, manage and use secure identities. Roles & Responsibilities: Design, develop, and deploy robust & scalable AI/ML models in production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures. Optimize models for performance, latency and scalability. Build data pipelines and workflows to support model training and evaluation. Conduct research & experimentation on state-of-the-art techniques (DL, NLP, time series, CV). Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning and re-training. Lead code reviews, architecture discussions and mentor junior & peer engineers.
Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency. Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX. Ensure data integrity, quality, and preprocessing best practices for AI/ML model development. Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Strong analytical and problem-solving mindset. Technical Requirements: Strong expertise in AI/ML engineering and software development. Strong experience with RAG architecture and vector databases. Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration. Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation and deployment. Experience with production-grade ML systems (model serving, APIs, pipelines). Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.). Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection and time series forecasting. Hands-on in software engineering with knowledge of version control, testing & CI/CD. Hands-on experience in deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow. Experience in MLOps & CI/CD for ML pipelines, including monitoring, retraining, and model drift detection. Proficiency in scaling AI solutions in cloud environments (AWS, Azure & GCP).
Experience in data preprocessing, feature engineering, and dimensionality reduction. Exposure to Data privacy, Compliance and Secure ML practices Education and/or Experience: Graduation or master’s in computer science or information technology or AI/ML/Data science 3+ years of hands-on experience in AI/ML development/deployment and optimization Experience in leading AI/ML teams and mentoring junior engineers.
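The retrieval step of the RAG pipelines this role describes can be sketched without a vector database. The document ids and the 3-dimensional "embeddings" below are fabricated for illustration; a real pipeline would use learned embeddings and a vector store such as Pinecone or FAISS:

```python
import math

# Toy document store: id -> fabricated embedding vector.
DOCS = {
    "badge-reset": [0.9, 0.1, 0.0],
    "door-config": [0.1, 0.8, 0.2],
    "api-auth":    [0.0, 0.2, 0.9],
}

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list, k: int = 2) -> list:
    """Return the k most similar document ids for a query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query embedding closest to the "badge-reset" document.
print(retrieve([1.0, 0.0, 0.1]))
```

The retrieved documents would then be parsed and packed into the LLM prompt as grounding context, which is the "generation" half of the RAG pipeline.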

Posted 1 week ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Chandkheda, Ahmedabad, Gujarat

On-site

About Us: Red & White Education Pvt. Ltd., established in 2008, is Gujarat's top NSDC- & ISO-certified institute focused on skill-based education and global employability. Role Overview: We're hiring a full-time onsite AI, Machine Learning, and Data Science Faculty/Trainer with strong communication skills and a passion for teaching. Key Responsibilities: Deliver high-quality lectures on AI, Machine Learning, and Data Science. Design and update course materials, assignments, and projects. Guide students on hands-on projects, real-world applications, and research work. Provide mentorship and support for student learning and career development. Stay updated with the latest trends and advancements in AI/ML and Data Science. Conduct assessments, evaluate student progress, and provide feedback. Participate in curriculum development and improvements. Skills & Tools: Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis. Programming: Python, SQL (Must), Pandas, NumPy, Excel. ML & AI Tools: Scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face. Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly. NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2. Advanced AI: Transfer Learning, Generative AI, Business Case Studies. Education & Experience Requirements: Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field. Minimum 1+ years of teaching or industry experience in AI/ML and Data Science. Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools. Practical exposure to real-world AI applications, model deployment, and business analytics.
For further information, please feel free to contact us at 7862813693 or via email at career@rnwmultimedia.edu.in. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹35,000.00 per month Benefits: Flexible schedule Leave encashment Paid sick time Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Performance bonus Yearly bonus Experience: Teaching / Mentoring: 1 year (Required) AI: 1 year (Required) ML: 1 year (Required) Data science: 1 year (Required) Work Location: In person

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Analyst Location: Bengaluru, Karnataka, India About the Role We're looking for an experienced Data Analyst to join our team, focusing on building and enhancing underwriting products specifically for the Indian market. In this role, you'll be instrumental in developing sophisticated credit assessment frameworks and scoring models by leveraging diverse data sources. If you're passionate about data, have a deep understanding of the Indian financial services landscape, and thrive in a dynamic environment, we encourage you to apply. What You'll Do Be the powerhouse behind scalable and efficient solutions that span a broad spectrum of fintech sectors, be it lending, insurance, or investments. Our work isn't confined to a single domain. We tackle a diverse set of problem statements, from computer vision and tabular data to natural language processing, speech recognition, and even Generative AI. Each day brings a new challenge and a new opportunity for breakthroughs. What You'll Bring Bachelor's or Master's in Engineering or equivalent. 2+ years of Data Science/Machine Learning experience. Strong knowledge of statistics, tree-based techniques (e.g., Random Forests, XGBoost), machine learning (e.g., MLP, SVM), inference, hypothesis testing, simulations, and optimization. Bonus: Experience with deep learning techniques; experience working in the ad domain / reinforcement learning. Strong Python programming skills and experience building data pipelines in PySpark, along with feature engineering. Proficiency in pandas, scikit-learn, Scala, SQL, and familiarity with TensorFlow/PyTorch. Understanding of DevOps/MLOps, including creating Docker containers and deploying to production (using platforms like Databricks or Kubernetes).

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world. The Defender Experts (DEX) Research team is at the forefront of Microsoft’s threat protection strategy, combining world-class hunting expertise with AI-driven analytics to protect customers from advanced cyberattacks. Our mission is to move protection left—disrupting threats early, before damage occurs—by transforming raw signals into intelligence that powers detection, disruption, and customer trust. We’re looking for a passionate and curious Data Scientist to join this high-impact team. In this role, you'll partner with researchers, hunters, and detection engineers to explore attacker behavior, operationalize entity graphs, and develop statistical and ML-driven models that enhance DEX’s detection efficacy. Your work will directly feed into real-time protections used by thousands of enterprises and shape the future of Microsoft Security. This is an opportunity to work on problems that matter—with cutting-edge data, a highly collaborative team, and the scale of Microsoft behind you. 
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. Responsibilities Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions. Design and build robust, large-scale graph structures to model security entities, behaviors, and relationships. Develop and deploy scalable, production-grade AI/ML systems and intelligent agents for real-time threat detection, classification, and response. Collaborate closely with Security Research teams to integrate domain knowledge into data science workflows and enrich model development. Drive end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment. Work with large-scale graph data: create, query, and process it efficiently to extract insights and power models.
Lead initiatives involving Graph ML, Generative AI, and agent-based systems, driving innovation across threat detection, risk propagation, and incident response. Collaborate closely with engineering and product teams to integrate solutions into production platforms. Mentor junior team members and contribute to strategic decisions around model architecture, evaluation, and deployment. Qualifications Bachelor’s or Master’s degree in Computer Science, Statistics, Applied Mathematics, Data Science, or a related quantitative field 5+ years of experience applying data science or machine learning in a real-world setting, preferably in security, fraud, risk, or anomaly detection Proficiency in Python and/or R, with hands-on experience in data manipulation (e.g., Pandas, NumPy), modeling (e.g., scikit-learn, XGBoost), and visualization (e.g., matplotlib, seaborn) Strong foundation in statistics, probability, and applied machine learning techniques Experience working with large-scale datasets, telemetry, or graph-structured data Ability to clearly communicate technical insights and influence cross-disciplinary teams Demonstrated ability to work independently, take ownership of problems, and drive solutions end-to-end Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
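The risk-propagation idea over a security entity graph can be sketched with a toy example. The entities, edges, seed scores, and damping factor below are illustrative assumptions, not Microsoft's actual graph or model:

```python
# Toy entity graph: each node lists its neighbours (hosts, users, ...).
GRAPH = {
    "host-a": ["user-1", "host-b"],
    "host-b": ["host-a"],
    "user-1": ["host-a"],
}
# Seed risk, e.g. from detections: host-a looks compromised.
SEED = {"host-a": 0.9, "host-b": 0.0, "user-1": 0.1}

def propagate(graph: dict, seed: dict, damping: float = 0.5, rounds: int = 2) -> dict:
    """Iteratively blend each node's seed risk with its neighbours' mean risk."""
    scores = dict(seed)
    for _ in range(rounds):
        nxt = {}
        for node, nbrs in graph.items():
            nbr_mean = sum(scores[n] for n in nbrs) / len(nbrs)
            nxt[node] = (1 - damping) * seed[node] + damping * nbr_mean
        scores = nxt
    return scores

scores = propagate(GRAPH, SEED)
# host-b has no seed risk of its own but inherits risk from host-a.
print(scores["host-b"] > SEED["host-b"])  # → True
```

This is the intuition behind graph-based detection: risk flows along edges, so entities adjacent to a compromised node surface for hunting even before they trigger detections themselves.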

Posted 1 week ago

Apply

8.0 years

4 - 9 Lacs

Bengaluru

On-site

Data Science is all about breaking new ground to enable businesses to answer their most urgent questions. Pioneering massively parallel data-intensive analytic processing, our mission is to develop a whole new approach to generating meaning and value from petabyte-scale data sets and shape brand new methodologies, tools, statistical methods, and models. What’s more, we are in collaboration with leading academics, industry experts and highly skilled engineers to equip our customers to generate sophisticated new insights from the biggest of big data. Join us as a Senior Advisor on our Data Science team in Bangalore to do the best work of your career and make a profound social impact. What you’ll achieve You will: Develop Gen AI-based solutions to tackle real-world challenges using extensive datasets of text, images, and more Design and manage experiments; research new algorithms and optimization methods Build and maintain data pipelines and platforms to operationalize Machine Learning models at scale Demonstrate a passion for blending software development with Gen AI and ML Take the first step towards your dream career Every Dell Technologies team member brings something unique to the table.
Here’s what we are looking for with this role: Essential Requirements Design, implement, test and maintain ML solutions within Dell's services organization Engage in design discussions, code reviews, and interact with various stakeholders Collaborate across functions to influence business solutions with technical expertise Thrive in a startup-like environment, focusing on high-priority tasks Desired Requirements Proficiency in Data Science Platforms (Domino Data Lab, Microsoft Azure, AWS, Google Cloud) | Deep knowledge in ML, data mining, statistics, NLP, or related fields | Experience in object-oriented programming (C#, Java) and familiarity with Python, Spark, TensorFlow, XGBoost | Experience in productionizing ML models and scaling them for low-latency environments | Proficient in Data Mining, ETL, SQL OLAP, Teradata, Hadoop 8+ years of related experience with a bachelor’s degree; or 6+ years with a Master’s; or 3+ years with a PhD; or equivalent experience Who we are We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us. Application closing date: 20th Aug 2025 Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Job ID: R274037

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Senior Advisor, Data Science Data Science is all about breaking new ground to enable businesses to answer their most urgent questions. Pioneering massively parallel data-intensive analytic processing, our mission is to develop a whole new approach to generating meaning and value from petabyte-scale data sets and shape brand new methodologies, tools, statistical methods, and models. What’s more, we are in collaboration with leading academics, industry experts and highly skilled engineers to equip our customers to generate sophisticated new insights from the biggest of big data. Join us as a Senior Advisor on our Data Science team in Bangalore to do the best work of your career and make a profound social impact. What You’ll Achieve You will: Develop Gen AI-based solutions to tackle real-world challenges using extensive datasets of text, images, and more Design and manage experiments; research new algorithms and optimization methods Build and maintain data pipelines and platforms to operationalize Machine Learning models at scale Demonstrate a passion for blending software development with Gen AI and ML Take the first step towards your dream career Every Dell Technologies team member brings something unique to the table.
Here’s what we are looking for with this role: Essential Requirements Design, implement, test and maintain ML solutions within Dell's services organization Engage in design discussions, code reviews, and interact with various stakeholders Collaborate across functions to influence business solutions with technical expertise Thrive in a startup-like environment, focusing on high-priority tasks Desired Requirements Proficiency in Data Science Platforms (Domino Data Lab, Microsoft Azure, AWS, Google Cloud) | Deep knowledge in ML, data mining, statistics, NLP, or related fields | Experience in object-oriented programming (C#, Java) and familiarity with Python, Spark, TensorFlow, XGBoost | Experience in productionizing ML models and scaling them for low-latency environments | Proficient in Data Mining, ETL, SQL OLAP, Teradata, Hadoop 8+ years of related experience with a bachelor’s degree; or 6+ years with a Master’s; or 3+ years with a PhD; or equivalent experience Who We Are We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us. Application closing date: 20th Aug 2025 Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Read the full Equal Employment Opportunity Policy here. Job ID: R274037

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

maharashtra

On-site

As a data science expert, you will be responsible for developing strategies and solutions to address a range of problems using cutting-edge machine learning, deep learning, and Gen AI techniques. You will lead a team of data scientists to ensure timely, high-quality delivery of project outcomes, analyze large and complex datasets across different domains, perform exploratory data analysis, and select features to build and optimize classifiers and regressors.

Enhancing data collection procedures, ensuring data quality and accuracy, and presenting analytical results to technical and non-technical stakeholders will be key aspects of the job. You will create custom reports and presentations with strong data visualization skills to communicate analytical conclusions effectively to senior company officials and other stakeholders. Proficiency in data mining, EDA, feature selection, model building, and optimization using machine learning and deep learning techniques is essential.

Your primary skills should include a deep understanding of, and hands-on experience with, data science and machine learning techniques, algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. You should also have expertise in building deep learning models for text and image analytics using architectures such as ANNs, CNNs, LSTMs, transfer learning, and encoder-decoder models. Proficiency in programming languages such as Python and R, common data science tools like NumPy, Pandas, and Matplotlib, and frameworks like TensorFlow, Keras, PyTorch, and XGBoost is required.

Experience with statistical inference, hypothesis testing, and cloud platforms like Azure/AWS, as well as deploying models in production, will be beneficial for this role. Excellent communication and interpersonal skills are necessary to convey complex analytical concepts to diverse stakeholders effectively.
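The supervised-learning workflow this listing describes (EDA, feature selection, model building, evaluation) can be sketched end to end. A toy stdlib-only illustration using a one-feature decision stump in place of the XGBoost/TensorFlow models the role calls for; all data and names below are invented:

```python
# Toy supervised workflow: split data, fit a decision stump, evaluate accuracy.
# Each sample is (feature_value, class_label).
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.1, 0), (0.8, 1)]

train, test = data[:4], data[4:]

# "EDA": inspect class balance in the training split.
positives = sum(label for _, label in train)

# "Model building": place the threshold midway between the class means.
mean0 = sum(x for x, y in train if y == 0) / (len(train) - positives)
mean1 = sum(x for x, y in train if y == 1) / positives
threshold = (mean0 + mean1) / 2

def predict(x):
    return 1 if x >= threshold else 0

# Evaluation on the held-out split.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(accuracy)
```

A production version would swap the stump for `xgboost.XGBClassifier` or a Keras network, use Pandas for the EDA step, and cross-validate instead of a single split, but the fit/predict/evaluate skeleton is the same.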

Posted 1 week ago

Apply