
237 Dask Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Data Science @ Dream Sports: Data Science at Dream Sports comprises seasoned data scientists striving to drive value with data across all our initiatives. The team has developed state-of-the-art solutions for forecasting and optimization, data-driven risk-prevention systems, causal inference, and recommender systems to enhance product and user experience. We are a team of Machine Learning Scientists and Research Scientists with a portfolio that ranges from production ML systems we conceptualize, build, support, and innovate upon, to longer-term research projects with potentially game-changing impact for Dream Sports. This is a unique opportunity for highly motivated candidates to work on real-world applications of machine learning in the sports industry, with access to state-of-the-art resources, infrastructure, and data streaming from 250 million users, and to contribute to our collaboration with the Columbia Dream Sports AI Innovation Center.

Your Role:
- Executing clean experiments rigorously against pertinent performance guardrails and analysing performance metrics to infer actionable findings
- Developing and maintaining services with proactive monitoring, incorporating industry best practices for optimal service quality and risk mitigation
- Breaking down complex projects into actionable tasks that adhere to set management practices and ensure stakeholder visibility
- Managing the end-to-end lifecycle of large-scale ML projects: data preparation, model training, deployment, monitoring, and upgrading of experiments
- Leveraging a strong foundation in ML, statistics, and deep learning to implement research-backed techniques for model development
- Staying abreast of industry best practices and developments in ML to mentor and guide team members

Qualifiers:
- 4-6 years of experience building, deploying, and maintaining ML solutions
- Extensive experience with Python, SQL, TensorFlow/PyTorch, and at least one distributed data framework (Spark/Ray/Dask)
- Working knowledge of machine learning, probability & statistics, and deep learning fundamentals
- Experience designing end-to-end machine learning systems that work at scale

About Dream Sports: Dream Sports is India's leading sports technology company with 250 million users, housing brands such as Dream11, the world's largest fantasy sports platform; FanCode, a premier sports content & commerce platform; and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 'Sportans'. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports' vision is to 'Make Sports Better' for fans through the confluence of sports and technology. For more information: https://dreamsports.group/ Dream11 is the world's largest fantasy sports platform, with 230 million users playing fantasy cricket, football, basketball & hockey. Dream11 is the flagship brand of Dream Sports, India's leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.

Posted 1 day ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

As an expectation, a fitting candidate must have/be:
- Ability to analyze business problems and cut through data challenges
- Ability to churn a raw corpus and develop data/ML models that provide business analytics (not just EDA), machine-learning-based document processing, and information retrieval
- Quick to develop POCs and transform them into high-scale, production-ready code
- Experience in extracting data from complex unstructured documents using NLP-based technologies
Good to have: document analysis using image processing/computer vision and geometric deep learning

Technology Stack:
- Python as the primary programming language
- Conceptual understanding of classic ML/DL algorithms such as regression, support vector machines, decision trees, clustering, random forest, CART, ensembles, neural networks, CNN, RNN, LSTM, etc.
- Programming. Must have: hands-on with data structures (list, tuple, dictionary), collections, iterators, Pandas, NumPy, and object-oriented programming. Good to have: design patterns/system design, Cython
- ML libraries. Must have: scikit-learn, XGBoost, imblearn, SciPy, Gensim. Good to have: Matplotlib/Plotly, LIME/SHAP
- Data extraction and handling. Must have: Dask/Modin, BeautifulSoup/Scrapy, multiprocessing. Good to have: data augmentation, PySpark, Accelerate
- NLP/text analytics. Must have: bag of words, text-ranking algorithms, Word2vec, language models, entity recognition, CRF/HMM, topic modelling, sequence-to-sequence. Good to have: machine comprehension, translation, Elasticsearch
- Deep learning. Must have: TensorFlow/PyTorch, neural nets, sequential models, CNN, LSTM/GRU/RNN, attention, Transformers, residual networks. Good to have: knowledge of optimization, distributed training/computing, language models
- Software peripherals. Must have: REST services, SQL/NoSQL, UNIX, code versioning. Good to have: Docker containers, data versioning
- Research. Must have: well versed in the latest trends in ML and DL; zeal to research and implement cutting-edge areas of AI to solve complex problems. Good to have: contributions to published research papers/patents in ML and DL

Morningstar is an equal opportunity employer. Morningstar's hybrid work environment gives you the opportunity to work remotely and collaborate in person each week. We've found that we're at our best when we're purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you'll have tools and resources to engage meaningfully with your global colleagues. Morningstar India Private Ltd. (Delhi) Legal Entity
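The first must-have NLP item in the stack above, bag of words, can be sketched in a few lines of plain Python. This toy vectorizer (the example documents are invented) shows the idea with no library dependencies:

```python
from collections import Counter

def bag_of_words(docs):
    """Toy bag of words: map each document to term counts over a shared vocabulary."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted(set().union(*counts))            # union of all terms, sorted
    return vocab, [[c[t] for t in vocab] for c in counts]

docs = ["the cat sat", "the cat and the dog"]
vocab, vectors = bag_of_words(docs)
print(vocab)    # ['and', 'cat', 'dog', 'sat', 'the']
print(vectors)  # [[0, 1, 0, 1, 1], [1, 1, 1, 0, 2]]
```

In practice a library implementation such as scikit-learn's CountVectorizer adds tokenization, n-grams, and sparse output, but the underlying representation is the same term-count matrix.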

Posted 2 days ago

Apply

4.0 - 9.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Job Posting Title: SR. DATA SCIENTIST
Band/Level: 5-2-C
Education Experience: Bachelor's Degree (High School + 4 years)
Employment Experience: 5-7 years

At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.

Job Overview
Solves complex problems and helps stakeholders make data-driven decisions by leveraging quantitative methods such as machine learning. The work often involves synthesizing large volumes of information and extracting signals from data in a programmatic way.

Roles & Responsibilities
- Design, train, and evaluate supervised & unsupervised models (regression, classification, clustering, uplift).
- Apply automated hyperparameter optimization (Optuna, Hyperopt) and interpretability techniques (SHAP, LIME).
- Perform deep exploratory data analysis (EDA) to uncover patterns & anomalies.
- Engineer predictive features from structured, semi-structured, and unstructured data; manage feature stores (Feast).
- Ensure data quality through rigorous validation and automated checks.
- Build hierarchical, intermittent, and multi-seasonal forecasts for thousands of SKUs.
- Implement traditional (ARIMA, ETS, Prophet) and deep-learning (RNN/LSTM, Temporal Fusion Transformer) approaches.
- Reconcile forecasts across product/category hierarchies; quantify accuracy (MAPE, WAPE) and bias.
- Establish model tracking & registry (MLflow, SageMaker Model Registry).
- Develop CI/CD pipelines for automated retraining, validation, and deployment (Airflow, Kubeflow, GitHub Actions).
- Monitor data & concept drift; trigger retuning or rollback as needed.
- Design and analyze A/B tests, causal inference studies, and Bayesian experiments.
- Provide statistically grounded insights and recommendations to stakeholders.
- Translate business objectives into data-driven solutions; present findings to executive & non-technical audiences.
- Mentor junior data scientists, review code/notebooks, and champion best practices.

Desired Candidate

Minimum Qualifications
- M.S. in Statistics (preferred) or a related field such as Applied Mathematics, Computer Science, or Data Science.
- 5+ years building and deploying ML models in production.
- Expert-level proficiency in Python (Pandas, NumPy, SciPy, scikit-learn), SQL, and Git.
- Demonstrated success delivering large-scale demand-forecasting or time-series solutions.
- Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Airflow) for model tracking and automated retraining.
- Solid grounding in statistical inference, hypothesis testing, and experimental design.

Preferred / Nice-to-Have
- Experience in supply-chain, retail, or manufacturing domains with high-granularity SKU data.
- Familiarity with distributed data frameworks (Spark, Dask) and cloud data warehouses (BigQuery, Snowflake).
- Knowledge of deep-learning libraries (PyTorch, TensorFlow) and probabilistic programming (PyMC, Stan).
- Strong data-visualization skills (Plotly, Dash, Tableau) for storytelling and insight communication.

ABOUT TE CONNECTIVITY
TE Connectivity plc (NYSE: TEL) is a global industrial technology leader creating a safer, sustainable, productive, and connected future. Our broad range of connectivity and sensor solutions enable the distribution of power, signal and data to advance next-generation transportation, energy networks, automated factories, data centers, medical technology and more. With more than 85,000 employees, including 9,000 engineers, working alongside customers in approximately 130 countries, TE ensures that EVERY CONNECTION COUNTS. Learn more at www.te.com and on LinkedIn, Facebook, WeChat, Instagram and X (formerly Twitter).

WHAT TE CONNECTIVITY OFFERS:
We are pleased to offer you an exciting total package that can also be flexibly adapted to changing life situations - the well-being of our employees is our top priority!
- Competitive Salary Package
- Performance-Based Bonus Plans
- Health and Wellness Incentives
- Employee Stock Purchase Program
- Community Outreach Programs / Charity Events

IMPORTANT NOTICE REGARDING RECRUITMENT FRAUD
TE Connectivity has become aware of fraudulent recruitment activities conducted by individuals or organizations falsely claiming to represent TE Connectivity. Please be advised that TE Connectivity never requests payment or fees from job applicants at any stage of the recruitment process. All legitimate job openings are posted exclusively on our official careers website at te.com/careers, and all email communications from our recruitment team will come only from email addresses ending in @te.com. If you receive any suspicious communications, we strongly advise you not to engage or provide any personal information, and to report the incident to your local authorities.

Across our global sites and business units, we put together packages of benefits that are either supported by TE itself or provided by external service providers. In principle, the benefits offered can vary from site to site.
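The forecasting responsibilities in this posting mention quantifying accuracy with MAPE and WAPE. A minimal sketch of both metrics (the sample series are invented; production code would typically use a vectorized library implementation):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error: average of |error| / |actual| per point."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def wape(actual, forecast):
    """Weighted APE: total |error| over total actual volume; robust to small actuals."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(abs(a) for a in actual)

actual   = [100, 50, 10]
forecast = [110, 45, 13]
print(round(mape(actual, forecast), 4))  # 0.1667
print(round(wape(actual, forecast), 4))  # 0.1125
```

Note how the low-volume SKU (actual = 10) inflates MAPE but barely moves WAPE, which is why WAPE is often preferred for intermittent-demand portfolios like the one this role describes.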

Posted 2 days ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth
The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center's work is at the heart of Mastercard's objective to be a force for good in the world.
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will: 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science; 2) work on the execution and implementation of key priorities to advance external and internal data-for-social strategies; and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team.

Key Responsibilities

Data Analysis & Insight Generation
- Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science.
- Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity.
- Translate analytical findings into insights through compelling visualizations and dashboards that inform policy, program design, and strategic decision-making.
- Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences.
- Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners.

Data Integration & Operationalization
- Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data).
- Ensure data quality, consistency, and compliance with privacy and ethical standards.
- Collaborate with data engineers and AI developers to support backend infrastructure and model deployment.

Team Operations
- Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals.
- Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports.
- Support team budgeting, financial tracking, and process optimization.
- Support grantees and grants management as needed.
- Develop briefs, talking points, and presentation materials for leadership and external engagements.
- Translate strategic objectives into actionable data initiatives and track progress against milestones.
- Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing.
- Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center.
- Provide administrative support as needed; manage ad hoc projects and event organization.

Qualifications
- Bachelor's degree in Data Science, Statistics, Computer Science, Public Policy, or a related field.
- 2-4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting.
- Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js).
- Familiarity with unstructured data processing and robust machine learning concepts.
- Excellent communication skills and the ability to work across technical and non-technical teams.
Technical Skills & Tools

Data Wrangling & Processing
- Data cleaning, transformation, and normalization techniques
- Pandas, NumPy, Dask, Polars
- Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy)

Machine Learning & Modeling
- Scikit-learn, XGBoost, LightGBM
- Proficiency in supervised/unsupervised learning, clustering, classification, regression
- Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus)

Visualization & Reporting
- Power BI, Tableau, Looker
- Python libraries: Matplotlib, Seaborn, Plotly, Altair
- Dashboarding tools: Streamlit, Dash
- Storytelling with data and stakeholder-ready reporting

Cloud & Collaboration Tools
- Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure
- Git/GitHub, Jupyter Notebooks, VS Code
- Experience with APIs and data integration tools (e.g., Airflow, dbt)

Ideal Candidate
You are a curious and collaborative analyst who believes in the power of data to drive social change. You're excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
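The data-wrangling toolset in this posting pairs regular expressions with JSON parsing. A tiny stdlib-only sketch of that kind of cleanup, on an invented survey record (the field names and values are hypothetical):

```python
import json
import re

# Raw survey export with inconsistent phone and income formats.
raw = '{"respondent": "R-104", "phone": "(212) 555-0147", "income": " 42,500 "}'

record = json.loads(raw)
# Normalize the phone number to digits only and the income to an int.
record["phone"] = re.sub(r"\D", "", record["phone"])
record["income"] = int(record["income"].strip().replace(",", ""))
print(record)  # {'respondent': 'R-104', 'phone': '2125550147', 'income': 42500}
```

Real pipelines would wrap this in schema validation and run it over a DataFrame column, but the regex-plus-parse pattern is the same.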

Posted 3 days ago

Apply

15.0 - 19.0 years

0 Lacs

Thane, Maharashtra

On-site

The role of VP Data Scientist in Thane involves driving analytics opportunities in the BFSI sector, overseeing project delivery, and managing teams and stakeholders. The ideal candidate should possess a robust background in advanced analytics, with a history of leading impactful analytics solutions and programs for Indian clients. Additionally, the candidate must have experience in establishing and supervising large-scale Business Intelligence units. With over 15 years of relevant experience, the right candidate should showcase expertise in areas including data processing and the data science libraries of Python, such as NumPy, Pandas, and scikit-learn. They should also demonstrate proficiency in handling massive datasets with tools like Apache Spark, Vaex, and Dask, and in leveraging them for machine learning algorithm development. Moreover, the candidate is expected to be well versed in analytical models such as promotion optimization, NLP, cluster analysis, segmentation, neural network models, logistic regression, ANN-based models, LSTM, Transformers, and more. Familiarity with cloud-based analytics platforms like Azure, GCP, or AWS is crucial. Experience in automating model training, deployment, and monitoring using MLOps pipelines is preferred. Key responsibilities for this role include stakeholder management, project planning, scope definition, and explaining project methodology to stakeholders. The VP Data Scientist will lead end-to-end project deliverables, ensure adherence to SOPs, and provide training to junior team members. Managing a team of 20+ data scientists, overseeing project delivery, and addressing business challenges are essential aspects of this role. The ideal candidate should hold a BE/B.Tech./MCA/M.Tech. degree and demonstrate deep expertise in Python and SAS. They must possess a strong track record of successful project management, stakeholder engagement, and team leadership within the BFSI sector.

Posted 3 days ago

Apply

1.0 - 3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

TransUnion's Job Applicant Privacy Notice

What We'll Bring
This role is for an Assoc Developer working on global platforms hosted across multiple countries in both public and private cloud environments. We value people who are passionate about solving business problems through innovation and engineering practices. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative environment that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

What You'll Bring

Objectives of this role
- Develop, test and maintain high-quality software using the Python programming language.
- Participate in the entire software development lifecycle: building, testing and delivering high-quality solutions.
- Collaborate with cross-functional teams to identify and solve complex problems.
- Write clean and reusable code that can be easily maintained and scaled.

Your tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Required Skills and Qualifications
- 1-3 years of experience as a Python Developer with a strong portfolio of projects.
- Experience working with Airflow.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred Skills and Qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.

Impact You'll Make
At TransUnion, we are dedicated to finding ways information can be used to help people make better and smarter decisions. As a trusted provider of global information solutions, our mission is to help people around the world access the opportunities that lead to a higher quality of life, by helping organizations optimize their risk-based decisions and enabling consumers to understand and manage their personal information. Because when people have access to more complete and multidimensional information, they can make more informed decisions and achieve great things. Every day TransUnion offers our employees the tools and resources they need to find ways information can be used in diverse ways. Whether it is helping businesses better manage risk, providing better insights so a consumer can qualify for his first mortgage or working with law enforcement to make neighborhoods safer, we are improving the quality of life for individuals, families, communities and local economies around the world. We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability status, veteran status, marital status, citizenship status, sexual orientation, gender identity or any other characteristic protected by law. This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.
TransUnion Job Title: Assoc Developer, Applications Development

Posted 3 days ago

Apply

7.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Chennai

Work from Office

Required Skills and Qualifications
- 7+ years of experience as a Python Developer with a strong portfolio of projects.
- In-depth understanding of the Python software development stack, ecosystem, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred Skills and Qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.

Roles and Responsibilities

Objectives of the Role
- Develop, test and maintain high-quality software using the Python programming language.
- Participate in the entire software development lifecycle: building, testing and delivering high-quality solutions.
- Collaborate with cross-functional teams to identify and solve complex problems.
- Write clean and reusable code that can be easily maintained and scaled.
- Able to articulate and write unit test cases for execution.

Your Tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning/AI algorithms and support data engineering development.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.
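The Dask requirement and the pipeline-building tasks in this posting can be illustrated together: `dask.delayed` turns ordinary functions into a lazy task graph, a common way to prototype the kind of large-scale processing pipeline described here (the stage functions and data are invented for the sketch):

```python
import dask

@dask.delayed
def load(part):
    # Stand-in for reading one partition of raw data.
    return list(range(part * 3, part * 3 + 3))

@dask.delayed
def clean(rows):
    # Stand-in for a filtering/cleaning stage.
    return [r for r in rows if r % 2 == 0]

@dask.delayed
def merge(parts):
    # Stand-in for combining cleaned partitions.
    return sorted(x for p in parts for x in p)

# Build a lazy graph over three partitions; .compute() runs the stages,
# scheduling independent partitions in parallel.
result = merge([clean(load(p)) for p in range(3)]).compute()
print(result)  # [0, 2, 4, 6, 8]
```

The same graph runs unchanged on a local thread pool or a distributed cluster, which is what makes this pattern useful for scaling a prototype.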

Posted 3 days ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TransUnion's Job Applicant Privacy Notice

What We'll Bring
This role is for an Assoc Developer working on global platforms hosted across multiple countries in both public and private cloud environments. We value people who are passionate about solving business problems through innovation and engineering practices. We embrace a culture of experimentation and constantly strive for improvement and learning. You'll work in a collaborative environment that encourages diversity of thought and creative solutions that are in the best interests of our customers globally.

What You'll Bring

Objectives of this role
- Develop, test and maintain high-quality software using the Python programming language.
- Participate in the entire software development lifecycle: building, testing and delivering high-quality solutions.
- Collaborate with cross-functional teams to identify and solve complex problems.
- Write clean and reusable code that can be easily maintained and scaled.

Your tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.

Required Skills and Qualifications
- 1-3 years of experience as a Python Developer with a strong portfolio of projects.
- Experience working with Airflow.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stack, ecosystem, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.

Preferred Skills and Qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.

Impact You'll Make
At TransUnion, we are dedicated to finding ways information can be used to help people make better and smarter decisions. As a trusted provider of global information solutions, our mission is to help people around the world access the opportunities that lead to a higher quality of life, by helping organizations optimize their risk-based decisions and enabling consumers to understand and manage their personal information. Because when people have access to more complete and multidimensional information, they can make more informed decisions and achieve great things. Every day TransUnion offers our employees the tools and resources they need to find ways information can be used in diverse ways. Whether it is helping businesses better manage risk, providing better insights so a consumer can qualify for his first mortgage or working with law enforcement to make neighborhoods safer, we are improving the quality of life for individuals, families, communities and local economies around the world. We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability status, veteran status, marital status, citizenship status, sexual orientation, gender identity or any other characteristic protected by law. This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week.
TransUnion Job Title: Assoc Developer, Applications Development

Posted 3 days ago

Apply

0 years

0 Lacs

Delhi

On-site

Job requisition ID: 83444
Date: Jul 27, 2025
Location: Delhi
Designation: Deputy Manager
Entity: Deloitte Touche Tohmatsu India LLP

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, bring your whole self to work, every day. Combine that with our drive to propel with purpose, and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The team
As a member of the Operations, Industry and Domain Solutions team, you will embark on an exciting and fulfilling journey with a group of intelligent, innovative, globally aware individuals. We work in conjunction with various institutions, solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.

Your work profile
- Bachelor's or advanced degree in computer science, an engineering/technology-related degree, or a data-intensive program such as applied mathematics, economics, physics or statistics
- Data Scientist strong in computer vision as well as regular AI/ML on tabular data
- Professional experience as an ML Engineer/Data Scientist: experience can be in machine learning, deep learning, data modelling, optimization algorithms, or network analysis, in professional and production settings across multiple projects
- Professional experience using PyTorch, TensorFlow, or Keras
- Strong proficiency in wrangling datasets with Pandas, Dask or other Python-based data science libraries
- Strong fundamentals in solving analytical problems with various algorithmic solutions
- Demonstrable experience in end-to-end system design: data analysis, feature engineering, technique selection & implementation, debugging, and maintenance in production

Good to have:
- Relevant field experience in the development sector via non-profits or at various levels of Indian governance
- Knowledge of GIS (GeoPandas, Shapely, PyQGIS, ArcPy)
- Working knowledge of SQL (a strong plus)

How you'll grow
Connect for impact: Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead: You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all: At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career: At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone's welcome… entrust your happiness to us: Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.

Posted 4 days ago

Apply

0 years

0 Lacs

Delhi

On-site

Job requisition ID :: 83445 Date: Jul 27, 2025 Location: Delhi Designation: Consultant Entity: Deloitte Touche Tohmatsu India LLP Your potential, unleashed. India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters. The team As a member of the Operation, Industry and Domain Solutions team you will embark on an exciting and fulfilling journey with a group of intelligent, innovative, and globally aware individuals. We work in conjunction with various institutions solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change. Your work profile Bachelor’s or advanced degree in computer science, an engineering/technology-related field, or a data-intensive program such as applied mathematics, economics, physics or statistics. Data Scientist (strong in Computer Vision as well as conventional AI/ML on tabular data). Professional experience as an ML Engineer/Data Scientist: experience can be in machine learning, deep learning, data modelling, optimization algorithms, or network analysis, in professional and production settings across multiple projects. Professional experience using PyTorch, TensorFlow, or Keras. Strong proficiency in wrangling datasets with Pandas, Dask or other Python-based data science libraries. Strong fundamentals in solving analytical problems with various algorithmic solutions. Demonstrable experience in end-to-end system design: data analysis, feature engineering, technique selection & implementation, debugging, and maintenance in production. 

Posted 4 days ago

Apply

6.0 years

0 Lacs

India

On-site

Building and Deploying ML Models Design, build, optimize, deploy and monitor machine learning models for production use cases. Ensure scalability, reliability, and efficiency of ML pipelines across cloud and on-prem environments. Work with data engineers to design data pipelines that feed into ML models. Optimize model performance, ensuring low latency and high accuracy. Leading and Architecting ML Solutions Lead a team of ML Engineers, providing technical mentorship and guidance. Architect ML solutions that integrate seamlessly with business applications. Ensure models are explainable, auditable, and aligned with business goals. Drive best practices in MLOps, CI/CD, and model monitoring. Collaborating and Communicating Work closely with business stakeholders to understand problem statements and define ML-driven solutions. Collaborate with software engineers, data engineers, platform engineers and product managers to integrate ML models into production systems. Present technical concepts to non-technical stakeholders in an easy-to-understand manner. What We’re Looking For: Machine Learning Expertise Deep understanding of supervised and unsupervised learning, deep learning, NLP techniques, and large language models (LLMs). Experience in training, fine-tuning, and deploying ML and LLM models at scale. Proficiency in ML frameworks such as TensorFlow, PyTorch, Scikit-learn, etc. Production and Cloud Deployment Hands-on experience deploying models to AWS, GCP, or Azure. Understanding of MLOps, including CI/CD for ML models, model monitoring, and retraining pipelines. Experience working with Docker, Kubernetes, or serverless architectures is a plus. Data Handling Strong programming skills in Python. Proficiency in SQL and working with large-scale datasets. Familiarity with distributed computing frameworks like Spark or Dask is a plus. Leadership and Communication Ability to lead and mentor a team of ML Engineers and collaborate effectively across functions. 
Strong communication skills to explain technical concepts to business teams. Passion for staying updated with the latest advancements in ML and AI. Experience Needed: 6+ years of experience in machine learning engineering or related roles. Experience in deploying and managing ML and LLM models in production. Proven track record of working in cross-functional teams and leading ML projects.
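The "distributed computing frameworks like Spark or Dask" requirement boils down to partitioned, split-apply-combine execution. A minimal standard-library sketch of that pattern (function names are illustrative, and a thread pool stands in for a real cluster scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

def partition(seq, n):
    # Split into n roughly equal chunks, the way Dask partitions a dataframe.
    k, m = divmod(len(seq), n)
    return [seq[i * k + min(i, m):(i + 1) * k + min(i + 1, m)] for i in range(n)]

def partial_sum_count(chunk):
    # Per-partition "map" step: each worker returns only small local aggregates.
    return sum(chunk), len(chunk)

def distributed_mean(values, n_workers=4):
    chunks = partition(values, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_sum_count, chunks))
    # "Combine" step: merge the per-partition results into the final answer.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

print(distributed_mean(list(range(1, 101))))  # 50.5
```

Dask's dataframe API applies the same idea per partition, which is why per-partition aggregates plus a cheap combine step scale to datasets that would not fit in one worker's memory.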

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Software Engineer specializing in AI/ML/LLM/Data Science at Entra Solutions, a FinTech company within the mortgage industry, you will play a crucial role in designing, developing, and deploying AI-driven solutions using cutting-edge technologies such as Machine Learning, NLP, and Large Language Models (LLMs). Your primary focus will be on building and optimizing retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies using Python. You will be responsible for developing scalable AI pipelines that ensure high performance and seamless integration with both cloud and on-premises environments. Additionally, this role will involve implementing MLOps best practices, optimizing AI model performance, and deploying intelligent applications. In this role, you will: - Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases. - Build and optimize retrieval-augmented generation (RAG) systems using Vector Databases such as ChromaDB, Pinecone, and FAISS. - Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance. - Create end-to-end AI solutions with APIs using frameworks like FastAPI, Flask, or similar technologies. - Establish and maintain scalable data pipelines for training and inferencing AI models. - Deploy and manage models using MLOps best practices on cloud platforms like AWS or Azure. - Optimize AI model performance for low-latency inference and scalability. - Collaborate with cross-functional teams including Product, Engineering, and Data Science to integrate AI capabilities into applications. Qualifications: Must Have: - Proficiency in Python - Strong hands-on experience in AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs. Good to Have: - Experience with LLM fine-tuning, embeddings, and transformers. - Knowledge of NLP, vector search technologies (ChromaDB, Pinecone, FAISS, Milvus). 
- Experience in building scalable AI models and data pipelines with Spark, Kafka, or Dask. - Familiarity with MLOps tools like Docker, Kubernetes, and CI/CD for AI models. - Hands-on experience in cloud-based AI deployment using platforms like AWS Lambda, SageMaker, GCP Vertex AI, or Azure ML. - Knowledge of prompt engineering, GPT models, or knowledge graphs. What's in it for you - Competitive Salary & Full Benefits Package - PTOs / Medical Insurance - Exposure to cutting-edge AI/LLM projects in an innovative environment - Career Growth Opportunities in AI/ML leadership - Collaborative & AI-driven work culture Entra Solutions is an equal employment opportunity employer, and we welcome applicants from diverse backgrounds. Join us and be a part of our dynamic team driving innovation in the FinTech industry.
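The retrieval half of the RAG systems this role centers on can be sketched without any vector database: rank documents by cosine similarity between embeddings. The index contents and document names below are invented for illustration; a production system would delegate storage and search to FAISS, Pinecone, or ChromaDB over embeddings from a real model:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy "index": document id -> embedding vector.
index = {
    "rate_sheet": [0.9, 0.1, 0.0],
    "escrow_faq": [0.1, 0.8, 0.2],
    "servicing_guide": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    # The retrieved documents' text is what gets injected into the LLM
    # prompt: the "retrieval-augmented" part of RAG.
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # ['rate_sheet', 'escrow_faq']
```

Dedicated vector databases implement the same ranking with approximate nearest-neighbor indexes so it stays fast at millions of documents.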

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a skilled professional, your primary responsibility will involve designing and implementing cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to tackle specific business challenges. You will be tasked with creating conversational AI agents and chatbots that provide seamless, human-like interactions, tailored to meet client needs. Additionally, you will develop and optimize Retrieval-Augmented Generation (RAG) models to enhance AI's ability to retrieve and synthesize pertinent information for accurate responses. Your expertise will be leveraged in managing data lakes, data warehouses (including Snowflake), and utilizing Databricks for large-scale data storage and processing. You are expected to have a thorough understanding of Machine Learning Operations (MLOps) practices and manage the complete lifecycle of machine learning projects, from data preprocessing to model deployment. You will play a crucial role in conducting advanced data analysis to extract actionable insights and support data-driven strategies across the organization. Collaborating with stakeholders from various departments, you will align AI initiatives with business requirements to develop scalable solutions. Additionally, you will mentor junior data scientists and engineers, encouraging innovation, skill enhancement, and continuous learning within the team. Staying updated on the latest advancements in AI and deep learning, you will experiment with new techniques to enhance model performance and drive business value. Effectively communicating findings to both technical and non-technical audiences through reports, dashboards, and visualizations will be part of your responsibilities. Furthermore, you will utilize cloud platforms like AWS Bedrock to deploy and manage AI models at scale, ensuring optimal performance and reliability. 
Your technical skills should include hands-on experience with PyTorch, TensorFlow, and scikit-learn for deep learning and machine learning tasks. Proficiency in Python or R programming, along with knowledge of big data technologies like Hadoop and Spark, is essential. Familiarity with MLOps, data handling tools such as pandas and dask, and cloud computing platforms like AWS is required. Skills in LLAMAIndex and LangChain frameworks, as well as data visualization tools like Tableau and Power BI, are desirable. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field. Specialization in deep learning, significant experience with PyTorch and TensorFlow, and familiarity with reinforcement learning, NLP, and generative models are expected. In addition to challenging work, you will enjoy a friendly work environment, work-life balance, company-sponsored medical insurance, a 5-day work week with flexible timings, frequent team outings, and yearly leave encashment. This exciting opportunity is based in Ahmedabad.
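The pairing of pandas with dask in the data-handling requirement is mostly about lazy, task-graph execution: dask records what to compute and runs it only when asked. A toy standard-library sketch of the idea behind `dask.delayed` (the `Delayed` class here is an illustrative stand-in, not dask's actual API):

```python
class Delayed:
    # Toy stand-in for dask.delayed: record the computation, run it only on .compute().
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

    def compute(self):
        # Recursively resolve upstream tasks, then apply this node's function.
        resolved = [a.compute() if isinstance(a, Delayed) else a for a in self.args]
        return self.fn(*resolved)

def delayed(fn):
    return lambda *args: Delayed(fn, *args)

inc = delayed(lambda x: x + 1)
add = delayed(lambda x, y: x + y)

graph = add(inc(10), inc(20))  # builds a task graph; nothing has executed yet
print(graph.compute())  # 32
```

Real dask adds a scheduler that walks such graphs in parallel and spills partitions to disk, which is what lets pandas-style code run on larger-than-memory data.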

Posted 6 days ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Vadodara

Work from Office

We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions. Key Responsibilities: Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications. System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments. Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers. Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation. Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs. 
Qualifications and Skills: Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field; PhD preferred. Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact. Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes). Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus. Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.

Posted 6 days ago

Apply

0.0 - 2.0 years

8 - 12 Lacs

Bengaluru

Work from Office

We are looking for a passionate Python developer to join our team at Billions United. You will be responsible for developing and implementing high-quality software solutions, creating complex applications using cutting-edge programming features and frameworks, and collaborating with other teams to define, design, and ship new features. You will also be working on data engineering problems and building data pipelines. Objectives of this Role Develop, test, and maintain high-quality software using the Python programming language. Participate in the entire software development lifecycle: building, testing, and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems. Write clean and reusable code that can be easily maintained and scaled. Your Tasks Create large-scale data processing pipelines to support machine learning development. Participate in code reviews to ensure code quality and identify areas for improvement. Debug code and troubleshoot Python-related issues. Stay updated with emerging trends and technologies in Python development. Required Skills and Qualifications Freshers or 1+ years of experience as a Python Developer with a strong portfolio of projects. Bachelor's degree in Computer Science, Software Engineering, or a related field. In-depth knowledge of Python stacks, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, Scikit-learn, and PyTorch. Experience with front-end technologies like HTML, CSS, and JavaScript. Familiarity with SQL and NoSQL databases. Excellent problem-solving, communication, and collaboration skills. Preferred Skills and Qualifications Experience with frameworks such as Django, Flask, or Pyramid. Knowledge of data science and machine learning concepts and tools. Experience with cloud platforms such as AWS, Google Cloud, or Azure. Contributions to open-source Python projects or active involvement in the Python community.
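The "large-scale data processing pipelines" task above maps naturally onto Python generators, which stream records one at a time instead of materializing the whole dataset between stages. A small illustrative sketch (stage names and sample data are hypothetical):

```python
def read_records(rows):
    # Source stage: in production this would stream from files, S3, or a queue.
    yield from rows

def clean(records):
    for r in records:
        text = r.strip().lower()
        if text:  # drop empty records as they flow past, no intermediate list
            yield text

def tokenize(records):
    for r in records:
        yield r.split()

raw = ["  Dask scales Pandas  ", "", "Python Powers ML"]
pipeline = tokenize(clean(read_records(raw)))  # lazy: nothing runs until consumed
print(list(pipeline))  # [['dask', 'scales', 'pandas'], ['python', 'powers', 'ml']]
```

Because each stage holds only one record at a time, the same three-stage shape scales from a toy list to millions of rows; frameworks like Dask parallelize exactly this kind of staged transformation.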

Posted 6 days ago

Apply

0 years

0 Lacs

Haryana

On-site

Join us at Provectus to be a part of a team that is dedicated to building cutting-edge technology solutions that have a positive impact on society. Our company specializes in AI and ML technologies, cloud services, and data engineering, and we take pride in our ability to innovate and push the boundaries of what's possible. As an ML Engineer, you’ll be provided with all opportunities for development and growth. Let's work together to build a better future for everyone! Requirements: Comfortable with standard ML algorithms and the underlying math. Strong hands-on experience with LLMs in production, RAG architecture, and agentic systems. AWS Bedrock experience strongly preferred. Practical experience solving classification and regression tasks, including feature engineering. Practical experience with ML models in production. Practical experience with one or more use cases from the following: NLP, LLMs, and recommendation engines. Solid software engineering skills (i.e., the ability to produce well-structured modules, not only notebook scripts). Python expertise, Docker. English level: strong Intermediate. Excellent communication and problem-solving skills. Will be a plus: Practical experience with cloud platforms (AWS stack is preferred, e.g. Amazon SageMaker, ECR, EMR, S3, AWS Lambda). Practical experience with deep learning models. Experience with taxonomies or ontologies. Practical experience with machine learning pipelines to orchestrate complicated workflows. Practical experience with Spark/Dask, Great Expectations. Responsibilities: Create ML models from scratch or improve existing models. Collaborate with the engineering team, data scientists, and product managers on production models. Develop the experimentation roadmap. Set up a reproducible experimentation environment and maintain experimentation pipelines. Monitor and maintain ML models in production to ensure optimal performance. 
Write clear and comprehensive documentation for ML models, processes, and pipelines. Stay updated with the latest developments in ML and AI and propose innovative solutions.

Posted 1 week ago

Apply

2.0 years

1 - 5 Lacs

Ahmedabad

On-site

Experience: 2+ years in AI/ML, with hands-on development & leadership Key Responsibilities: ● Architect, develop, and deploy AI/ML solutions across various business domains. ● Research and implement cutting-edge deep learning, NLP, and computer vision models. ● Optimize AI models for performance, scalability, and real-time inference. ● Develop and manage data pipelines, model training, and inference workflows. ● Integrate AI solutions into microservices and APIs using scalable architectures. ● Lead AI-driven automation and decision-making systems. ● Ensure model monitoring, explainability, and continuous improvement in production. ● Collaborate with data engineering, software development, and DevOps teams. ● Stay updated with LLMs, transformers, federated learning, and AI ethics. ● Mentor AI engineers and drive AI research & development initiatives. Technical Requirements: ● Programming: Python (NumPy, Pandas, Scikit-learn). ● Deep Learning Frameworks: TensorFlow, PyTorch, JAX. ● NLP & LLMs: Hugging Face Transformers, BERT, GPT models, RAG, fine-tuning LLMs. ● Computer Vision: OpenCV, YOLO, Faster R-CNN, Vision Transformers (ViTs). ● Data Engineering: Spark, Dask, Apache Kafka, SQL/NoSQL databases. ● Cloud & MLOps: AWS/GCP/Azure, Kubernetes, Docker, CI/CD for ML pipelines. ● Optimization & Scaling: Model quantization, pruning, knowledge distillation. ● Big Data & Distributed Computing: Ray, Dask, TensorRT, ONNX. ● Security & Ethics: Responsible AI, Bias detection, Model explainability (SHAP, LIME). Preferred Qualifications: ● Experience with real-time AI applications, reinforcement learning, or edge AI. ● Contributions to AI research (publications, open-source contributions). ● Experience integrating AI with ERP, CRM, or enterprise solutions. Job Types: Full-time, Permanent Pay: ₹100,000.00 - ₹500,000.00 per year Schedule: Day shift Application Question(s): What is your current CTC? 
Experience: AI: 2 years (Required) Machine learning: 2 years (Required) Work Location: In person
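Among the optimization and scaling topics listed (quantization, pruning, knowledge distillation), model quantization is easy to demonstrate concretely. A minimal pure-Python sketch of uniform affine quantization, not any particular framework's API; real toolchains add calibration, per-channel scales, and varied rounding policies:

```python
def quantize(weights, bits=8):
    # Uniform affine quantization: map floats onto integers via a scale and zero point.
    qmin, qmax = 0, 2 ** bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # avoid div-by-zero for constant weights
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
# Round-trip error is bounded by one quantization step.
assert all(abs(a - b) < scale for a, b in zip(weights, restored))
```

Storing 8-bit integers plus one scale and zero point per tensor is what shrinks a float32 model roughly 4x and speeds up integer inference, at the cost of that bounded rounding error.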

Posted 1 week ago

Apply

6.0 - 11.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. Named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Location - Bangalore Skills and Qualifications: At least 6 years' relevant experience would generally be expected to have developed the skills required for this role. 6+ years as a practitioner in data engineering or a related field. Strong programming skills in Python, with experience in data manipulation and analysis libraries (e.g., Pandas, NumPy, Dask). Proficiency in SQL and experience with relational databases (e.g., Sybase, DB2, Snowflake, PostgreSQL, SQL Server). Experience with data warehousing concepts and technologies (e.g., dimensional modeling, star schema, data vault modeling, Kimball methodology, Inmon methodology, data lake design). 
Familiarity with ETL/ELT processes and tools (e.g., Informatica PowerCenter, IBM DataStage, Ab Initio) and open-source frameworks for data transformation (e.g., Apache Spark, Apache Airflow). Experience with message queues and streaming platforms (e.g., Kafka, RabbitMQ). Experience with version control systems (e.g., Git). Experience using Jupyter notebooks for data exploration, analysis, and visualization. Excellent communication and collaboration skills. Ability to work independently and as part of a geographically distributed team. Nice to have: Understanding of cloud-based application development & DevOps. Understanding of business intelligence tools such as Tableau and Power BI. Understanding of the trade lifecycle / financial markets. If you are keen to join us, you will be part of an organization that values your contributions, recognizes your potential, and provides ample opportunities for growth. For more information, visit www.capco.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube.
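The ETL/ELT and dimensional-modeling requirements above can be illustrated end to end with the standard-library `sqlite3` module: load raw rows, then transform them into an aggregated fact table. Table and column names are invented for illustration; a real warehouse would use Snowflake or PostgreSQL with an orchestration tool such as Airflow:

```python
import sqlite3

# Extract/Load: land raw trade records in a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (day TEXT, symbol TEXT, qty INTEGER, px REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [("2025-07-01", "INFY", 10, 1500.0),
     ("2025-07-01", "INFY", 5, 1510.0),
     ("2025-07-01", "TCS", 2, 3900.0)],
)

# Transform: aggregate into a star-schema-style daily fact table (ELT in SQL).
conn.execute("""
    CREATE TABLE daily_volume AS
    SELECT day, symbol, SUM(qty) AS total_qty, SUM(qty * px) / SUM(qty) AS vwap
    FROM trades GROUP BY day, symbol
""")
rows = conn.execute(
    "SELECT symbol, total_qty, vwap FROM daily_volume ORDER BY symbol"
).fetchall()
print(rows)
```

Doing the transform inside the database after loading, rather than before, is precisely what distinguishes ELT from classic ETL, and it is why strong SQL sits next to Python in the requirements.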

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities Advanced Model Development: Design and implement cutting-edge deep learning models using frameworks like PyTorch and TensorFlow to address specific business challenges. AI Agent and Chatbot Development: Create conversational AI agents capable of delivering seamless, human-like interactions, from foundational models to fine-tuning chatbots tailored to client needs. Retrieval-Augmented Generation (RAG): Develop and optimize RAG models, enhancing AI’s ability to retrieve and synthesize relevant information for accurate responses. Framework Expertise: Leverage LLAMAIndex and LangChain frameworks for building agent-driven applications that interact with large language models (LLMs). Data Infrastructure: Expertise in managing and utilizing data lakes, data warehouses (including Snowflake), and Databricks for large-scale data storage and processing. Machine Learning Operations (MLOps): Manage the full lifecycle of machine learning projects, from data preprocessing and feature engineering through model training, evaluation, and deployment, with a solid understanding of MLOps practices. Data Analysis & Insights: Conduct advanced data analysis to uncover actionable insights and support data-driven strategies across the organization. Cross-Functional Collaboration: Partner with cross-departmental stakeholders to align AI initiatives with business needs, developing scalable AI-driven solutions. Mentorship & Leadership: Guide junior data scientists and engineers, fostering innovation, skill growth, and continuous learning within the team. Research & Innovation: Stay at the forefront of AI and deep learning advancements, experimenting with new techniques to improve model performance and enhance business value. Reporting & Visualization: Develop and present reports, dashboards, and visualizations to effectively communicate findings to both technical and non-technical audiences. 
Cloud-Based AI Deployment: Utilize AWS Bedrock, including tools like Mistral and Anthropic Claude, to deploy and manage AI models at scale, ensuring optimal performance and reliability. Web Framework Integration: Build and deploy AI-powered applications using web frameworks such as Django and Flask, enabling seamless API integration and scalable backend services. Technical Skills Deep Learning & Machine Learning: Extensive hands-on experience with PyTorch, TensorFlow, and scikit-learn, along with large-scale data processing. Programming & Data Engineering: Strong programming skills in Python or R, with knowledge of big data technologies such as Hadoop, Spark, and advanced SQL. Data Infrastructure: Proficiency in managing and utilising data lakes, data warehouses, and Databricks for large-scale data processing and storage. MLOps & Data Handling: Familiar with MLOps and experienced in data handling tools like pandas and dask for efficient data manipulation. Cloud Computing: Advanced understanding of cloud platforms, especially AWS, for scalable AI/ML model deployment. AWS Bedrock: Expertise in deploying models on AWS Bedrock, with tools such as Mistral and Anthropic Claude. AI Frameworks: Skilled in LLAMAIndex and LangChain, with practical experience in agent-based applications. Data Visualization: Proficient in visualization tools like Tableau, Power BI for clear data presentation. Analytical & Communication Skills: Strong problem-solving abilities with the capability to convey complex technical concepts to diverse audiences. Team Collaboration & Leadership: Proven success in collaborative team environments, with experience in mentorship and leading innovative data science projects. Qualifications Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related field. Experience: Specializing in deep learning, including extensive experience in PyTorch and TensorFlow. 
Advanced AI Knowledge: Familiarity with reinforcement learning, NLP, and generative models. Benefits: Friendly Work Environment Work-Life Balance Company-Sponsored Medical Insurance 5-Day Work Week with Flexible Timings Frequent Team Outings Yearly Leave Encashment Location: Ahmedabad

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category Engineering Experience Principal Associate Primary Address Bangalore, Karnataka Overview Voyager (94001), India, Bangalore, Karnataka Principal Associate - Fullstack Engineering Job Description Generative AI Observability & Governance for ML Platform At Capital One India, we work in a fast-paced and intellectually rigorous environment to solve fundamental business problems at scale. Using advanced analytics, data science and machine learning, we derive valuable insights about product and process design, consumer behavior, regulatory and credit risk, and more from large volumes of data, and use them to build cutting-edge patentable products that drive the business forward. We’re looking for a Principal Associate, Full Stack to join the Machine Learning Experience (MLX) team! As a Capital One Principal Associate, Full Stack, you'll be part of a team focusing on observability and model governance automation for cutting-edge generative AI use cases. You will work on building solutions to collect metadata, metrics and insights from the large-scale GenAI platform, and build intelligent solutions to derive deep insights into the platform's use-case performance and compliance with industry standards. You will contribute to building a system to do this for Capital One models, accelerating the move from fully trained models to deployable model artifacts ready to fuel business decisioning, and to building an observability platform to monitor the models and platform components. The MLX team is at the forefront of how Capital One builds and deploys well-managed ML models and features. We onboard and educate associates on the ML platforms and products that the whole company uses. We drive new innovation and research, and we’re working to seamlessly infuse ML into the fabric of the company. The ML experience we're creating today is the foundation that enables each of our businesses to deliver next-generation ML-driven products and services for our customers. 
What You’ll Do: Lead the design and implementation of observability tools and dashboards that provide actionable insights into platform performance and health. Leverage Generative AI models and fine-tune them to enhance observability capabilities, such as anomaly detection, predictive analytics, and a troubleshooting copilot. Build and deploy well-managed core APIs and SDKs for observability of LLMs and proprietary Gen-AI Foundation Models, including training, pre-training, fine-tuning and prompting. Work with model and platform teams to build systems that ingest large amounts of model and feature metadata and runtime metrics to build an observability platform and to make governance decisions that ensure ethical use, data integrity, and compliance with industry standards for Gen-AI. Partner with product and design teams to develop and integrate advanced observability tools tailored to Gen-AI. Collaborate as part of a cross-functional Agile team with data scientists, ML engineers, and other stakeholders to understand requirements and translate them into scalable and maintainable solutions. Bring a research mindset; lead proofs of concept to showcase capabilities of large language models in the realm of observability and governance, enabling practical production solutions that improve platform users' productivity. Basic Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. At least 4 years of experience designing and building data-intensive solutions using distributed computing, with a deep understanding of microservices architecture. At least 4 years of experience programming with Python, Go, or Java. Proficiency in observability tools such as Prometheus, Grafana, ELK Stack, or similar, with a focus on adapting them for Gen-AI systems. Excellent knowledge of OpenTelemetry and prior experience building SDKs and APIs. Hands-on experience with Generative AI models and their application in observability or related areas. 
Excellent knowledge in Open Telemetry and priority experience in building SDKs and APIs. At least 2 years of experience with cloud platforms like AWS, Azure, or GCP. Preferred Qualifications: At least 4 years of experience building, scaling, and optimizing ML systems At least 3 years of experience in MLOps either using open source tools like MLFlow or commercial tools At least 2 Experience in developing applications using Generative AI i.e open source or commercial LLMs, and some experience in latest open source libraries such as LangChain, haystack and vector databases like open search, chroma and FAISS. Preferred prior experience in leveraging open source libraries for observability such as langfuse, phoenix, openInference, helicone etc. Contributed to open source libraries specifically GEN-AI and ML solutions Authored/co-authored a paper on a ML technique, model, or proof of concept Preferred experience with an industry recognized ML framework such as scikit-learn, PyTorch, Dask, Spark, or TensorFlow. Prior experience in NVIDIA GPU Telemetry and experience in CUDA Knowledge of data governance and compliance, particularly in the context of machine learning and AI systems. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City’s Fair Chance Act; Philadelphia’s Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. 
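The observability work this role centers on amounts to capturing latency, success, and error signals around model calls. A minimal sketch of that idea, assuming only an in-process metrics store (every name below is invented for illustration; a production system would export these metrics via OpenTelemetry or Prometheus rather than a dict):

```python
import time
from collections import defaultdict
from functools import wraps

# Hypothetical in-process metrics store; a real platform would ship these
# samples to an OpenTelemetry collector or a Prometheus scrape endpoint.
METRICS = defaultdict(list)

def observe(name):
    """Record latency and success/error counts for each call to fn."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                METRICS[f"{name}.success"].append(1)
                return result
            except Exception:
                METRICS[f"{name}.error"].append(1)
                raise
            finally:
                METRICS[f"{name}.latency_s"].append(time.perf_counter() - start)
        return wrapper
    return decorator

@observe("llm.generate")
def fake_llm_call(prompt: str) -> str:
    # Stand-in for a real model invocation.
    return prompt.upper()

fake_llm_call("hello")
print(len(METRICS["llm.generate.latency_s"]))  # → 1
```

The decorator pattern keeps instrumentation out of the model-calling code itself, which is the same separation the posting's "core APIs and SDKs for observability" implies.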
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC). How We Hire We take finding great coworkers pretty seriously. Step 1 Apply It only takes a few minutes to complete our application and assessment. Step 2 Screen and Schedule If your application is a good match you’ll hear from one of our recruiters to set up a screening interview. Step 3 Interview(s) Now’s your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you. Step 4 Decision The team will discuss — if it’s a good fit for us and you, we’ll make it official! How to Pick the Perfect Career Opportunity Overwhelmed by a tough career choice? 
Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence. Your wellbeing is our priority Our benefits and total compensation package are designed for the whole person, caring for both you and your family. Healthy Body, Healthy Mind You have options, and we have the tools to help you decide which health plans best fit your needs. Save Money, Make Money Secure your present, plan for your future and reduce expenses along the way. Time, Family and Advice Options for your time, opportunities for your family, and advice along the way. It’s time to BeWell. Career Journey Here’s how the team fits together. We’re big on growth and knowing who and how coworkers can best support you.

Posted 1 week ago

Apply


4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Analyst, Inclusive Innovation & Analytics, Center for Inclusive Growth The Center for Inclusive Growth is the social impact hub at Mastercard. The organization seeks to ensure that the benefits of an expanding economy accrue to all segments of society. Through actionable research, impact data science, programmatic grants, stakeholder engagement and global partnerships, the Center advances equitable and sustainable economic growth and financial inclusion around the world. The Center’s work is at the heart of Mastercard’s objective to be a force for good in the world. 
Reporting to the Vice President, Inclusive Innovation & Analytics, the Analyst will 1) create and/or scale data, data science, and AI solutions, methodologies, products, and tools to advance inclusive growth and the field of impact data science, 2) work on the execution and implementation of key priorities to advance external and internal data for social strategies, and 3) manage operations to ensure operational excellence across the Inclusive Innovation & Analytics team. Key Responsibilities Data Analysis & Insight Generation Design, develop, and scale data science and AI solutions, tools, and methodologies to support inclusive growth and impact data science. Analyze structured and unstructured datasets to uncover trends, patterns, and actionable insights related to economic inclusion, public policy, and social equity. Translate analytical findings into insights through compelling visualizations and dashboards that inform policy, program design, and strategic decision-making. Create dashboards, reports, and visualizations that communicate findings to both technical and non-technical audiences. Provide data-driven support for convenings involving philanthropy, government, private sector, and civil society partners. Data Integration & Operationalization Assist in building and maintaining data pipelines for ingesting and processing diverse data sources (e.g., open data, text, survey data). Ensure data quality, consistency, and compliance with privacy and ethical standards. Collaborate with data engineers and AI developers to support backend infrastructure and model deployment. Team Operations Manage team operations, meeting agendas, project management, and strategic follow-ups to ensure alignment with organizational goals. Lead internal reporting processes, including the preparation of dashboards, performance metrics, and impact reports. Support team budgeting, financial tracking, and process optimization. 
Support grantees and grants management as needed. Develop briefs, talking points, and presentation materials for leadership and external engagements. Translate strategic objectives into actionable data initiatives and track progress against milestones. Coordinate key activities and priorities in the portfolio, working across teams at the Center and the business as applicable to facilitate collaboration and information sharing. Support the revamp of the Measurement, Evaluation, and Learning frameworks and workstreams at the Center. Provide administrative support as needed. Manage ad-hoc projects and event organization. Qualifications Bachelor’s degree in Data Science, Statistics, Computer Science, Public Policy, or a related field. 2–4 years of experience in data analysis, preferably in a mission-driven or interdisciplinary setting. Strong proficiency in Python and SQL; experience with data visualization tools (e.g., Tableau, Power BI, Looker, Plotly, Seaborn, D3.js). Familiarity with unstructured data processing and robust machine learning concepts. Excellent communication skills and the ability to work across technical and non-technical teams. 
Technical Skills & Tools Data Wrangling & Processing Data cleaning, transformation, and normalization techniques Pandas, NumPy, Dask, Polars Regular expressions, JSON/XML parsing, web scraping (e.g., BeautifulSoup, Scrapy) Machine Learning & Modeling Scikit-learn, XGBoost, LightGBM Proficiency in supervised/unsupervised learning, clustering, classification, regression Familiarity with LLM workflows and tools like Hugging Face Transformers, LangChain (a plus) Visualization & Reporting Power BI, Tableau, Looker Python libraries: Matplotlib, Seaborn, Plotly, Altair Dashboarding tools: Streamlit, Dash Storytelling with data and stakeholder-ready reporting Cloud & Collaboration Tools Google Cloud Platform (BigQuery, Vertex AI), Microsoft Azure Git/GitHub, Jupyter Notebooks, VS Code Experience with APIs and data integration tools (e.g., Airflow, dbt) Ideal Candidate You are a curious and collaborative analyst who believes in the power of data to drive social change. You’re excited to work with cutting-edge tools while staying grounded in the real-world needs of communities and stakeholders. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach; and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-253034
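As a concrete illustration of the data cleaning, transformation, and normalization techniques the skills list names, a minimal sketch using only the standard library (the records and field names are invented for the example; real work would likely use Pandas or Polars at scale):

```python
import re

# Invented, deliberately messy survey rows of the kind impact-data work ingests.
RAW_ROWS = [
    {"country": "  india ", "income": "1,200"},
    {"country": "INDIA", "income": "950"},
    {"country": "Kenya", "income": "n/a"},
]

def normalize(row):
    """Trim and title-case the country; strip non-digits from income."""
    country = row["country"].strip().title()
    digits = re.sub(r"[^\d]", "", row["income"])  # "1,200" -> "1200", "n/a" -> ""
    income = int(digits) if digits else None      # missing values become None
    return {"country": country, "income": income}

clean = [normalize(r) for r in RAW_ROWS]
print(clean)
# → [{'country': 'India', 'income': 1200}, {'country': 'India', 'income': 950},
#    {'country': 'Kenya', 'income': None}]
```

The same normalize-then-aggregate shape carries over directly to a Pandas `apply` or a Dask pipeline when the data no longer fits in memory.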

Posted 1 week ago

Apply


3.0 years

30 - 40 Lacs

Gurugram, Haryana, India

Remote

Experience: 3.00+ years Salary: INR 3000000-4000000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: DRIMCO GmbH) (*Note: This is a requirement for one of Uplers' clients - an AI-powered Industrial Bid Automation Company) What do you need for this opportunity? Must-have skills required: Grafana, Graph, LLM, PLM systems, Prometheus, CI/CD, Dask, Kubeflow, MLFlow, GCP, Python Programming, PyTorch, Ray, Scikit-learn, TensorFlow, Apache Spark, AWS, Azure, Docker, Kafka, Kubernetes, Machine Learning AI-powered Industrial Bid Automation Company is Looking for: We are driving the future of industrial automation and engineering by developing intelligent AI agents tailored for the manufacturing and automotive sectors. As part of our growing team, you’ll play a key role in building robust, scalable, and intelligent agentic AI products that redefine how complex engineering and requirements workflows are solved. Our highly skilled team includes researchers, technologists, entrepreneurs, and developers holding 15 patents and 20+ publications at prestigious scientific venues like ICML, ICLR, and AAAI. Founded in 2020, we are pioneering collaborative requirement assessment in industry. The combination of the founder’s deep industry expertise, an OEM partnership with Siemens, multi-patented AI technologies and VC backing positions us as the thought leader in the field of requirement intelligence. 🔍 Role Description Design, build, and optimize ML models for intelligent requirement understanding and automation. Develop scalable, production-grade AI pipelines and APIs. Own the deployment lifecycle, including model serving, monitoring, and continuous delivery. Collaborate with data engineers and product teams to ensure data integrity, performance, and scalability. 
Work on large-scale data processing and real-time pipelines. Contribute to DevOps practices such as containerization, CI/CD pipelines, and cloud deployments. Analyze and improve the efficiency and scalability of ML systems in production. Stay current with the latest AI/ML research and translate innovations into product enhancements. 🧠 What are we looking for 3+ years of experience in ML/AI engineering with shipped products. Proficient in Python (e.g., TensorFlow, PyTorch, scikit-learn). Strong software engineering practices: version control, testing, documentation. Experience with MLOps tools (e.g., MLflow, Kubeflow) and model deployment techniques. Familiarity with Docker, Kubernetes, CI/CD, and cloud platforms (AWS, Azure, or GCP). Experience working with large datasets, data wrangling, and scalable data pipelines (Apache Spark, Kafka, Ray, Dask, etc.). Good understanding of microservices, distributed systems and model performance optimization. Comfortable in a fast-paced startup environment; proactive and curious mindset. 🎯 Bonus Points: Experience with natural language processing, document understanding, or LLM (Large Language Model). Experience with Knowledge Graph technologies Experience with logging/monitoring tools (e.g., Prometheus, Grafana). Knowledge of requirement engineering or PLM systems. ✨ What we offer: Attractive Compensation Work on impactful AI products solving real industrial challenges. A collaborative, agile, and supportive team culture. Flexible work hours and location (hybrid/remote). How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. 
We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
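The deployment-lifecycle ownership the role description asks for (registering a trained model artifact, then loading a pinned version for serving) can be sketched in miniature. MLflow and Kubeflow provide production-grade versions of this pattern; every name below is invented for illustration:

```python
import hashlib
import json

class Registry:
    """Toy model registry: versions are content hashes of the stored params,
    so re-registering identical params yields the same version string."""

    def __init__(self):
        self._store = {}

    def register(self, name, params):
        payload = json.dumps(params, sort_keys=True).encode()
        version = hashlib.sha256(payload).hexdigest()[:8]  # short content hash
        self._store[(name, version)] = params
        return version

    def load(self, name, version):
        # Serving code loads a pinned version, never "latest".
        return self._store[(name, version)]

reg = Registry()
v = reg.register("bid-classifier", {"lr": 0.01, "depth": 6})
print(v, reg.load("bid-classifier", v))
```

Pinning serving to an explicit version is what makes rollbacks and A/B comparisons cheap, which is the practical point of tools like MLflow's model registry.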

Posted 1 week ago

Apply
