
1481 Matplotlib Jobs - Page 43

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Details: We are seeking a highly motivated and enthusiastic Junior Data Scientist with 2-3 years of experience to join our data science team. This role offers an exciting opportunity to contribute to both traditional machine learning projects for our commercial IoT platform (EDGE Live) and cutting-edge Generative AI initiatives.
Position: Data Scientist
Division & Department: Enabling Functions_Business Technology Group (BTG)
Reporting To: Customer & Commercial Experience Products Leader
Educational Qualifications: Bachelor's degree in Mechanical Engineering, Computer Science, Data Science, Mathematics, or a related field.
Experience:
- 2-3 years of hands-on experience with machine learning
- Exposure to Generative AI concepts and techniques, such as Large Language Models (LLMs) and RAG architecture
- Experience in manufacturing and with an IoT platform is preferable
Role and Responsibilities:
Machine Learning (ML):
- Assist in the development and implementation of machine learning models using frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Help with Python development to integrate models with the overall application.
- Monitor and evaluate model performance using appropriate metrics and techniques.
Generative AI:
- Build GenAI-based tools for various business use cases by fine-tuning and adapting pre-trained generative models.
- Support the exploration of and experimentation with Generative AI models.
Research & Learning:
- Stay up to date with the latest advancements and help with POCs.
- Proactively research and propose new techniques and tools to improve our data science capabilities.
Collaboration and Communication:
- Work closely with cross-functional teams, including product managers, engineers, and business stakeholders, to understand requirements and deliver impactful solutions.
- Communicate findings, model performance, and technical concepts to both technical and non-technical audiences.
Technical Competencies:
- Programming: Proficiency in Python, with experience in libraries like NumPy, pandas, and Matplotlib for data manipulation and visualization.
- ML Frameworks: Experience with TensorFlow, PyTorch, or scikit-learn.
- Cloud & Deployment: Basic understanding of cloud platforms such as Databricks, Google Cloud Platform (GCP), or Microsoft Azure for model deployment.
- Data Processing & Evaluation: Knowledge of data preprocessing, feature engineering, and evaluation metrics such as accuracy, F1-score, and RMSE (see the metrics sketch after this listing).
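The technical competencies above culminate in evaluation metrics such as accuracy, F1-score, and RMSE. As a rough illustration only (the synthetic data and model choices are assumptions, not part of the posting), here is how those metrics are typically computed with scikit-learn:

```python
# Minimal sketch: train a classifier and a regressor on synthetic data,
# then report accuracy, F1-score, and RMSE (illustrative setup only).
import numpy as np
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, f1_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Classification: accuracy and F1
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("F1-score:", f1_score(y_te, pred))

# Regression: RMSE
Xr, yr = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, test_size=0.2, random_state=0)
reg = RandomForestRegressor(random_state=0).fit(Xr_tr, yr_tr)
rmse = np.sqrt(mean_squared_error(yr_te, reg.predict(Xr_te)))
print("RMSE:", rmse)
```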

Posted 2 months ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Praxair India Private Limited | Business Area: Digitalisation
Data Scientist for AI Products (Global)
Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348
It's about Being What's next. What's in it for you?
A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, which is engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team with extending existing and building new AI products for a vast number of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team composed of Project Managers, Data Scientists, and Data and Software Engineers in the AI team, together with others in Linde's global AI team. At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.
Making an impact. What will you do?
- You will work directly with a variety of different data sources, types and structures to derive actionable insights
- Developing, customizing and managing AI software products based on Machine and Deep Learning backends will be among your tasks
- Your role includes strong support on replicating existing products and pipelines to other systems and geographies
- In addition, you will support architectural design and the definition of data requirements for new developments
- It will be your responsibility to interact with business functions to identify opportunities with potential business impact and to support the development and deployment of models into production
Winning in your role. Do you have what it takes?
- You have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research or a related field
- You have a strong understanding of and practical experience with multivariate statistics, machine learning and probability concepts
- You have gained experience in articulating business questions and using quantitative techniques to arrive at a solution using available data
- You demonstrate hands-on experience with preprocessing, feature engineering, feature selection and data cleansing on real-world datasets (see the pipeline sketch after this listing)
- Preferably you have work experience in an engineering or technology role
- You bring a strong background in Python and in handling large data sets using SQL in a business environment (pandas, numpy, matplotlib, seaborn, sklearn, keras, tensorflow, pytorch, statsmodels, etc.)
- In addition, you have sound knowledge of data architectures and concepts and practical experience in the visualization of large datasets, e.g. with Tableau or Power BI
- A result-driven mindset and excellent communication skills with high social competence give you the ability to structure a project from idea to experimentation to prototype to implementation
- Very good English language skills are required
- As a plus, you have hands-on experience with DevOps and MS Azure, experience with Azure ML, Kedro or Airflow, and experience with MLflow or similar
Why you will love working for us!
Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies and services which make our customers more successful and help to sustain and protect our planet. On 1 April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company will provide Operations and Management (O&M) services to both existing organizations, which will continue to operate separately. LSAS carries forward the commitment towards sustainable development championed by both legacy organizations. It also takes ahead the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals. Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential while making a positive impact in the world. Be Linde. Be Limitless.
Have we inspired you? Let's talk about it! We are looking forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used of course apply to persons of all genders; the form of speech used here is for simplicity only. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society and the environment in every one of its business areas, regions and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.
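The posting above stresses preprocessing, feature engineering and data cleansing with the pandas/scikit-learn stack. A minimal sketch of such a pipeline, with column names and values invented purely for illustration:

```python
# Minimal preprocessing/feature-engineering sketch (illustrative columns only).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "plant": ["A", "B", "A", "C", np.nan],
    "flow_rate": [10.2, 11.5, None, 9.8, 10.9],
    "pressure": [1.1, 1.3, 1.2, None, 1.0],
    "alarm": [0, 1, 0, 1, 0],          # target
})

numeric = ["flow_rate", "pressure"]
categorical = ["plant"]

preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("onehot", OneHotEncoder(handle_unknown="ignore"))]), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[numeric + categorical], df["alarm"])
print(model.predict(df[numeric + categorical]))
```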

Posted 2 months ago


0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Zuddl is a modular platform for events and webinars that helps event marketers plan and execute events that drive growth. Event teams from global organizations like Microsoft, Google, ServiceNow, Zylo, Postman, TransPerfect and the United Nations trust Zuddl. Our modular approach to event management lets B2B marketers and conference organizers decide which components they need to build the perfect event and scale their event program. Zuddl is an outcome-oriented platform with a focus on flexibility, and is more partner, less vendor.
Funding: Zuddl, part of the Y Combinator 2020 batch, has raised $13.35 million in Series A funding led by Alpha Wave Incubation and Qualcomm Ventures, with participation from our existing investors GrowX Ventures and Waveform Ventures.
What You'll Do:
- Prototype LLM-powered features using frameworks like LangChain and the OpenAI Agents SDK to power content automation and intelligent workflows.
- Build and optimize Retrieval-Augmented Generation (RAG) systems: document ingestion, chunking, embedding with vector DBs, and LLM integration (see the retrieval sketch after this listing).
- Work with vector databases to implement similarity search for use cases like intelligent Q&A, content recommendation, and context-aware responses.
- Experiment with prompt engineering and fine-tuning techniques.
- Deploy LLM-based microservices and agents using Docker, Kubernetes and CI/CD best practices.
- Analyze model metrics, document findings, and suggest improvements based on quantitative evaluations.
- Collaborate across functions, including product, design, and engineering, to align AI features with business needs and enhance user impact.
Requirements:
- Strong Python programming skills.
- Hands-on experience with LLMs: building, fine-tuning, or applying large language models.
- Familiarity with agentic AI frameworks such as LangChain or the OpenAI Agents SDK (or any relevant tool).
- Understanding of RAG architectures and prior implementation in projects or prototypes.
- Experience with vector databases like FAISS, OpenSearch, etc.
- Portfolio of LLM-based projects, demonstrated via GitHub, notebooks, or other code samples.
Good to Have:
- Capability to build full-stack web applications.
- Data analytics skills: data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn/Tableau), and statistical analysis.
- Experience with PostgreSQL, Metabase or similar tools/databases.
- Strong ML fundamentals: regression, classification, clustering, deep learning techniques.
- Experience building recommender systems or hybrid ML solutions.
- Experience with deep learning frameworks: PyTorch, TensorFlow (or any relevant tool).
- Exposure to MLOps/DevOps tooling: Docker, Kubernetes, MLflow, Kubeflow (or any relevant tool).
Why You Want To Work Here:
- Opportunity to convert to a full-time role after the internship tenure, based on performance and organisational requirements.
- A culture built on trust, transparency, and integrity.
- Ground-floor opportunity at a fast-growing Series A startup.
- Competitive stipend.
- Work on AI-first features in an event-tech startup with global customers.
- Thrive in a remote-first, empowering culture fueled by ownership and trust.
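The RAG responsibilities above (chunking, embedding, vector search, LLM integration) can be illustrated with a small retrieval sketch using FAISS. The embed() function below is a hypothetical stand-in for a real embedding model and the documents are invented, so treat this as the shape of the workflow rather than Zuddl's implementation:

```python
# Minimal RAG retrieval sketch: embed chunks, index them in FAISS,
# then fetch the top-k most similar chunks for a query.
# embed() is a placeholder for a real embedding model.
import faiss
import numpy as np

def embed(texts, dim=64):
    # Placeholder embeddings: deterministic-per-text random vectors.
    vecs = np.vstack([
        np.random.default_rng(abs(hash(t)) % (2**32)).standard_normal(dim)
        for t in texts
    ]).astype("float32")
    faiss.normalize_L2(vecs)   # cosine similarity via inner product on unit vectors
    return vecs

chunks = [
    "Zuddl supports webinars and virtual events.",
    "Registration data can be exported via the API.",
    "Speakers can share slides during live sessions.",
]
index = faiss.IndexFlatIP(64)      # inner-product index
index.add(embed(chunks))

query_vec = embed(["How do I export registration data?"])
scores, ids = index.search(query_vec, 2)
retrieved = [chunks[i] for i in ids[0]]
print(retrieved)   # these chunks would be passed to the LLM as context
```

With a real embedding model the retrieved chunks would be semantically relevant; here the placeholder vectors only demonstrate the indexing and search mechanics.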

Posted 2 months ago


0 years

0 Lacs

Pune, Maharashtra, India

On-site

Dear Associates, greetings from Tata Consultancy Services! Thank you for expressing your interest in exploring a career possibility with the TCS family.
Hiring for: Python AI/ML, MLOps
Must have: Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, Python, SQL, Snowflake
Experience: 5+ years
Location: Mumbai / Pune
If interested, kindly fill in the details below and send your resume to nitu.sadhukhan@tcs.com. Note: only eligible candidates with relevant experience will be contacted further.
Name:
Contact No:
Email id:
Current Location:
Preferred Location:
Highest Qualification (part-time / correspondence is not eligible):
Year of Passing (Highest Qualification):
Total Experience:
Relevant Experience:
Current Organization:
Notice Period:
Current CTC:
Expected CTC:
PAN Number:
Gap in years, if any (Education / Career):
Updated CV attached (Yes / No)?
If attended any interview with TCS in the last 6 months:
Available for walk-in drive on 14th June (Pune):
Thanks & Regards,
Nitu Sadhukhan
Talent Acquisition Group, Tata Consultancy Services
Let's connect: linkedin.com/in/nitu-sadhukhan-16a580179 | Nitu.sadhukhan@tcs.com

Posted 2 months ago


0 years

0 - 0 Lacs

Cochin

On-site

We are seeking a dynamic and experienced AI Trainer with expertise in Machine Learning, Deep Learning, and Generative AI, including LLMs (Large Language Models). The candidate will train students and professionals in real-world applications of AI/ML as well as the latest trends in GenAI such as ChatGPT, LangChain, Hugging Face Transformers, Prompt Engineering, and RAG (Retrieval-Augmented Generation).
Key Responsibilities:
- Deliver hands-on training sessions in AI, ML, Deep Learning, and Generative AI.
- Teach the fundamentals and implementation of algorithms like regression, classification, clustering, decision trees, neural networks, CNNs, and RNNs.
- Train students in LLMs (e.g., OpenAI GPT, Meta LLaMA, Google Gemini) and prompt engineering techniques (see the prompting sketch after this listing), covering LangChain, Hugging Face Transformers, LLM APIs (OpenAI, Cohere, Anthropic, Google Vertex AI), vector databases (FAISS, Pinecone, Weaviate), and RAG pipelines.
- Design and evaluate practical labs and capstone projects (e.g., chatbot, image generator, smart assistants).
- Keep training materials updated with the latest industry developments and tools.
- Provide mentorship for student projects and support during hackathons or workshops.
Required Skills:
- AI/ML Core: Python, NumPy, pandas, scikit-learn, Matplotlib, Jupyter; good knowledge of Machine Learning and Deep Learning algorithms.
- Deep Learning: TensorFlow / Keras / PyTorch; OpenCV (for Computer Vision), NLTK/spaCy (for NLP).
- Generative AI & LLMs: prompt engineering (zero-shot, few-shot, chain-of-thought); LangChain and LlamaIndex (RAG frameworks); Hugging Face Transformers; OpenAI API, Cohere, Anthropic, Google Gemini, etc.; vector DBs like FAISS, ChromaDB, Pinecone, Weaviate; Streamlit, Gradio (for app prototyping).
Qualifications:
- B.E/B.Tech/M.Tech/M.Sc in AI, Data Science, Computer Science, or a related field.
- Practical experience in AI/ML, LLM, or GenAI projects.
- Previous experience as a developer/trainer/corporate instructor is a plus.
Salary / Remuneration: ₹30,000 – ₹75,000 per month based on experience and engagement type.
Job Type: Full-time. Pay: ₹30,000.00 - ₹75,000.00 per month. Schedule: Day shift.
Application Questions:
- How many years of experience do you have?
- Can you commute to Kakkanad, Kochi?
- What is your expected salary?
Work Location: In person
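Since the trainer is expected to teach prompt-engineering patterns such as few-shot prompting, here is a small sketch using the OpenAI Python client; the model name and the example reviews are assumptions chosen only to show the pattern, not part of the posting:

```python
# Few-shot prompting sketch with the OpenAI Python client (model name assumed).
# The worked examples in the message list steer the model toward a fixed output format.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot = [
    {"role": "system", "content": "Classify the sentiment of each review as positive or negative."},
    {"role": "user", "content": "Review: The workshop was engaging and practical."},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: The labs were confusing and poorly documented."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Review: The mentor explained RAG pipelines really clearly."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=few_shot)
print(response.choices[0].message.content)  # expected: "positive"
```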

Posted 2 months ago


10.0 years

0 Lacs

Chennai

On-site

As a Principal AI Engineer, you will be part of a high-performing team working on exciting opportunities in AI within Ford Credit. We are looking for a highly skilled, technical, hands-on AI engineer with a solid background in building end-to-end AI applications, a strong aptitude for learning, and a habit of keeping up with the latest advances in AI: Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), Natural Language Processing, and Generative AI (LLMs, LangChain, RAG, vector databases). The candidate should be able to lead technical discussions and act as a technical mentor for the team.
Professional Experience:
- 10+ years of strong working experience in AI.
- BE/MSc/MTech/ME/PhD (Computer Science, Maths, Statistics).
- A strong analytical mindset and comfort with data.
- Experience handling both relational and non-relational data.
- Hands-on experience with analytics methods (descriptive/predictive/prescriptive), statistical analysis, probability, and data visualization tools (Python: Matplotlib, Seaborn).
- A software engineering background with excellent data science working experience.
Technical Experience:
- Develop Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), Natural Language Processing, and Generative AI (LLMs, LangChain, RAG, vector databases) solutions (see the LSTM sketch after this listing).
- Excellent communication and presentation skills; ability to manage stakeholders.
- Ability to collaborate with a cross-functional team involving data engineers, solution architects, application engineers, and product teams across time zones to develop data and model pipelines.
- Ability to drive and mentor the team technically, leveraging cutting-edge AI and Machine Learning principles, and develop production-ready AI solutions.
- Mentor the team of data scientists and assume responsibility for the delivery of use cases.
- Ability to scope the problem statement, prepare data, train models, and make AI models production-ready.
- Work with business partners to understand the problem statement and translate it into an analytical problem.
- Ability to manipulate structured and unstructured data.
- Develop, test and improve existing machine learning models.
- Analyse large and complex data sets to derive valuable insights.
- Research and implement best practices to enhance existing machine learning infrastructure.
- Develop prototypes for future exploration.
- Design and evaluate approaches for handling large volumes of real data streams.
- Ability to determine the appropriate analytical methods to be used.
- Understanding of statistics and hypothesis testing.
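The role above asks for hands-on neural network experience, for example LSTMs for NLP. As a rough, self-contained sketch, with hyperparameters and shapes that are arbitrary illustrations rather than Ford's architecture, here is a minimal PyTorch LSTM classifier and a forward pass on dummy token IDs:

```python
# Minimal PyTorch LSTM classifier sketch (hyperparameters are illustrative only).
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)          # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)             # h_n: (1, batch, hidden_dim)
        return self.fc(h_n[-1])                  # logits: (batch, num_classes)

model = LSTMClassifier()
dummy_batch = torch.randint(0, 1000, (4, 12))    # 4 sequences of 12 token IDs
logits = model(dummy_batch)
print(logits.shape)                              # torch.Size([4, 2])
```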

Posted 2 months ago


8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask (see the Flask sketch after this listing)
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
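The core requirement in the listing above is building RESTful APIs with Python and Flask. A minimal sketch of such a service; the route names, payload fields and in-memory store are illustrative assumptions, not EY's actual system:

```python
# Minimal Flask REST API sketch (route and payload are illustrative).
from flask import Flask, jsonify, request

app = Flask(__name__)
RISK_SCORES = {}  # in-memory store standing in for a real database

@app.route("/scores/<entity_id>", methods=["GET"])
def get_score(entity_id):
    if entity_id not in RISK_SCORES:
        return jsonify({"error": "not found"}), 404
    return jsonify({"entity_id": entity_id, "score": RISK_SCORES[entity_id]})

@app.route("/scores/<entity_id>", methods=["PUT"])
def put_score(entity_id):
    payload = request.get_json(force=True)
    RISK_SCORES[entity_id] = float(payload["score"])
    return jsonify({"entity_id": entity_id, "score": RISK_SCORES[entity_id]}), 201

if __name__ == "__main__":
    app.run(debug=True)
```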

Posted 2 months ago


8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management (see the generator sketch after this listing)
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
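One of the skills listed above is using Python generators for efficient memory management. A small, self-contained sketch that streams a CSV row by row instead of loading it into memory (the file and columns are invented for the demo):

```python
# Generator sketch: stream a large CSV row by row instead of loading it fully.
import csv
from typing import Iterator

def high_value_rows(path: str, threshold: float) -> Iterator[dict]:
    """Yield matching rows one at a time; memory use stays flat regardless of file size."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if float(row["amount"]) > threshold:
                yield row

if __name__ == "__main__":
    # Tiny demo file so the sketch is self-contained (a real file would be much larger).
    with open("transactions.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["txn_id", "amount"])
        writer.writerows([["t1", "120.50"], ["t2", "15000.00"], ["t3", "9800.25"]])

    # The generator is consumed lazily; no intermediate list is materialized.
    total = sum(float(row["amount"]) for row in high_value_rows("transactions.csv", 1000.0))
    print(total)   # 24800.25
```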

Posted 2 months ago


8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing (see the threading sketch after this listing)
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
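The listing above expects an understanding of the Global Interpreter Lock (GIL) and its implications for multithreading versus multiprocessing. A toy sketch of the usual rule of thumb, processes for CPU-bound work and threads for I/O-bound work; the workloads are arbitrary examples:

```python
# GIL sketch: CPU-bound work benefits from processes; I/O-bound work from threads.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n: int) -> int:
    return sum(i * i for i in range(n))        # held back by the GIL under threads

def io_bound(delay: float) -> float:
    time.sleep(delay)                          # releases the GIL while waiting
    return delay

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:        # true parallelism for CPU-bound tasks
        print(list(pool.map(cpu_bound, [200_000] * 4)))

    with ThreadPoolExecutor(max_workers=4) as pool:   # concurrency is enough for I/O
        print(list(pool.map(io_bound, [0.1] * 4)))
```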

Posted 2 months ago


16.0 years

1 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
WHAT
Business Knowledge:
- Capable of understanding the requirements for the entire project (not just own features)
- Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements
- Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them
Design:
- Can design and implement machine learning models and algorithms
- Can articulate and evaluate the pros and cons of different AI/ML approaches
- Can generate cost estimates for model training and deployment
Coding/Testing:
- Builds and optimizes machine learning pipelines
- Knows and brings in external ML frameworks and libraries
- Consistently avoids common pitfalls in model development and deployment
HOW
Quality:
- Solves cross-functional problems using data-driven approaches
- Identifies impacts and side effects of models outside the immediate scope of work
- Identifies cross-module issues related to data integration and model performance
- Identifies problems predictively using data analysis
Productivity:
- Capable of working on multiple AI/ML projects simultaneously and context-switching between them
Process:
- Enforces process standards for model development and deployment
Independence:
- Acts independently to determine methods and procedures on new or special assignments
- Prioritizes large tasks and projects effectively
Agility:
- Release Planning: Works with the PO on high-level release commitment and estimation, and on defining stories of appropriate size for model development
- Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration; shows Agile leadership qualities and leads by example
WITH
Team Work:
- Capable of working with development teams and identifying the right division of technical responsibility based on skill sets
- Capable of working with external teams (e.g., Support, PO) that have significantly different technical skill sets and managing the discussions based on their needs
Initiative:
- Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution
- Capable of thinking outside the box to view the system as it should be rather than only as it is
- Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense
- Takes initiative to learn how AI/ML technology is evolving outside the organization
- Takes initiative to learn how the system can be improved for customers
- Turns problems into openings for innovation
Communication:
- Communicates complex AI/ML concepts internally with ease
Accountability:
- Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play
Leadership:
- Disagrees without being disagreeable; uses conflict as a way to drill deeper and arrive at better decisions
- Provides frequent mentorship
- Builds ad-hoc cross-department teams for specific projects or problems
- Can achieve broad 'buy-in' across project teams and across departments
- Takes calculated risks
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
- B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
- 5+ years of experience working on multiple layers of technology
- Experience deploying and maintaining ML models in production
- Experience in Agile teams
- Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
- Working experience with, or good knowledge of, cloud platforms (e.g., Azure, AWS, OCI)
- Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
- Familiarity with traditional software monitoring, scaling, and quality management (QMS)
- Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms (see the MLflow sketch after this listing)
- Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
- Demonstrated hands-on knowledge of open-source adoption and use cases
- Good understanding of data/information security
- Proficiency in data structures, ML algorithms, and the ML lifecycle
Product/Project/Program Related Tech Stack:
- Machine Learning frameworks: scikit-learn, TensorFlow, PyTorch
- Programming languages: Python, R, Java
- Data processing: Pandas, NumPy, Spark
- Visualization: Matplotlib, Seaborn, Plotly
- Model versioning tools (MLflow, etc.)
- Cloud services: Azure ML, AWS SageMaker, Google Cloud AI
- GenAI: OpenAI, LangChain, RAG, etc.
- Good knowledge of engineering practices, excellent problem-solving skills, and proven verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
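Among the qualifications above is model versioning and deployment with tools like MLflow. A minimal, hedged sketch of logging a scikit-learn run; the experiment name, parameter and metric are arbitrary examples, not Optum's pipeline:

```python
# Minimal MLflow tracking sketch (experiment name and values are illustrative).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-classifier")
with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=500).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    mlflow.log_param("C", C)                       # hyperparameter
    mlflow.log_metric("accuracy", acc)             # evaluation metric
    mlflow.sklearn.log_model(model, "model")       # versioned model artifact
    print("logged run with accuracy", acc)
```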

Posted 2 months ago


2.0 years

0 Lacs

Surat, Gujarat, India

On-site

We're hiring a Python Developer with a strong understanding of Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying scalable AI/ML solutions and Python-based backend applications. Note: only Surat (Gujarat)-based candidates should apply for this job.
Role Expectations:
- Develop and maintain robust Python code for backend and AI/ML applications.
- Design and implement machine learning models for prediction, classification, recommendation, etc.
- Work on data preprocessing, feature engineering, model training, evaluation, and optimization.
- Collaborate with the frontend team, data scientists, and DevOps engineers to deploy ML models to production (see the persistence sketch after this listing).
- Integrate AI/ML models into web or mobile applications.
- Write clean, efficient, and well-documented code.
- Stay updated with the latest trends and advancements in Python and ML.
Soft Skills: problem-solving, analytical thinking, collaboration and teamwork, time management, attention to detail.
Required Skills:
- Proficiency in Python and Python-based libraries (NumPy, Pandas, scikit-learn, etc.).
- Hands-on experience with AI/ML model development and deployment.
- Familiarity with TensorFlow, Keras, or PyTorch.
- Strong knowledge of data structures, algorithms, and object-oriented programming.
- Experience with REST APIs and the Flask/Django frameworks.
- Basic knowledge of data visualization tools like Matplotlib or Seaborn.
- Understanding of version control tools (Git/GitHub).
Good to Have:
- Experience with cloud platforms (AWS, GCP, or Azure).
- Knowledge of NLP, computer vision, or deep learning models.
- Experience working with large datasets and databases (SQL, MongoDB).
- Familiarity with containerization tools like Docker.
Our Story: We're chasing a world where tech doesn't frustrate; it flows like a river carving its own path. Every line of code we hammer out is a brick in a future where tools don't just function, they vanish into the background, so intuitive you barely notice them working their magic. We craft software and apps that tackle real problems head-on, not just pile up shiny features for the sake of a spec sheet. It starts with listening, really listening, to the headaches, the what-ifs, and the crazy ambitions others might shrug off. Then we build smart: solutions that cut through the clutter with surgical precision, designed to fit like a glove and run like a rocket.
Unlock the Advantage:
- 5-day work week
- 12 paid leave days + public holidays
- Training and development: certifications
- Employee engagement activities: awards, community gatherings
- Good infrastructure and onsite opportunities
- Flexible working culture
Experience: 2-3 years
Job Type: Full-time (on-site)
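The role above involves training models and then integrating them into web applications. One common, minimal pattern (sketched here with synthetic data; the file and feature names are invented) is to persist the trained model with joblib and reload it in the serving layer:

```python
# Sketch: persist a trained model with joblib, then reload it for serving.
# Feature values and file names are invented for illustration.
import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# --- training step (typically run offline) ---
X, y = make_classification(n_samples=300, n_features=6, random_state=42)
model = GradientBoostingClassifier(random_state=42).fit(X, y)
joblib.dump(model, "churn_model.joblib")

# --- serving step (e.g., inside a Flask/Django view) ---
loaded = joblib.load("churn_model.joblib")

def predict_one(features: list[float]) -> int:
    """Return the predicted class for a single feature vector."""
    return int(loaded.predict(np.array(features).reshape(1, -1))[0])

print(predict_one([0.1, -1.2, 0.3, 0.8, -0.5, 1.1]))
```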

Posted 2 months ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases (see the SQLAlchemy sketch after this listing)
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
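Among the must-have skills above is SQLAlchemy for database interaction. A minimal ORM sketch against an in-memory SQLite database; the table and fields are invented for illustration, and the imports assume SQLAlchemy 1.4 or later:

```python
# Minimal SQLAlchemy ORM sketch using an in-memory SQLite database.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Alert(Base):
    __tablename__ = "alerts"
    id = Column(Integer, primary_key=True)
    entity = Column(String(50))
    severity = Column(String(10))

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([Alert(entity="ACME-42", severity="high"),
                     Alert(entity="ACME-43", severity="low")])
    session.commit()
    high = session.query(Alert).filter_by(severity="high").all()
    print([a.entity for a in high])   # -> ['ACME-42']
```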

Posted 2 months ago


8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting (see the reporting sketch after this listing)
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
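The skills above include Pandas, NumPy and Matplotlib for analytics and reporting. A tiny end-to-end sketch on synthetic data that aggregates a frame and saves a chart for a report:

```python
# Small analytics/reporting sketch with pandas, NumPy, and Matplotlib.
import matplotlib
matplotlib.use("Agg")              # render to file, no display needed
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "region": rng.choice(["APAC", "EMEA", "AMER"], size=200),
    "alerts": rng.poisson(lam=3, size=200),
})

summary = df.groupby("region")["alerts"].agg(["count", "mean", "sum"])
print(summary)

ax = summary["sum"].plot(kind="bar", title="Alerts by region")
ax.set_ylabel("Total alerts")
plt.tight_layout()
plt.savefig("alerts_by_region.png")   # chart to attach to the report
```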

Posted 2 months ago


8.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries (see the ETL sketch after this listing)
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
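The listing above also asks for ETL transformations implemented with Python libraries. A compact extract-transform-load sketch in pandas and SQLite; the file names and columns are invented placeholders:

```python
# Minimal ETL sketch with pandas: extract a CSV, transform it, load it into SQLite.
import pandas as pd
import sqlite3

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path, parse_dates=["txn_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["amount"])                       # basic cleansing
    df["month"] = df["txn_date"].dt.to_period("M").astype(str)
    return df.groupby(["month", "currency"], as_index=False)["amount"].sum()

def load(df: pd.DataFrame, db_path: str) -> None:
    with sqlite3.connect(db_path) as conn:
        df.to_sql("monthly_totals", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    # Small demo input so the sketch runs end to end.
    pd.DataFrame({
        "txn_date": ["2024-01-05", "2024-01-20", "2024-02-02"],
        "currency": ["INR", "INR", "USD"],
        "amount": [1200.0, 800.0, 300.0],
    }).to_csv("transactions.csv", index=False)

    load(transform(extract("transactions.csv")), "warehouse.db")
    print(pd.read_sql("SELECT * FROM monthly_totals", sqlite3.connect("warehouse.db")))
```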

Posted 2 months ago


8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
RCE - Risk Data Engineer - Leads
Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.
In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization for complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability (see the pytest sketch after this listing)
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions
Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Knowledge of Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and achieve parallelism in Python
- Understanding of the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries
- Familiarity with list comprehensions in Python
- Ability to collaborate with cross-functional teams to ensure successful implementation of solutions
Good to have:
- Exposure to Data Science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
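The responsibilities above include developing comprehensive test suites. A minimal pytest sketch over a small helper function; the function itself is an invented example, not EY code:

```python
# Minimal pytest sketch: a helper function plus a few focused tests.
# Save as test_scoring.py and run with `pytest`.
import pytest

def normalize_score(raw: float, low: float = 0.0, high: float = 100.0) -> float:
    """Clamp a raw score into [low, high] and round to two decimals."""
    if high <= low:
        raise ValueError("high must be greater than low")
    return round(min(max(raw, low), high), 2)

def test_within_range_is_unchanged():
    assert normalize_score(42.123) == 42.12

def test_values_are_clamped():
    assert normalize_score(-5) == 0.0
    assert normalize_score(250) == 100.0

def test_invalid_bounds_raise():
    with pytest.raises(ValueError):
        normalize_score(10, low=5, high=5)
```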

Posted 2 months ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

What You'll Be Doing:
Dashboard Development:
- Design, develop, and maintain interactive and visually compelling dashboards using Power BI.
- Implement DAX queries and data models to support business intelligence needs.
- Optimize the performance and usability of dashboards for various stakeholders.
Python & Streamlit Applications:
- Build and deploy lightweight data applications using Streamlit for internal and external users (see the Streamlit sketch after this listing).
- Integrate Python libraries (e.g., Pandas, NumPy, Plotly, Matplotlib) for data processing and visualization.
Data Integration & Retrieval:
- Connect to and retrieve data from RESTful APIs, cloud storage (e.g., Azure Data Lake, Cognite Data Fusion), and SQL/NoSQL databases.
- Automate data ingestion pipelines and ensure data quality and consistency.
Collaboration & Reporting:
- Work closely with business analysts, data engineers, and stakeholders to gather requirements and deliver insights.
- Present findings and recommendations through reports, dashboards, and presentations.
Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
- 3+ years of experience in data analytics or business intelligence roles.
- Proficiency in Power BI, including DAX, Power Query, and data modeling.
- Strong Python programming skills, especially with Streamlit, Pandas, and API integration.
- Experience with REST APIs, JSON/XML parsing, and cloud data platforms (Azure, AWS, or GCP).
- Familiarity with version control systems like Git.
- Excellent problem-solving, communication, and analytical skills.
Preferred Qualifications:
- Experience with CI/CD pipelines for data applications.
- Knowledge of DevOps practices and containerization (Docker).
- Exposure to machine learning or statistical modeling is a plus.
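The Streamlit responsibilities above (lightweight data apps over API data) can be illustrated with a short sketch; the API URL and the expected "date" column are placeholder assumptions, and the x/y arguments to st.line_chart assume a recent Streamlit release:

```python
# Minimal Streamlit sketch: pull JSON from an API, show a table and a chart.
# Run with: streamlit run app.py   (the URL below is a placeholder)
import pandas as pd
import requests
import streamlit as st

st.title("Daily metrics")

@st.cache_data(ttl=600)
def load_data() -> pd.DataFrame:
    resp = requests.get("https://example.com/api/metrics", timeout=10)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())          # expects a list of records with a "date" field

df = load_data()
metric = st.selectbox("Metric", [c for c in df.columns if c != "date"])
st.dataframe(df)
st.line_chart(df, x="date", y=metric)
```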

Posted 2 months ago


10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena’s vision is to help students become the best version of themselves. Athena’s transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various, universities including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx Role Overview We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You’ll help scholars explore the frontiers of AI—from machine learning models to generative AI systems—while coaching them in best practices and applied engineering. Key Responsibilities: Guide scholars through the full AI/ML development cycle—from problem definition, data exploration, and model selection to evaluation and deployment. Teach and assist in building: Supervised and unsupervised machine learning models. Deep learning networks (CNNs, RNNs, Transformers). NLP tasks such as classification, summarization, and Q&A systems. Provide mentorship in Prompt Engineering: Craft optimized prompts for generative models like GPT-4 and Claude. Teach the principles of few-shot, zero-shot, and chain-of-thought prompting. Experiment with fine-tuning and embeddings in LLM applications. Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or ML Ops workflows. Conduct internal training and code reviews, ensuring technical rigor in projects. Stay updated with the latest research, frameworks, and tools in the AI ecosystem. Technical Requirements: Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy. Experience with deep learning frameworks : TensorFlow, PyTorch, Keras. Strong command of machine learning theory , including: Bias-variance tradeoff, regularization, and model tuning. Cross-validation, hyperparameter optimization, and ensemble techniques. Solid understanding of data processing pipelines , data wrangling, and visualization (Matplotlib, Seaborn, Plotly). Advanced AI & NLP Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA). Hands-on with LLM APIs : OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face. Understanding of embedding-based retrieval , vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG). Familiarity with AutoML tools , MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI). Prompt Engineering & GenAI Proficiency in crafting effective prompts using: Instruction tuning Role-playing and system prompts Prompt chaining tools like LangChain or LlamaIndex Understanding of AI safety , bias mitigation, and interpretability. 
Required Qualifications:
Bachelor’s degree from a Tier-1 Engineering College in Computer Science, Engineering, or a related field.
2-5 years of relevant experience in ML/AI roles.
Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
Passion for education, mentoring, and working with high school scholars.
Excellent communication skills, with the ability to convey complex concepts to a diverse audience.
Preferred Qualifications:
Prior experience in student mentorship, teaching, or edtech.
Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.

Posted 2 months ago

Apply

3.0 years

0 Lacs

India

Remote

About BeGig
BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role; you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.
Your Opportunity
Join our network as a Data Scientist and help fast-growing startups transform data into actionable insights, predictive models, and intelligent decision-making tools. You’ll work on real-world data challenges across domains like marketing, finance, healthtech, and AI, with full flexibility to work remotely and choose the engagements that best fit your goals.
Role Overview
As a Data Scientist, you will:
Extract Insights from Data: Analyze complex datasets to uncover trends, patterns, and opportunities.
Build Predictive Models: Develop, validate, and deploy machine learning models that solve core business problems.
Communicate Clearly: Work with cross-functional teams to present findings and deliver data-driven recommendations.
What You’ll Do
Analytics & Modeling:
Explore, clean, and analyze structured and unstructured data using statistical and ML techniques.
Build predictive and classification models using tools like scikit-learn, XGBoost, TensorFlow, or PyTorch.
Conduct A/B testing, customer segmentation, forecasting, and anomaly detection.
Data Storytelling & Collaboration:
Present complex findings in a clear, actionable way using data visualizations (e.g., Tableau, Power BI, Matplotlib).
Work with product, marketing, and engineering teams to integrate models into applications or workflows.
Technical Requirements & Skills
Experience: 3+ years in data science, analytics, or a related field.
Programming: Proficient in Python (preferred), R, and SQL.
ML Frameworks: Experience with scikit-learn, TensorFlow, PyTorch, or similar tools.
Data Handling: Strong understanding of data preprocessing, feature engineering, and model evaluation.
Visualization: Familiar with visualization tools like Matplotlib, Seaborn, Plotly, Tableau, or Power BI.
Bonus: Experience working with large datasets, cloud platforms (AWS/GCP), or MLOps practices.
What We’re Looking For
A data-driven thinker who can go beyond numbers to tell meaningful stories.
A freelancer who enjoys solving real business problems using machine learning and advanced analytics.
A strong communicator with the ability to simplify complex models for stakeholders.
Why Join Us?
Immediate Impact: Work on projects that directly influence product, growth, and strategy.
Remote & Flexible: Choose your working hours and project commitments.
Future Opportunities: BeGig will continue matching you with data science roles aligned to your strengths.
Dynamic Network: Collaborate with startups building data-first, insight-driven products.
Ready to turn data into decisions? Apply now to become a key Data Scientist for our client and a valued member of the BeGig network!
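To make the modelling expectations above concrete, here is a minimal sketch (not from the posting) of a cross-validated classification pipeline in scikit-learn; the synthetic dataset is an illustrative assumption standing in for real client data.

```python
# Minimal sketch: a cross-validated classification pipeline with scikit-learn.
# A synthetic dataset stands in for whatever customer or product data a project provides.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),            # basic preprocessing / feature scaling
    ("clf", LogisticRegression(max_iter=1000)),
])

# 5-fold cross-validation gives a less optimistic estimate than a single train/test split.
scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Wrapping the scaler and model in a single Pipeline keeps preprocessing inside each cross-validation fold, which avoids leaking information from the validation data.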

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a Data Scientist, you will work with a cross-functional team to identify business challenges and provide data-driven insights. You’ll be responsible for data exploration, feature engineering, model development, and production deployment of machine learning solutions. We are seeking someone passionate about working with diverse datasets and applying machine learning techniques to deliver meaningful results.
Key Responsibilities:
• Collaborate with internal teams to understand business requirements and translate them into data-driven solutions.
• Perform exploratory data analysis, data cleaning, and transformation.
• Develop and deploy machine learning models and algorithms for predictive and prescriptive analysis.
• Conduct A/B testing and evaluate the impact of model implementations.
• Generate data visualizations and reports to communicate insights to stakeholders.
• Stay updated with the latest developments in data science, machine learning, and industry trends.
Qualifications:
• Bachelor's degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
• 4+ years of experience working as a Data Scientist or in a similar role.
• Strong proficiency in Python or R, and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch.
• Knowledge of data processing frameworks, e.g., Pandas, NumPy, Spark.
• Experience with data visualization tools like Tableau, Power BI, or Matplotlib.
• Ability to query databases using SQL and familiarity with relational databases.
• Familiarity with cloud platforms such as AWS, Azure, or GCP is a plus.
• Strong analytical, problem-solving, and communication skills.
• Fluent in Spanish and/or Portuguese, with good English proficiency.
Must-Have Skills: Excellent English Communication Skills
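Since the responsibilities above include A/B testing, the following is a hedged sketch of evaluating a conversion-rate experiment with a two-proportion z-test from statsmodels; the counts and sample sizes are invented for illustration.

```python
# Minimal sketch: evaluating an A/B test on conversion rates with a two-proportion z-test.
# The conversion counts and sample sizes below are illustrative placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 468]   # converted users in control (A) and variant (B)
visitors = [5000, 5000]    # users exposed to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; consider a larger sample.")
```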

Posted 2 months ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join us as a "Chief Control Office" Analyst at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.
You may be assessed on the key critical skills relevant for success in the role, such as experience with MS Office, SQL, Alteryx, Power Tools and Python, as well as job-specific skillsets.
To be successful as an Analyst, you should have experience with:
Basic/ Essential Qualifications
Graduate in any discipline
Experience in Controls, Governance, Reporting and Risk Management, preferably in a financial services organisation
Proficient in MS Office – PPT, Excel, Word & Visio
Proficient in SQL, Alteryx and Python
Generating data insights and dashboards from large and diverse data sets
Excellent experience with Tableau, Alteryx and MS Office (i.e. Advanced Excel, PowerPoint)
Automation skills using VBA, PowerQuery, PowerApps, etc.
Experience in using ETL tools
Good understanding of Risk and Control
Excellent communication skills (verbal and written)
Good understanding of governance and control frameworks and processes
Highly motivated, business-focused and forward thinking
Experience in senior stakeholder management
Ability to manage relationships across multiple disciplines
Desirable Skillsets/ Good To Have
Experience in data crunching/ analysis, including automation
Experience in handling RDBMS (i.e. SQL/Oracle)
Experience in Python, Data Science and Data Analytics tools and techniques (e.g. Matplotlib, data wrangling, low-code/no-code development), preferably on actual use cases in a large bank
Understanding of Data Management Principles and data governance
Designing and managing SharePoint sites
Financial Services experience
This role will be based out of Pune.
Purpose of the role
To design, develop and consult on the bank’s internal controls framework and supporting policies and standards across the organisation, ensuring it is robust, effective, and aligned to the bank’s overall strategy and risk appetite.
Accountabilities
Identification and analysis of emerging and evolving risks across functions to understand their potential impact, and likelihood.
Communication of the purpose, structure, and importance of the control framework to all relevant stakeholders, including senior management and audit.
Support to the development and implementation of the bank's internal controls framework and principles tailored to the bank’s specific needs and risk profile, including design, monitoring, and reporting initiatives.
Monitoring and maintenance of the control frameworks, to ensure compliance and adjust and update as internal and external requirements change.
Embedment of the control framework across the bank through cross collaboration, training sessions and awareness campaigns, which fosters a culture of knowledge sharing and improvement in risk management and the importance of internal control effectiveness.
Analyst Expectations
To meet the needs of stakeholders/ customers through specialist advice and support.
Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles.
Likely to have responsibility for specific processes within a team.
They may lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they manage their own workload, take responsibility for the implementation of systems and processes within their own work area and participate in projects broader than the direct team.
Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams.
Check work of colleagues within the team to meet internal and stakeholder requirements.
Provide specialist advice and support pertaining to own work area.
Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct.
Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative / operational expertise.
Make judgements based on practice and previous experience. Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements. Build relationships with stakeholders/ customers to identify and address their needs.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
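As a small, hypothetical illustration of the "data insights and dashboards" work described in this posting (not taken from Barclays material), the sketch below aggregates an invented control-testing extract with pandas and charts it with Matplotlib; the column names and figures are assumptions.

```python
# Minimal sketch: turning a raw control-testing extract into a simple insight chart.
# The invented DataFrame stands in for data that would normally come from SQL or Alteryx.
import pandas as pd
import matplotlib.pyplot as plt

records = pd.DataFrame({
    "business_unit": ["Retail", "Retail", "Markets", "Markets", "Cards", "Cards"],
    "control_status": ["Pass", "Fail", "Pass", "Fail", "Pass", "Fail"],
    "tests": [120, 8, 95, 15, 60, 3],
})

# Pivot to one row per business unit and compute a failure rate.
summary = records.pivot_table(index="business_unit", columns="control_status",
                              values="tests", aggfunc="sum", fill_value=0)
summary["fail_rate_pct"] = 100 * summary["Fail"] / (summary["Fail"] + summary["Pass"])

# A quick bar chart of the kind that could feed a dashboard or management pack.
summary["fail_rate_pct"].sort_values(ascending=False).plot(kind="bar")
plt.ylabel("Control failure rate (%)")
plt.title("Control failures by business unit")
plt.tight_layout()
plt.show()

print(summary)
```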

Posted 2 months ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Join us as a "BA4 - Control Data Analytics and Reporting" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences.
To be successful as a "BA4 - Control Data Analytics and Reporting", you should have experience with:
Basic/ Essential Qualifications
Graduate in any discipline.
Experience in Controls, Governance, Reporting and Risk Management, preferably in a financial services organization.
Proficient in MS Office – PPT, Excel, Word & Visio.
Proficient in SQL, Tableau and Python.
Generating data insights and dashboards from large and diverse data sets.
Excellent experience with Tableau, Alteryx and MS Office (i.e. Advanced Excel, PowerPoint).
Automation skills using VBA, Power Query, PowerApps, etc.
Experience in using ETL tools.
Good understanding of Risk and Control.
Excellent communication skills (verbal and written).
Good understanding of governance and control frameworks and processes.
Highly motivated, business-focused and forward thinking.
Experience in senior stakeholder management.
Ability to manage relationships across multiple disciplines.
Self-driven; proactively participates in team initiatives.
Demonstrated initiative in identifying and resolving problems.
Desirable Skillsets/ Good To Have
Experience in data crunching/ analysis, including automation.
Experience in handling RDBMS (i.e. SQL/Oracle).
Experience in Python, Data Science and Data Analytics tools and techniques (e.g. Matplotlib, data wrangling, low-code/no-code development), preferably on actual use cases in a large bank.
Understanding of Data Management Principles and data governance.
Designing and managing SharePoint sites.
Financial Services experience.
Location: Noida.
You may be assessed on the key critical skills relevant for success in the role, such as experience with MS Office, MS Power Platforms, Python and Tableau, as well as job-specific skillsets. Additional experience in Alteryx would be an added advantage.
Purpose of the role
To design, develop and consult on the bank’s internal controls framework and supporting policies and standards across the organisation, ensuring it is robust, effective, and aligned to the bank’s overall strategy and risk appetite.
Accountabilities
Identification and analysis of emerging and evolving risks across functions to understand their potential impact, and likelihood.
Communication of the purpose, structure, and importance of the control framework to all relevant stakeholders, including senior management and audit.
Support to the development and implementation of the bank's internal controls framework and principles tailored to the bank’s specific needs and risk profile, including design, monitoring, and reporting initiatives.
Monitoring and maintenance of the control frameworks, to ensure compliance and adjust and update as internal and external requirements change.
Embedment of the control framework across the bank through cross collaboration, training sessions and awareness campaigns, which fosters a culture of knowledge sharing and improvement in risk management and the importance of internal control effectiveness.
Analyst Expectations
To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
Requires in-depth technical knowledge and experience in their assigned area of expertise.
Thorough understanding of the underlying principles and concepts within the area of expertise.
They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.
Will have an impact on the work of related teams within the area.
Partner with other functions and business areas.
Takes responsibility for end results of a team’s operational processing and activities.
Escalate breaches of policies/ procedures appropriately.
Take responsibility for embedding new policies/ procedures adopted due to risk mitigation.
Advise and influence decision making within own area of expertise.
Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct.
Maintain and continually build an understanding of how own sub-function integrates with the function, alongside knowledge of the organisation’s products, services and processes within the function.
Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation’s sub-function.
Make evaluative judgements based on the analysis of factual information, paying attention to detail.
Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Guide and persuade team members and communicate complex/ sensitive information.
Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
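As a hypothetical companion to the SQL and data-wrangling skills this listing asks for (not part of the Barclays posting), this sketch queries an in-memory SQLite table and reshapes the result with pandas; the table and figures are invented.

```python
# Minimal sketch: pulling data with SQL and reshaping it with pandas.
# Uses an in-memory SQLite database so the example is self-contained.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (region TEXT, severity TEXT, open_days INTEGER);
    INSERT INTO issues VALUES
        ('APAC', 'High', 12), ('APAC', 'Low', 3),
        ('EMEA', 'High', 20), ('EMEA', 'Low', 5),
        ('AMER', 'High', 9),  ('AMER', 'Low', 2);
""")

# Aggregate in SQL, then pivot in pandas for a report-ready layout.
df = pd.read_sql_query(
    "SELECT region, severity, AVG(open_days) AS avg_open_days "
    "FROM issues GROUP BY region, severity",
    conn,
)
report = df.pivot(index="region", columns="severity", values="avg_open_days")
print(report)
```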

Posted 2 months ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Function: Data Science
Job: Machine Learning Engineer
Position: Senior
Immediate manager (N+1 job title and name): AI Manager
Additional reporting line to: Global VP Engineering
Position location: Mumbai, Pune, Bangalore, Hyderabad, Noida
1. Purpose of the Job – A simple statement to identify clearly the objective of the job.
The Senior Machine Learning Engineer is responsible for designing, implementing, and deploying scalable and efficient machine learning algorithms to solve complex business problems. The Machine Learning Engineer is also responsible for the lifecycle of models once deployed in production environments, through monitoring performance and model evolution. The position is highly technical and requires an ability to collaborate with multiple technical and non-technical profiles (data scientists, data engineers, data analysts, product owners, business experts), and to actively take part in a large data science community.
2. Organization chart – Indicate schematically the position of the job within the organization. It is sufficient to indicate one hierarchical level above (including possible functional boss) and, if applicable, one below the position. In the horizontal direction, the other jobs reporting to the same superior should be indicated.
A Machine Learning Engineer reports to the AI Manager, who reports to the Global VP Engineering.
3. Key Responsibilities and Expected Deliverables – This details what actually needs to be done; the duties and expected outcomes.
Managing the lifecycle of machine learning models
Develop and implement machine learning models to solve complex business problems.
Ensure that models are accurate, efficient, reliable, and scalable.
Deploy machine learning models to production environments, ensuring that models are integrated with software systems.
Monitor machine learning models in production, ensuring that models are performing as expected and that any errors or performance issues are identified and resolved quickly.
Maintain machine learning models over time. This includes updating models as new data becomes available, retraining models to improve performance, and retiring models that are no longer effective.
Develop and implement policies and procedures for ensuring the ethical and responsible use of machine learning models. This includes addressing issues related to bias, fairness, transparency, and accountability.
Development of data science assets
Identify cross-use-case data science needs that could be mutualised in a reusable piece of code.
Design, contribute to, and participate in the implementation of Python libraries answering transversal data science needs that can be reused across several projects.
Maintain existing data science assets (timeseries forecasting asset, model monitoring asset).
Create documentation and a knowledge base on data science assets to ensure a good understanding from users.
Participate in asset demos to showcase new features to users.
Be an active member of the Sodexo Data Science Community
Participate in the definition and maintenance of engineering standards and good practices around machine learning.
Participate in data science team meetings and regularly share knowledge, ask questions, and learn from others.
Mentor and guide junior machine learning engineers and data scientists.
Participate in relevant internal or external conferences and meetups.
Continuous Improvement
Stay up to date with the latest developments in the field: read research papers, attend conferences, and participate in trainings to expand knowledge and skills.
Identify and evaluate new technologies and tools that can improve the efficiency and effectiveness of machine learning projects.
Propose and implement optimizations for current machine learning workflows and systems. Proactively identify areas of improvement within the pipelines.
Make sure that created code is compliant with our set of engineering standards.
Collaboration with other data experts (Data Engineers, Platform Engineers, and Data Analysts)
Participate in pull request reviews coming from other team members. Ask for review and comments when submitting your own work.
Actively participate in the day-to-day life of the project (Agile rituals), the data science team (DS meetings) and the rest of the Global Engineering team.
4. Education & Experience – Indicate the skills, knowledge and experience that the job holder should require to conduct the role effectively.
Engineering Master’s degree or PhD in Data Science, Statistics, Mathematics, or related fields
5+ years of experience in a Data Scientist / Machine Learning Engineer role in large corporate organizations
Experience of working with ML models in a cloud ecosystem
Statistics & Machine Learning
Statistics: Strong understanding of statistical analysis and modelling techniques (e.g., regression analysis, hypothesis testing, time series analysis)
Classical ML: Very strong knowledge of classical ML algorithms for regression and classification, supervised and unsupervised machine learning, both theoretical and practical (e.g. using scikit-learn, xgboost)
ML niche: Expertise in at least one of the following ML specialisations: Timeseries Forecasting / Natural Language Processing / Computer Vision
Deep Learning: Good knowledge of Deep Learning fundamentals (CNN, RNN, transformer architecture, attention mechanism, etc.) and one of the deep learning frameworks (pytorch, tensorflow, keras)
Generative AI: Good understanding of Generative AI specificities; previous experience in working with Large Language Models is a plus (e.g. with openai, langchain)
MLOps
Model strategy: Expertise in designing, implementing, and testing machine learning strategies.
Model integration: Very strong skills in integrating a machine learning algorithm into a data science application in production.
Model performance: Deep understanding of model performance evaluation metrics and existing libraries (e.g., scikit-learn, evidently).
Model deployment: Experience in deploying and managing machine learning models in production, either using a specific cloud platform, model serving frameworks, or containerization.
Model monitoring: Experience with model performance monitoring tools is a plus (Grafana, Prometheus).
Software Engineering
Python: Very strong coding skills in Python, including modularity, OOP, and data & config manipulation frameworks (e.g., pandas, pydantic).
Python ecosystem: Strong knowledge of tooling in the Python ecosystem, such as dependency management (venv, poetry), documentation frameworks (e.g. sphinx, mkdocs, jupyter-book) and testing frameworks (unittest, pytest).
Software engineering practices: Experience in putting in place good software engineering practices such as design patterns, testing (unit, integration), clean code, code formatting, etc.
Debugging: Ability to troubleshoot and debug issues within machine learning pipelines.
Data Science Experimentation and Analytics
Data Visualization: Knowledge of data visualization tools such as plotly, seaborn, matplotlib, etc. to visualise, interpret and communicate the results of machine learning models to stakeholders. Basic knowledge of Power BI is a plus.
Data Cleaning: Experience with data cleaning and preprocessing techniques such as feature scaling, dimensionality reduction, and outlier detection (e.g. with pandas, scikit-learn).
Data Science Experiments: Understanding of experimental design and A/B testing methodologies.
Data Processing
Databricks/Spark: Basic knowledge of PySpark for big data processing.
Databases: Basic knowledge of SQL to query data in internal systems.
Data Formats: Familiarity with different data storage formats such as Parquet and Delta.
DevOps
Azure DevOps: Experience using a DevOps platform such as Azure DevOps (Boards, Repositories, Pipelines).
Git: Experience working with code versioning (git), branching strategies, and collaborative work with pull requests. Proficient with the most basic git commands.
CI/CD: Experience in implementing and maintaining pipelines for continuous integration (including execution of the testing strategy) and continuous deployment is preferable.
Cloud Platform
Azure Cloud: Previous experience with services like Azure Machine Learning and/or Azure Databricks on Azure is preferable.
Soft skills
Strong analytical and problem-solving skills, with attention to detail
Excellent verbal and written communication and pedagogical skills with technical and non-technical teams
Excellent teamwork and collaboration skills
Adaptability and reactivity to new technologies, tools, and techniques
Fluent in English
5. Competencies – Indicate which of the Sodexo core competencies and any professional competencies the role requires.
Communication & Collaboration
Adaptability & Agility
Analytical & Technical Skills
Innovation & Change
Rigorous Problem Solving & Troubleshooting
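One common way to support the model-lifecycle and monitoring duties described in this posting is experiment tracking; below is a hedged, minimal sketch using MLflow with scikit-learn. The experiment name, parameters and synthetic data are illustrative assumptions and do not represent Sodexo's actual tooling.

```python
# Minimal sketch: tracking a model's parameters, metric and artifact with MLflow.
# Assumes a local MLflow installation; experiment name, params and data are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-forecasting-asset")

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestRegressor(**params, random_state=0).fit(X_train, y_train)

    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5

    mlflow.log_params(params)                 # what was trained
    mlflow.log_metric("rmse", rmse)           # how well it performed
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```

Logging parameters, metrics and the fitted model together makes each production candidate reproducible and comparable across runs.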

Posted 2 months ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

We are seeking a passionate and dedicated machine learning intern to join our team. As an intern, you will work closely with experienced data scientists and engineers to develop, test, and deploy machine learning models. This is an excellent opportunity to gain hands-on experience with cutting-edge technologies and contribute to real-world projects.
Selected intern's day-to-day responsibilities include:
Collaborate with the team to collect, preprocess, and analyze datasets.
Develop and train machine learning models for specific use cases.
Evaluate model performance using appropriate metrics and improve accuracy and efficiency.
Assist in deploying ML models into production environments.
Document processes, findings, and insights throughout the project lifecycle.
Stay updated on the latest trends and advancements in machine learning and AI.
Qualifications
Currently pursuing or recently completed a degree in Computer Science, Data Science, Machine Learning, or a related field.
Strong knowledge of programming languages such as Python (preferred) or R.
Familiarity with machine learning frameworks like TensorFlow, PyTorch, or scikit-learn.
Basic understanding of supervised and unsupervised learning, NLP, or computer vision.
Hands-on experience with data preprocessing and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Problem-solving mindset and eagerness to learn new technologies.
Preferred Skills
Experience with SQL and working with databases.
Knowledge of cloud platforms like AWS, Google Cloud, or Azure.
Understanding of version control systems (e.g., Git).
About Company: LineupX is a B2B company that streamlines talent acquisition in just 4 easy steps. Our solution integrates human and machine intelligence to deliver an unparalleled experience for organizations and individuals alike. The industry we aim to revolutionize is currently valued at $400 billion globally. LineupX utilizes technology-driven recruitment solutions, employing machine learning and human expertise to redefine global talent acquisition practices. LineupX is an early-stage startup with promising traction, supported by angel investments, and generating revenue. We have been recognized by SINE IIT Bombay as a high-potential startup and endorsed by Canqbate50, a Canada-based startup program aimed at identifying and funding the top 50 startups from India to establish operations in Canada.
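For candidates new to the preprocessing and visualization libraries named in the qualifications above, here is a small self-contained sketch (not part of the posting) that cleans an invented dataset with pandas and NumPy and plots a distribution with Matplotlib.

```python
# Minimal sketch: basic cleaning and visualization with pandas, NumPy and Matplotlib.
# The small invented dataset stands in for a real project file (CSV, database extract, etc.).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "age": [22, 25, np.nan, 31, 45, 29, np.nan, 38],
    "signed_up": [1, 0, 1, 1, 0, 1, 0, 1],
})

# Simple preprocessing: fill missing ages with the median and add a derived feature.
df["age"] = df["age"].fillna(df["age"].median())
df["age_bucket"] = pd.cut(df["age"], bins=[0, 25, 35, 100], labels=["<=25", "26-35", "36+"])

# Quick visual check of the cleaned distribution.
df["age"].plot(kind="hist", bins=8, edgecolor="black")
plt.xlabel("Age")
plt.title("Age distribution after imputation")
plt.tight_layout()
plt.show()

print(df.groupby("age_bucket", observed=True)["signed_up"].mean())
```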

Posted 2 months ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
What – Primary Responsibilities
Business Knowledge:
Capable of understanding the requirements for the entire project (not just own features)
Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements
Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them
Design:
Can design and implement machine learning models and algorithms
Can articulate and evaluate pros/cons of different AI/ML approaches
Can generate cost estimates for model training and deployment
Coding/Testing:
Builds and optimizes machine learning pipelines
Knows and brings in external ML frameworks and libraries
Consistently avoids common pitfalls in model development and deployment
How
Quality:
Solves cross-functional problems using data-driven approaches
Identifies impacts/side effects of models outside of the immediate scope of work
Identifies cross-module issues related to data integration and model performance
Identifies problems predictively using data analysis
Productivity:
Capable of working on multiple AI/ML projects simultaneously and context switching between them
Process:
Enforces process standards for model development and deployment
Independence:
Acts independently to determine methods and procedures on new or special assignments
Prioritizes large tasks and projects effectively
Agility:
Release Planning: Works with the PO to do high-level release commitment and estimation; works with the PO on defining stories of appropriate size for model development
Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration; shows Agile leadership qualities and leads by example
With
Team Work:
Capable of working with development teams and identifying the right division of technical responsibility based on skill sets
Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs
Initiative:
Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution
Capable of thinking outside the box to view the system as it should be rather than only how it is
Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense
Takes initiative to learn how AI/ML technology is evolving outside the organization
Takes initiative to learn how the system can be improved for the customers
Should make problems open new doors for innovation
Communication:
Communicates complex AI/ML concepts internally with ease
Accountability:
Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play
Leadership:
Disagree without being disagreeable
Use conflict as a way to drill deeper and arrive at better decisions
Frequent mentorship
Builds ad-hoc cross-department teams for specific projects or problems
Can achieve broad 'buy-in' across project teams and across departments
Takes calculated risks
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
5+ years of experience working on multiple layers of technology
Experience deploying and maintaining ML models in production
Experience in Agile teams
Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI)
Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
Familiarity with traditional software monitoring, scaling, and quality management (QMS)
Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrated hands-on knowledge of open-source adoption and use cases
Good understanding of data/information security
Proficient in data structures, ML algorithms, and the ML lifecycle
Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Familiarity with model versioning tools (MLflow, etc.)
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.
Demonstrates good knowledge of engineering practices
Demonstrates excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
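The qualifications above mention data-oriented workflow orchestration with tools such as Airflow; the following is a hedged, minimal sketch of a daily retraining DAG. The DAG id, schedule, task names and placeholder functions are invented for illustration and do not describe Optum's pipelines.

```python
# Minimal sketch: a daily model-retraining workflow as an Airflow DAG (Airflow 2.4+ style).
# Task names, schedule and the placeholder functions are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_features():
    print("pull and preprocess training data")  # placeholder for real feature extraction


def train_model():
    print("fit model and log metrics")  # placeholder for real training / experiment logging


def evaluate_and_register():
    print("compare against champion, register if better")  # placeholder for promotion logic


with DAG(
    dag_id="daily_model_retraining",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    register = PythonOperator(task_id="evaluate_and_register", python_callable=evaluate_and_register)

    # Linear dependency: extract, then train, then evaluate/register.
    extract >> train >> register
```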

Posted 2 months ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
What – Primary Responsibilities
Business Knowledge:
Capable of understanding the requirements for the entire project (not just own features)
Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements
Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them
Design:
Can design and implement machine learning models and algorithms
Can articulate and evaluate pros/cons of different AI/ML approaches
Can generate cost estimates for model training and deployment
Coding/Testing:
Builds and optimizes machine learning pipelines
Knows and brings in external ML frameworks and libraries
Consistently avoids common pitfalls in model development and deployment
How
Quality:
Solves cross-functional problems using data-driven approaches
Identifies impacts/side effects of models outside of the immediate scope of work
Identifies cross-module issues related to data integration and model performance
Identifies problems predictively using data analysis
Productivity:
Capable of working on multiple AI/ML projects simultaneously and context switching between them
Process:
Enforces process standards for model development and deployment
Independence:
Acts independently to determine methods and procedures on new or special assignments
Prioritizes large tasks and projects effectively
Agility:
Release Planning: Works with the PO to do high-level release commitment and estimation; works with the PO on defining stories of appropriate size for model development
Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration; shows Agile leadership qualities and leads by example
With
Team Work:
Capable of working with development teams and identifying the right division of technical responsibility based on skill sets
Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs
Initiative:
Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution
Capable of thinking outside the box to view the system as it should be rather than only how it is
Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense
Takes initiative to learn how AI/ML technology is evolving outside the organization
Takes initiative to learn how the system can be improved for the customers
Should make problems open new doors for innovation
Communication:
Communicates complex AI/ML concepts internally with ease
Accountability:
Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play
Leadership:
Disagree without being disagreeable
Use conflict as a way to drill deeper and arrive at better decisions
Frequent mentorship
Builds ad-hoc cross-department teams for specific projects or problems
Can achieve broad 'buy-in' across project teams and across departments
Takes calculated risks
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications
B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
4+ years of experience working on multiple layers of technology
Experience deploying and maintaining ML models in production
Experience in Agile teams
Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI)
Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
Familiarity with traditional software monitoring, scaling, and quality management (QMS)
Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrated hands-on knowledge of open-source adoption and use cases
Good understanding of data/information security
Proficient in data structures, ML algorithms, and the ML lifecycle
Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Familiarity with model versioning tools (MLflow, etc.)
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.
Demonstrates good knowledge of engineering practices
Demonstrates excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies