
817 Matplotlib Jobs - Page 9

JobPe aggregates results for easy application access; you apply directly on the original job portal.

8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

RCE-Risk Data Engineer-Leads

Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support, using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask
- Apply concurrent processing strategies and performance optimization to complex architectures
- Write clean, maintainable and well-documented code
- Develop comprehensive test suites to ensure code quality and reliability
- Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration
- Integrate backend services with databases and APIs
- Collaborate asynchronously with cross-functional team members
- Participate in occasional team meetings, code reviews and planning sessions

Core/Must-Have Skills:
- Minimum 6+ years of professional Python development experience
- Strong understanding of computer science fundamentals (data structures, algorithms)
- 6+ years of experience in Flask and RESTful API development
- Knowledge of container technologies (Docker, Kubernetes)
- Experience implementing interfaces in Python
- Ability to use Python generators for efficient memory management
- Good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting
- Ability to implement multi-threading and parallelism in Python, including the Global Interpreter Lock (GIL) and its implications for multithreading and multiprocessing
- Good understanding of SQLAlchemy for interacting with databases
- Knowledge of implementing ETL transformations using Python libraries, including techniques such as list comprehensions
- Collaboration with cross-functional teams to ensure successful implementation of solutions

Good to have:
- Exposure to data science libraries or data-centric development
- Understanding of authentication and authorization (e.g. JWT, OAuth)
- Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required
- Experience with cloud services (AWS, GCP or Azure)

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
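For two of the skills this listing calls out, Python generators for memory-efficient processing and Pandas/NumPy/Matplotlib for reporting, here is a minimal, hedged sketch. The file name and column names (transactions.csv, amount, risk_flag) are illustrative assumptions, not part of the posting.

```python
import csv

import matplotlib.pyplot as plt
import pandas as pd


def stream_high_value(path, threshold=10_000):
    """Generator: yields matching rows one at a time, so the whole file never sits in memory."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if float(row["amount"]) >= threshold:
                yield row


# Materialize only the filtered subset for reporting.
high_value = pd.DataFrame(list(stream_high_value("transactions.csv")))
high_value["amount"] = high_value["amount"].astype(float)

(high_value.groupby("risk_flag")["amount"]
           .sum()
           .plot(kind="bar", title="High-value amounts by risk flag"))
plt.tight_layout()
plt.savefig("risk_report.png")
```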

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities:
- Collect, clean, and analyze large datasets from various sources
- Perform exploratory data analysis (EDA) and generate actionable insights
- Build interactive dashboards and reports using Excel, Power BI, or Tableau
- Write and optimize SQL queries for data extraction and manipulation
- Collaborate with cross-functional teams to understand data needs
- Document analytical methodologies, insights, and recommendations

Qualifications:
- Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
- Proficiency in Excel and SQL
- Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
- Understanding of basic statistics and analytical methods
- Strong attention to detail and problem-solving ability
- Ability to work independently and communicate effectively in a remote setting

Preferred Skills (Nice to Have):
- Experience with BI tools like Power BI, Tableau, or Google Data Studio
- Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
- Knowledge of data storytelling and KPI measurement
- Previous academic or personal projects in analytics

What We Offer:
- Monthly stipend of ₹25,000
- Fully remote internship
- Mentorship from experienced data analysts and domain experts
- Hands-on experience with real business data and live projects
- Certificate of Completion
- Opportunity for a full-time role based on performance
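As a rough illustration of the day-to-day EDA work described above, the sketch below loads a dataset, inspects it, and plots a simple trend with Pandas and Matplotlib; the file name and columns (sales.csv, order_date, revenue, region) are assumptions for the example.

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])

# Basic cleaning and inspection
df = df.drop_duplicates().dropna(subset=["revenue"])
print(df.describe(include="all"))

# Monthly revenue trend per region
monthly = (df.set_index("order_date")
             .groupby("region")["revenue"]
             .resample("M").sum()
             .unstack(level=0))
monthly.plot(title="Monthly revenue by region")
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```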

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

16.0 years

1 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

WHAT
- Business Knowledge: Capable of understanding the requirements for the entire project (not just own features); works closely with PMG during the design phase to drill down into detailed nuances of the requirements; has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them.
- Design: Can design and implement machine learning models and algorithms; can articulate and evaluate the pros and cons of different AI/ML approaches; can generate cost estimates for model training and deployment.
- Coding/Testing: Builds and optimizes machine learning pipelines; knows and brings in external ML frameworks and libraries; consistently avoids common pitfalls in model development and deployment.

HOW
- Quality: Solves cross-functional problems using data-driven approaches; identifies impacts and side effects of models outside the immediate scope of work; identifies cross-module issues related to data integration and model performance; identifies problems predictively using data analysis.
- Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them.
- Process: Enforces process standards for model development and deployment.
- Independence: Acts independently to determine methods and procedures on new or special assignments; prioritizes large tasks and projects effectively.
- Agility: Release Planning - works with the PO on high-level release commitment and estimation, and on defining stories of appropriate size for model development. Agile Maturity - able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration; shows Agile leadership qualities and leads by example.

WITH
- Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets; capable of working with external teams (e.g., Support, PO) that have significantly different technical skill sets and managing the discussions based on their needs.
- Initiative: Capable of creating innovative AI/ML solutions, including changes to requirements where they lead to a better solution; thinks outside the box to view the system as it should be rather than only how it is; proactively generates a continual stream of ideas and pushes to review and advance ideas that make sense; takes the initiative to learn how AI/ML technology is evolving outside the organization and how the system can be improved for customers; treats problems as openings for innovation.
- Communication: Communicates complex AI/ML concepts internally with ease.
- Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play.
- Leadership: Disagrees without being disagreeable; uses conflict as a way to drill deeper and arrive at better decisions; mentors frequently; builds ad-hoc cross-department teams for specific projects or problems; can achieve broad 'buy-in' across project teams and departments; takes calculated risks.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
- 5+ years of experience working on multiple layers of technology
- Experience deploying and maintaining ML models in production
- Experience in Agile teams
- Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
- Working experience with, or good knowledge of, cloud platforms (e.g., Azure, AWS, OCI)
- Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
- Familiarity with traditional software monitoring, scaling, and quality management systems (QMS)
- Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
- Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
- Hands-on knowledge of open-source adoption and use cases
- Good understanding of data/information security
- Proficient in data structures, ML algorithms, and the ML lifecycle

Product/Project/Program Related Tech Stack:
- Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
- Programming Languages: Python, R, Java
- Data Processing: Pandas, NumPy, Spark
- Visualization: Matplotlib, Seaborn, Plotly
- Model versioning tools (MLflow, etc.)
- Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
- GenAI: OpenAI, LangChain, RAG, etc.

Demonstrates good knowledge of engineering practices, excellent problem-solving skills, and proven verbal, written, and interpersonal communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
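A minimal sketch of the kind of ML pipeline and evaluation work this role describes, using scikit-learn; the synthetic data stands in for real features, and the metric and file name are assumptions for illustration.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for real feature data
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Preprocessing and model bundled as one versionable artifact
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
print(f"Hold-out ROC AUC: {auc:.3f}")

# Persist the whole pipeline so versioning/deployment tools (e.g. MLflow, DVC) can track it
joblib.dump(pipeline, "model.joblib")
```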

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

Position: Data Analyst Intern (Full-Time)
Company: Lead India
Location: Remote
Stipend: ₹25,000/month
Duration: 1–3 months (Full-Time Internship)

About Lead India: Lead India is a forward-thinking technology company that helps businesses make smarter decisions through data. We provide meaningful internship opportunities for emerging professionals to gain real-world experience in data analysis, reporting, and decision-making.

Role Overview: We are seeking a Data Analyst Intern to support our data and product teams in gathering, analyzing, and visualizing business data. This internship is ideal for individuals who enjoy working with numbers, identifying trends, and turning data into actionable insights.

Key Responsibilities:
- Analyze large datasets to uncover patterns, trends, and insights
- Create dashboards and reports using tools like Excel, Power BI, or Tableau
- Write and optimize SQL queries for data extraction and analysis
- Assist in data cleaning, preprocessing, and validation
- Collaborate with cross-functional teams to support data-driven decisions
- Document findings and present insights to stakeholders

Skills We're Looking For:
- Strong analytical and problem-solving skills
- Basic knowledge of SQL and data visualization tools (Power BI, Tableau, or Excel)
- Familiarity with Python for data analysis (pandas, matplotlib) is a plus
- Good communication and presentation skills
- Detail-oriented with a willingness to learn and grow

What You’ll Gain:
- ₹25,000/month stipend
- Real-world experience in data analysis and reporting
- Mentorship from experienced analysts and developers
- Remote-first, collaborative work environment
- Potential for a Pre-Placement Offer (PPO) based on performance
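To illustrate the SQL-plus-Python workflow mentioned above, here is a small sketch that pulls data from a database into pandas and builds a quick chart; the SQLite file, table, and column names are illustrative assumptions.

```python
import sqlite3

import matplotlib.pyplot as plt
import pandas as pd

# Extract: a simple aggregate query against a local SQLite database
conn = sqlite3.connect("orders.db")
query = """
    SELECT region, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders
    GROUP BY region
    ORDER BY total_amount DESC
"""
summary = pd.read_sql_query(query, conn)
conn.close()

# Visualize the result for a dashboard or report
summary.plot(x="region", y="total_amount", kind="bar", legend=False,
             title="Total order amount by region")
plt.ylabel("Amount")
plt.tight_layout()
plt.show()
```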

Posted 1 week ago

Apply

2.0 years

0 Lacs

Surat, Gujarat, India

On-site

Source: LinkedIn

We’re hiring a Python Developer with a strong understanding of Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying scalable AI/ML solutions and Python-based backend applications. Note: only candidates based in Surat, Gujarat should apply for this job.

Role Expectations:
- Develop and maintain robust Python code for backend and AI/ML applications.
- Design and implement machine learning models for prediction, classification, recommendation, etc.
- Work on data preprocessing, feature engineering, model training, evaluation, and optimization.
- Collaborate with the frontend team, data scientists, and DevOps engineers to deploy ML models to production.
- Integrate AI/ML models into web or mobile applications.
- Write clean, efficient, and well-documented code.
- Stay updated with the latest trends and advancements in Python and ML.

Soft Skills: Problem-Solving, Analytical Thinking, Collaboration & Teamwork, Time Management, Attention to Detail

Required Skills:
- Proficiency in Python and Python-based libraries (NumPy, Pandas, Scikit-learn, etc.).
- Hands-on experience with AI/ML model development and deployment.
- Familiarity with TensorFlow, Keras, or PyTorch.
- Strong knowledge of data structures, algorithms, and object-oriented programming.
- Experience with REST APIs and the Flask/Django frameworks.
- Basic knowledge of data visualization tools like Matplotlib or Seaborn.
- Understanding of version control tools (Git/GitHub).

Good to Have:
- Experience with cloud platforms (AWS, GCP, or Azure).
- Knowledge of NLP, computer vision, or deep learning models.
- Experience working with large datasets and databases (SQL, MongoDB).
- Familiarity with containerization tools like Docker.

Our Story: We’re chasing a world where tech doesn’t frustrate—it flows like a river carving its own path. Every line of code we hammer out is a brick in a future where tools don’t just function—they vanish into the background, so intuitive you barely notice them working their magic. We craft software and apps that tackle real problems head-on, not just pile up shiny features for the sake of a spec sheet. It starts with listening—really listening—to the headaches, the what-ifs, and the crazy ambitions others might shrug off. Then we build smart: solutions that cut through the clutter with surgical precision, designed to fit like a glove and run like a rocket.

Unlock the Advantage:
- 5-day work week
- 12 paid leaves + public holidays
- Training and development: certifications
- Employee engagement activities: awards, community gatherings
- Good infrastructure and onsite opportunity
- Flexible working culture

Experience: 2-3 Years
Job Type: Full Time (On-site)
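A hedged sketch of the "integrate ML models into applications" part of this role: a Flask endpoint that loads a pre-trained scikit-learn model and returns predictions over a REST API. The model file name and expected feature layout are assumptions for the example.

```python
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumes a model/pipeline was trained and saved elsewhere, e.g. joblib.dump(model, "model.joblib")
model = joblib.load("model.joblib")


@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"features": [0.1, 2.3, ...]}
    payload = request.get_json(force=True)
    features = np.array(payload["features"], dtype=float).reshape(1, -1)
    prediction = model.predict(features)[0]
    return jsonify({"prediction": int(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```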

Posted 1 week ago

Apply

0.0 - 1.0 years

0 Lacs

Kazhakoottam, Thiruvananthapuram, Kerala

On-site

Source: Indeed

Urgent Hiring: Offline Trainers (Full-Time) – Data Science Academy, Kerala
Location: Thiruvananthapuram & Kochi, Kerala (Offline / In-Person Only)
Job Type: Full-Time
Join Date: Immediate

About Us: Data Science Academy is Kerala’s first dedicated AI and Data Science training institute, committed to shaping the next generation of tech professionals. We are rapidly expanding and seeking passionate educators to join our mission.

We’re Hiring Trainers With Expertise in ANY of the Following Areas:
- Microsoft Excel (Advanced)
- Databases
- Python Programming
- Power BI
- Python Libraries (NumPy, Pandas, Matplotlib, etc.)
- Machine Learning
- Deep Learning
- Generative AI
- Cloud Computing (AWS / Azure / GCP)

Who Can Apply:
- Industry professionals with teaching flair
- Academic trainers with practical exposure
- Freelancers looking for consistent offline assignments
- Freshers with strong domain knowledge and communication skills

Requirements:
- Strong subject expertise in at least one of the areas listed
- Excellent communication and presentation skills
- Must be willing to train students in an offline classroom setting in Kerala (primarily at our Thiruvananthapuram campus)
- Availability for immediate joining is preferred

What We Offer:
- Chance to be part of Kerala’s pioneering AI education brand
- Opportunity to mentor future data scientists and AI engineers
- Career growth in training, curriculum development, and industry exposure

Job Type: Full-time
Education: Bachelor's (Preferred)
Experience: total work: 1 year (Preferred); Python: 1 year (Preferred); Training & development: 1 year (Preferred)
Language: English (Preferred)
Work Location: In person

Posted 1 week ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

This posting repeats the EY RCE-Risk Data Engineer-Leads description shown in the Coimbatore listing above; only the location differs.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

What You'll Be Doing:

Dashboard Development:
- Design, develop, and maintain interactive and visually compelling dashboards using Power BI.
- Implement DAX queries and data models to support business intelligence needs.
- Optimize performance and usability of dashboards for various stakeholders.

Python & Streamlit Applications:
- Build and deploy lightweight data applications using Streamlit for internal and external users.
- Integrate Python libraries (e.g., Pandas, NumPy, Plotly, Matplotlib) for data processing and visualization.

Data Integration & Retrieval:
- Connect to and retrieve data from RESTful APIs, cloud storage (e.g., Azure Data Lake, Cognite Data Fusion), and SQL/NoSQL databases.
- Automate data ingestion pipelines and ensure data quality and consistency.

Collaboration & Reporting:
- Work closely with business analysts, data engineers, and stakeholders to gather requirements and deliver insights.
- Present findings and recommendations through reports, dashboards, and presentations.

Requirements:
- Bachelor’s or master’s degree in Computer Science, Data Science, Information Systems, or a related field.
- 3+ years of experience in data analytics or business intelligence roles.
- Proficiency in Power BI, including DAX, Power Query, and data modeling.
- Strong Python programming skills, especially with Streamlit, Pandas, and API integration.
- Experience with REST APIs, JSON/XML parsing, and cloud data platforms (Azure, AWS, or GCP).
- Familiarity with version control systems like Git.
- Excellent problem-solving, communication, and analytical skills.

Preferred Qualifications:
- Experience with CI/CD pipelines for data applications.
- Knowledge of DevOps practices and containerization (Docker).
- Exposure to machine learning or statistical modeling is a plus.
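As a rough sketch of the Streamlit-plus-API work this posting describes: fetch JSON from a REST endpoint, load it into pandas, and render a table and chart. The endpoint URL and field names are placeholders, not a real service, and the sketch assumes a recent Streamlit version that provides st.cache_data.

```python
import pandas as pd
import requests
import streamlit as st

st.title("Operations dashboard (demo)")


@st.cache_data(ttl=600)
def load_data(url: str) -> pd.DataFrame:
    # Retrieve JSON records from a REST API and normalise them into a DataFrame
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return pd.json_normalize(response.json())


df = load_data("https://example.com/api/metrics")  # placeholder endpoint
st.dataframe(df)

# Plot a simple time series if the expected columns are present
if {"date", "value"}.issubset(df.columns):
    st.line_chart(df.set_index("date")["value"])
```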

Posted 1 week ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena’s vision is to help students become the best version of themselves. Athena’s transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx

Role Overview: We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You’ll help scholars explore the frontiers of AI—from machine learning models to generative AI systems—while coaching them in best practices and applied engineering.

Key Responsibilities:
- Guide scholars through the full AI/ML development cycle—from problem definition, data exploration, and model selection to evaluation and deployment.
- Teach and assist in building: supervised and unsupervised machine learning models; deep learning networks (CNNs, RNNs, Transformers); NLP tasks such as classification, summarization, and Q&A systems.
- Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude; teach the principles of few-shot, zero-shot, and chain-of-thought prompting; experiment with fine-tuning and embeddings in LLM applications.
- Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows.
- Conduct internal training and code reviews, ensuring technical rigor in projects.
- Stay updated with the latest research, frameworks, and tools in the AI ecosystem.

Technical Requirements:
- Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy.
- Experience with deep learning frameworks: TensorFlow, PyTorch, Keras.
- Strong command of machine learning theory, including: bias-variance tradeoff, regularization, and model tuning; cross-validation, hyperparameter optimization, and ensemble techniques.
- Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).

Advanced AI & NLP:
- Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA).
- Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face.
- Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG).
- Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).

Prompt Engineering & GenAI:
- Proficiency in crafting effective prompts using instruction tuning, role-playing and system prompts, and prompt chaining tools like LangChain or LlamaIndex.
- Understanding of AI safety, bias mitigation, and interpretability.

Required Qualifications:
- Bachelor’s degree from a Tier-1 Engineering College in Computer Science, Engineering, or a related field.
- 2-5 years of relevant experience in ML/AI roles.
- Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
- Passion for education, mentoring, and working with high school scholars.
- Excellent communication skills, with the ability to convey complex concepts to a diverse audience.

Preferred Qualifications:
- Prior experience in student mentorship, teaching, or edtech.
- Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
- Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
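To make the prompt-engineering piece concrete, here is a small, library-free sketch contrasting zero-shot and few-shot prompt construction. The review texts are made up, and send_to_llm is a hypothetical wrapper around whichever chat-completion client the project uses; the example only builds the prompt strings.

```python
# Zero-shot: the task description alone
zero_shot = (
    "Classify the sentiment of this review as positive or negative:\n"
    "'Battery life is terrible.'"
)

# Few-shot: prepend worked examples so the model can infer the expected format
few_shot_examples = [
    ("The camera is stunning.", "positive"),
    ("It stopped working after a week.", "negative"),
]
few_shot = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in few_shot_examples:
    few_shot += f"Review: {text}\nSentiment: {label}\n\n"
few_shot += "Review: Battery life is terrible.\nSentiment:"

# response = send_to_llm(few_shot)  # hypothetical call to an LLM API of your choice
print(zero_shot)
print(few_shot)
```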

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

Remote

Source: LinkedIn

About BeGig: BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity: Join our network as a Data Scientist and help fast-growing startups transform data into actionable insights, predictive models, and intelligent decision-making tools. You’ll work on real-world data challenges across domains like marketing, finance, healthtech, and AI—with full flexibility to work remotely and choose the engagements that best fit your goals.

Role Overview: As a Data Scientist, you will:
- Extract insights from data: analyze complex datasets to uncover trends, patterns, and opportunities.
- Build predictive models: develop, validate, and deploy machine learning models that solve core business problems.
- Communicate clearly: work with cross-functional teams to present findings and deliver data-driven recommendations.

What You’ll Do:
Analytics & Modeling:
- Explore, clean, and analyze structured and unstructured data using statistical and ML techniques.
- Build predictive and classification models using tools like scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Conduct A/B testing, customer segmentation, forecasting, and anomaly detection.
Data Storytelling & Collaboration:
- Present complex findings in a clear, actionable way using data visualizations (e.g., Tableau, Power BI, Matplotlib).
- Work with product, marketing, and engineering teams to integrate models into applications or workflows.

Technical Requirements & Skills:
- Experience: 3+ years in data science, analytics, or a related field.
- Programming: proficient in Python (preferred), R, and SQL.
- ML Frameworks: experience with scikit-learn, TensorFlow, PyTorch, or similar tools.
- Data Handling: strong understanding of data preprocessing, feature engineering, and model evaluation.
- Visualization: familiar with visualization tools like Matplotlib, Seaborn, Plotly, Tableau, or Power BI.
- Bonus: experience working with large datasets, cloud platforms (AWS/GCP), or MLOps practices.

What We’re Looking For:
- A data-driven thinker who can go beyond numbers to tell meaningful stories.
- A freelancer who enjoys solving real business problems using machine learning and advanced analytics.
- A strong communicator with the ability to simplify complex models for stakeholders.

Why Join Us?
- Immediate Impact: work on projects that directly influence product, growth, and strategy.
- Remote & Flexible: choose your working hours and project commitments.
- Future Opportunities: BeGig will continue matching you with data science roles aligned to your strengths.
- Dynamic Network: collaborate with startups building data-first, insight-driven products.

Ready to turn data into decisions? Apply now to become a key Data Scientist for our client and a valued member of the BeGig network!
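One concrete example of the A/B-testing work mentioned above: a two-proportion z-test on conversion counts, computed by hand with SciPy. The visitor and conversion numbers are made up for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical A/B test results: conversions out of visitors per variant
conversions = np.array([410, 468])
visitors = np.array([10_000, 10_000])

rates = conversions / visitors
pooled = conversions.sum() / visitors.sum()
se = np.sqrt(pooled * (1 - pooled) * (1 / visitors[0] + 1 / visitors[1]))
z = (rates[1] - rates[0]) / se
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"Conversion rates: A={rates[0]:.3%}, B={rates[1]:.3%}")
print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```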

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

As a Data Scientist, you will work with a cross-functional team to identify business challenges and provide data-driven insights. You'll be responsible for data exploration, feature engineering, model development, and production deployment of machine learning solutions. We are seeking someone passionate about working with diverse datasets and applying machine learning techniques to deliver meaningful results.

Key Responsibilities:
- Collaborate with internal teams to understand business requirements and translate them into data-driven solutions.
- Perform exploratory data analysis, data cleaning, and transformation.
- Develop and deploy machine learning models and algorithms for predictive and prescriptive analysis.
- Conduct A/B testing and evaluate the impact of model implementations.
- Generate data visualizations and reports to communicate insights to stakeholders.
- Stay updated with the latest developments in data science, machine learning, and industry trends.

Qualifications:
- Bachelor's degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
- 4+ years of experience working as a Data Scientist or in a similar role.
- Strong proficiency in Python or R, and experience with machine learning libraries such as scikit-learn, TensorFlow, or PyTorch.
- Knowledge of data processing frameworks, e.g., Pandas, NumPy, Spark.
- Experience with data visualization tools like Tableau, Power BI, or Matplotlib.
- Ability to query databases using SQL and familiarity with relational databases.
- Familiarity with cloud platforms such as AWS, Azure, or GCP is a plus.
- Strong analytical, problem-solving, and communication skills.
- Fluent in Spanish and/or Portuguese, with good English proficiency.

Must-Have Skills: Excellent English communication skills.
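A brief sketch of the model-evaluation side of this role: cross-validated hyperparameter tuning with scikit-learn on synthetic stand-in data; the parameter grid and scoring metric are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a real classification dataset
X, y = make_classification(n_samples=2_000, n_features=15, random_state=0)

param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,            # 5-fold cross-validation
    scoring="f1",
    n_jobs=-1,
)
search.fit(X, y)

print("Best params:", search.best_params_)
print(f"Best cross-validated F1: {search.best_score_:.3f}")
```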

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Join us as a "Chief Control Office" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unapparelled customer experiences. You may be assessed on the key critical skills relevant for success in role, such as experience with MS office, SQL, Alteryx, Power Tools, Python as well as job-specific skillsets. To be successful as an "Analyst", you should have experience with: Basic/ Essential Qualifications Graduate in any discipline Experience in Controls, Governance, Reporting and Risk Management preferably in a financial services organisation Proficient in MS Office – PPT, Excel, Work & Visio Proficient in SQL, Alteryx and Python Generating Data Insights and Dashboards from large and diverse data sets Excellent experience on Tableau, Alteryx, MS Office (i.e. Advance Excel, PowerPoint) Automation skills using VBA, PowerQuery, PowerApps, etc. Experience in using ETL tools. Good understanding of Risk and Control Excellent communication skills (Verbal and Written) Good understanding of governance and control frameworks and processes Highly motivated, business-focussed and forward thinking. Experience in senior stakeholder management. Ability to manage relationships across multiple disciplines Desirable Skillsets/ Good To Have Experience in data crunching/ analysis including automation Experience in handling RDBMS (i.e. SQL/Oracle) Experience in Python, Data Science and Data Analytics Tools and Techniques e.g. MatPlotLib, Data Wrangling, Low Code/No Code environment development preferably in large bank on actual use cases Understanding of Data Management Principles and data governance Design and managing SharePoints Financial Services experience This role will be based out of Pune. Purpose of the role To design, develop and consult on the bank’s internal controls framework and supporting policies and standards across the organisation, ensuring it is robust, effective, and aligned to the bank’s overall strategy and risk appetite. Accountabilities Identification and analysis of emerging and evolving risks across functions to understand their potential impact, and likelihood. Communication of the purpose, structure, and importance of the control framework to all relevant stakeholders, including senior management and audit. Support to the development and implementation of the bank's internal controls framework and principles tailored to the banks specific needs and risk profile including design, monitoring, and reporting initiatives . Monitoring and maintenance of the control's frameworks, to ensure compliance and adjust and update as internal and external requirements change. Embedment of the control framework across the bank through cross collaboration, training sessions and awareness campaigns which fosters a culture of knowledge sharing and improvement in risk management and the importance of internal control effectiveness. Analyst Expectations To meet the needs of stakeholders/ customers through specialist advice and support Perform prescribed activities in a timely manner and to a high standard which will impact both the role itself and surrounding roles. Likely to have responsibility for specific processes within a team They may lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. They supervise a team, allocate work requirements and coordinate team resources. 
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. Alternatively, an individual contributor manages their own workload, takes responsibility for the implementation of systems and processes within their own work area and participates in projects broader than the direct team. Execute work requirements as identified in processes and procedures, collaborating with and impacting on the work of closely related teams. Check the work of colleagues within the team to meet internal and stakeholder requirements. Provide specialist advice and support pertaining to your own work area. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams. Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise. Make judgements based on practice and previous experience. Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures. Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements. Build relationships with stakeholders/customers to identify and address their needs. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
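For context on the kind of Python-based data-insight work this listing describes, here is a minimal, hypothetical sketch: it aggregates a made-up control-testing dataset with pandas to produce a simple breach summary by business unit. The column names, values, and 5% threshold are illustrative assumptions, not part of the posting.

```python
# Hypothetical sketch: summarise control-testing results by business unit with pandas.
# The DataFrame contents, column names, and the 5% threshold are invented for illustration.
import pandas as pd

controls = pd.DataFrame({
    "business_unit": ["Cards", "Cards", "Payments", "Payments", "Lending", "Lending"],
    "control_id": ["C1", "C2", "C3", "C4", "C5", "C6"],
    "test_result": ["PASS", "FAIL", "PASS", "PASS", "FAIL", "FAIL"],
})

summary = (
    controls
    .assign(failed=controls["test_result"].eq("FAIL"))
    .groupby("business_unit")
    .agg(total_tests=("control_id", "count"), failures=("failed", "sum"))
)
summary["failure_rate"] = (summary["failures"] / summary["total_tests"]).round(3)

# Flag units whose failure rate breaches the illustrative 5% threshold.
print(summary[summary["failure_rate"] > 0.05].sort_values("failure_rate", ascending=False))
```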

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Join us as an "BA4 - Control Data Analytics and Reporting" at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unapparelled customer experiences. To be successful as an "BA4 - Control Data Analytics and Reporting", you should have experience with: Basic/ Essential Qualifications Graduate in any discipline. Experience in Controls, Governance, Reporting and Risk Management preferably in a financial services organization. Proficient in MS Office – PPT, Excel, Work & Visio. Proficient in SQL, Tableau and Python. Generating Data Insights and Dashboards from large and diverse data sets. Excellent experience on Tableau, Alteryx, MS Office (i.e. Advance Excel, PowerPoint). Automation skills using VBA, Power Query, PowerApps, etc. Experience in using ETL tools. Good understanding of Risk and Control. Excellent communication skills (Verbal and Written). Good understanding of governance and control frameworks and processes. Highly motivated, business-focused and forward thinking. Experience in senior stakeholder management. Ability to manage relationships across multiple disciplines. Self-driven and proactively participates in team initiatives. Demonstrated initiative in identifying and resolving problems. Desirable Skillsets/ Good To Have Experience in data crunching/ analysis including automation. Experience in handling RDBMS (i.e. SQL/Oracle). Experience in Python, Data Science and Data Analytics Tools and Techniques e.g. MatPlotLib, Data Wrangling, Low Code/No Code environment development preferably in large bank on actual use cases. Understanding of Data Management Principles and data governance. Design and managing SharePoint. Financial Services experience. Location: Noida. You may be assessed on the key critical skills relevant for success in role, such as experience with MS office, MS Power Platforms, Python, Tableau as well as job-specific skillsets. Additional experience in Alteryx would be an added advantage. Purpose of the role To design, develop and consult on the bank’s internal controls framework and supporting policies and standards across the organisation, ensuring it is robust, effective, and aligned to the bank’s overall strategy and risk appetite. Accountabilities Identification and analysis of emerging and evolving risks across functions to understand their potential impact, and likelihood. Communication of the purpose, structure, and importance of the control framework to all relevant stakeholders, including senior management and audit. Support to the development and implementation of the bank's internal controls framework and principles tailored to the banks specific needs and risk profile including design, monitoring, and reporting initiatives . Monitoring and maintenance of the control's frameworks, to ensure compliance and adjust and update as internal and external requirements change. Embedment of the control framework across the bank through cross collaboration, training sessions and awareness campaigns which fosters a culture of knowledge sharing and improvement in risk management and the importance of internal control effectiveness. Analyst Expectations To perform prescribed activities in a timely manner and to a high standard consistently driving continuous improvement. 
Requires in-depth technical knowledge and experience in their assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. Alternatively, an individual contributor develops technical expertise in their work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
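As an illustration of the SQL-plus-Python reporting skills this role asks for, the sketch below runs a simple aggregation against an in-memory SQLite database and loads the result into pandas for onward dashboarding. The table and column names are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: query an in-memory SQLite table and hand the result to pandas.
# Table/column names ("issues", "status", "severity") are made up for illustration.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (id INTEGER PRIMARY KEY, status TEXT, severity TEXT);
    INSERT INTO issues (status, severity) VALUES
        ('Open', 'High'), ('Open', 'Low'), ('Closed', 'High'), ('Closed', 'Medium');
""")

query = """
    SELECT status, severity, COUNT(*) AS issue_count
    FROM issues
    GROUP BY status, severity
    ORDER BY issue_count DESC
"""
report = pd.read_sql_query(query, conn)  # ready for a Tableau extract or Excel export
print(report)
conn.close()
```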

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Function: Data Science. Job: Machine Learning Engineer. Position: Senior. Immediate manager (N+1 job title and name): AI Manager. Additional reporting line to: Global VP Engineering. Position location: Mumbai, Pune, Bangalore, Hyderabad, Noida.

1. Purpose of the Job – A simple statement to identify clearly the objective of the job.
The Senior Machine Learning Engineer is responsible for designing, implementing, and deploying scalable and efficient machine learning algorithms to solve complex business problems. The Machine Learning Engineer is also responsible for the lifecycle of models once they are deployed in production environments, monitoring their performance and evolution. The position is highly technical and requires an ability to collaborate with multiple technical and non-technical profiles (data scientists, data engineers, data analysts, product owners, business experts), and to actively take part in a large data science community.

2. Organization chart – Indicate schematically the position of the job within the organization. It is sufficient to indicate one hierarchical level above (including possible functional boss) and, if applicable, one below the position. In the horizontal direction, the other jobs reporting to the same superior should be indicated.
A Machine Learning Engineer reports to the AI Manager, who reports to the Global VP Engineering.

3. Key Responsibilities and Expected Deliverables – This details what actually needs to be done; the duties and expected outcomes.

Managing the lifecycle of machine learning models: Develop and implement machine learning models to solve complex business problems. Ensure that models are accurate, efficient, reliable, and scalable. Deploy machine learning models to production environments, ensuring that models are integrated with software systems. Monitor machine learning models in production, ensuring that models are performing as expected and that any errors or performance issues are identified and resolved quickly. Maintain machine learning models over time; this includes updating models as new data becomes available, retraining models to improve performance, and retiring models that are no longer effective. Develop and implement policies and procedures for ensuring the ethical and responsible use of machine learning models, including addressing issues related to bias, fairness, transparency, and accountability.

Development of data science assets: Identify cross-use-case data science needs that could be mutualised in a reusable piece of code. Design, contribute to and participate in the implementation of Python libraries answering a transversal data science need that can be reused in several projects. Maintain existing data science assets (timeseries forecasting asset, model monitoring asset). Create documentation and a knowledge base on data science assets to ensure a good understanding from users. Participate in asset demos to showcase new features to users.

Be an active member of the Sodexo Data Science Community: Participate in the definition and maintenance of engineering standards and a set of good practices around machine learning. Participate in data science team meetings and regularly share knowledge, ask questions, and learn from others. Mentor and guide junior machine learning engineers and data scientists. Participate in relevant internal or external conferences and meetups.
Continuous Improvements: Stay up to date with the latest developments in the field: read research papers, attend conferences, and participate in trainings to expand your knowledge and skills. Identify and evaluate new technologies and tools that can improve the efficiency and effectiveness of machine learning projects. Propose and implement optimizations for current machine learning workflows and systems. Proactively identify areas of improvement within the pipelines. Make sure that created code is compliant with our set of engineering standards.

Collaboration with other data experts (Data Engineers, Platform Engineers, and Data Analysts): Participate in pull request reviews from other team members. Ask for review and comments when submitting your own work. Actively participate in the day-to-day life of the project (Agile rituals), the data science team (DS meeting) and the rest of the Global Engineering team.

4. Education & Experience – Indicate the skills, knowledge and experience that the job holder should require to conduct the role effectively.
Engineering Master's degree or PhD in Data Science, Statistics, Mathematics, or related fields. 5+ years' experience in a Data Scientist / Machine Learning Engineer role in large corporate organizations. Experience of working with ML models in a cloud ecosystem.

Statistics & Machine Learning:
Statistics: Strong understanding of statistical analysis and modelling techniques (e.g., regression analysis, hypothesis testing, time series analysis).
Classical ML: Very strong knowledge of classical ML algorithms for regression and classification, supervised and unsupervised machine learning, both theoretical and practical (e.g. using scikit-learn, xgboost).
ML niche: Expertise in at least one of the following ML specialisations: timeseries forecasting / Natural Language Processing / Computer Vision.
Deep Learning: Good knowledge of Deep Learning fundamentals (CNN, RNN, transformer architecture, attention mechanism, …) and one of the deep learning frameworks (pytorch, tensorflow, keras).
Generative AI: Good understanding of Generative AI specificities; previous experience working with Large Language Models is a plus (e.g. with openai, langchain).

MLOps:
Model strategy: Expertise in designing, implementing, and testing machine learning strategies.
Model integration: Very strong skills in integrating a machine learning algorithm into a data science application in production.
Model performance: Deep understanding of model performance evaluation metrics and existing libraries (e.g., scikit-learn, evidently).
Model deployment: Experience in deploying and managing machine learning models in production, using either a specific cloud platform, model serving frameworks, or containerization.
Model monitoring: Experience with model performance monitoring tools is a plus (Grafana, Prometheus).

Software Engineering:
Python: Very strong coding skills in Python including modularity, OOP and data & config manipulation frameworks (e.g., pandas, pydantic).
Python ecosystem: Strong knowledge of tooling in the Python ecosystem such as dependency management tooling (venv, poetry), documentation frameworks (e.g. sphinx, mkdocs, jupyter-book) and testing frameworks (unittest, pytest).
Software engineering practices: Experience in putting in place good software engineering practices such as design patterns, testing (unit, integration), clean code and code formatting.
Debugging: Ability to troubleshoot and debug issues within machine learning pipelines.

Data Science Experimentation and Analytics:
Data Visualization: Knowledge of data visualization tools such as plotly, seaborn, matplotlib, etc. to visualise, interpret and communicate the results of machine learning models to stakeholders. Basic knowledge of PowerBI is a plus.
Data Cleaning: Experience with data cleaning and preprocessing techniques such as feature scaling, dimensionality reduction, and outlier detection (e.g. with pandas, scikit-learn).
Data Science Experiments: Understanding of experimental design and A/B testing methodologies.

Data Processing:
Databricks/Spark: Basic knowledge of PySpark for big data processing.
Databases: Basic knowledge of SQL to query data in internal systems.
Data Formats: Familiarity with different data storage formats such as Parquet and Delta.

DevOps:
Azure DevOps: Experience using a DevOps platform such as Azure DevOps for Boards, Repositories and Pipelines.
Git: Experience working with code versioning (git), branch strategies, and collaborative work with pull requests. Proficient with the most basic git commands.
CI/CD: Experience in implementing/maintaining pipelines for continuous integration (including execution of the testing strategy) and continuous deployment is preferable.

Cloud Platform:
Azure Cloud: Previous experience with services like Azure Machine Learning Services and/or Azure Databricks on Azure is preferable.

Soft skills: Strong analytical and problem-solving skills, with attention to detail. Excellent verbal and written communication and pedagogical skills with technical and non-technical teams. Excellent teamwork and collaboration skills. Adaptability and reactivity to new technologies, tools, and techniques. Fluent in English.

5. Competencies – Indicate which of the Sodexo core competencies and any professional competencies the role requires: Communication & Collaboration; Adaptability & Agility; Analytical & technical skills; Innovation & Change; Rigorous Problem Solving & Troubleshooting.
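To ground the classical-ML and model-evaluation skills listed above, here is a minimal, self-contained scikit-learn sketch: a preprocessing-plus-classifier pipeline trained and scored on a synthetic dataset. It is illustrative only, under the assumption of a simple tabular classification task, and does not reflect the actual codebase or data.

```python
# Minimal illustrative sketch of a classical ML workflow (not the employer's actual code):
# synthetic data -> train/test split -> scaling + logistic regression -> evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),                   # preprocessing step
    ("clf", LogisticRegression(max_iter=1_000)),   # classical classifier
])
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out set.
print(classification_report(y_test, model.predict(X_test)))
```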

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mohali, Punjab

On-site


Chicmic Studios
Job Role: Data Scientist
Experience Required: 3+ Years
Skills Required: Data Science, Python, Pandas, Matplotlib

Job Description: We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.

Roles & Duties: Analyze and process large datasets using Python and Pandas. Develop and optimize machine learning models for predictive analytics. Create data visualizations using Matplotlib and Seaborn to support decision-making. Perform data cleaning, feature engineering, and statistical analysis. Work with structured and unstructured data to extract meaningful insights. Implement and fine-tune NER models for specific use cases (if required). Collaborate with cross-functional teams to drive data-driven solutions.

Required Skills & Qualifications: Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.). Experience in data analysis, statistical modeling, and machine learning. Hands-on expertise in data visualization using Matplotlib and Seaborn. Understanding of SQL and database querying. Familiarity with NLP techniques and NER models is a plus. Strong problem-solving and analytical skills.

Contact: 9875952836
Office Address: F273, Phase 8B Industrial Area, Mohali, Punjab.
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
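As a small, hypothetical example of the Pandas/Matplotlib work described above, the sketch below builds a toy DataFrame and plots a grouped summary; the column names and values are invented purely for illustration.

```python
# Hypothetical sketch: summarise a toy dataset with pandas and plot it with Matplotlib.
# All column names and values are invented for illustration.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South", "East"],
    "sales": [120, 95, 140, 80, 110, 70],
})

by_region = df.groupby("region")["sales"].mean().sort_values()

fig, ax = plt.subplots(figsize=(6, 4))
by_region.plot(kind="bar", ax=ax)
ax.set_xlabel("Region")
ax.set_ylabel("Average sales")
ax.set_title("Average sales by region (toy data)")
fig.tight_layout()
plt.show()
```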

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


We are seeking a passionate and dedicated machine learning intern to join our team. As an intern, you will work closely with experienced data scientists and engineers to develop, test, and deploy machine learning models. This is an excellent opportunity to gain hands-on experience with cutting-edge technologies and contribute to real-world projects.

Selected Intern's Day-to-Day Responsibilities Include: Collaborate with the team to collect, preprocess, and analyze datasets. Develop and train machine learning models for specific use cases. Evaluate model performance using appropriate metrics and improve accuracy and efficiency. Assist in deploying ML models into production environments. Document processes, findings, and insights throughout the project lifecycle. Stay updated on the latest trends and advancements in machine learning and AI.

Qualifications: Currently pursuing or recently completed a degree in Computer Science, Data Science, Machine Learning, or a related field. Strong knowledge of programming languages such as Python (preferred) or R. Familiarity with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn. Basic understanding of supervised and unsupervised learning, NLP, or computer vision. Hands-on experience with data preprocessing and visualization libraries (e.g., Pandas, NumPy, Matplotlib). Problem-solving mindset and eagerness to learn new technologies.

Preferred Skills: Experience with SQL and working with databases. Knowledge of cloud platforms like AWS, Google Cloud, or Azure. Understanding of version control systems (e.g., Git).

About Company: LineupX is a B2B company that streamlines talent acquisition in just 4 easy steps. Our solution integrates human and machine intelligence to deliver an unparalleled experience for organizations and individuals alike. The industry we aim to revolutionize is currently valued at $400 billion globally. LineupX utilizes technology-driven recruitment solutions, employing machine learning and human expertise to redefine global talent acquisition practices. LineupX is an early-stage startup with promising traction, supported by angel investments, and generating revenue. We have been recognized by SINE IIT Bombay as a high-potential startup and endorsed by Canqbate50, a Canada-based startup program aimed at identifying and funding the top 50 startups from India to establish operations in Canada.
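For interns new to the stack, here is a tiny, hypothetical preprocessing sketch of the kind mentioned in the responsibilities: filling missing values and scaling numeric features with pandas and scikit-learn. The column names and values are illustrative assumptions only.

```python
# Hypothetical preprocessing sketch: handle missing values and scale numeric features.
# Column names ("age", "income") and values are illustrative only.
import pandas as pd
from sklearn.preprocessing import StandardScaler

raw = pd.DataFrame({
    "age": [25, None, 31, 47, 38],
    "income": [42_000, 55_000, None, 61_000, 48_000],
})

# Impute missing values with the column median, then standardise to zero mean / unit variance.
filled = raw.fillna(raw.median(numeric_only=True))
scaled = pd.DataFrame(StandardScaler().fit_transform(filled), columns=filled.columns)
print(scaled.round(2))
```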

Posted 1 week ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

What
Primary Responsibilities:
Business Knowledge: Capable of understanding the requirements for the entire project (not just own features). Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements. Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them.
Design: Can design and implement machine learning models and algorithms. Can articulate and evaluate pros/cons of different AI/ML approaches. Can generate cost estimates for model training and deployment.
Coding/Testing: Builds and optimizes machine learning pipelines. Knows and brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment.

How
Quality: Solves cross-functional problems using data-driven approaches. Identifies impacts/side effects of models outside of the immediate scope of work. Identifies cross-module issues related to data integration and model performance. Identifies problems predictively using data analysis.
Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them.
Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments. Prioritizes large tasks and projects effectively.
Agility:
Release Planning: Works with the PO to do high-level release commitment and estimation. Works with the PO on defining stories of appropriate size for model development.
Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration. Shows Agile leadership qualities and leads by example.

With
Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs.
Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution. Capable of thinking outside the box to view the system as it should be rather than only how it is. Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense. Takes initiative to learn how AI/ML technology is evolving outside the organization. Takes initiative to learn how the system can be improved for the customers. Treats problems as openings for innovation.
Communication: Communicates complex AI/ML concepts internally with ease.
Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play.
Leadership: Disagrees without being disagreeable. Uses conflict as a way to drill deeper and arrive at better decisions. Provides frequent mentorship. Builds ad-hoc cross-department teams for specific projects or problems. Can achieve broad 'buy-in' across project teams and across departments. Takes calculated risks.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant). 5+ years of experience working on multiple layers of technology. Experience deploying and maintaining ML models in production. Experience in Agile teams. Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow, etc.). Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI). Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions. Familiarity with traditional software monitoring, scaling, and quality management (QMS). Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms. Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.). Demonstrated hands-on knowledge of open-source adoption and use cases. Good understanding of data/information security. Proficient in data structures, ML algorithms, and the ML lifecycle.

Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch. Programming Languages: Python, R, Java. Data Processing: Pandas, NumPy, Spark. Visualization: Matplotlib, Seaborn, Plotly. Familiarity with model versioning tools (MLflow, etc.). Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI. GenAI: OpenAI, LangChain, RAG, etc.

Demonstrates good knowledge of engineering practices. Demonstrates excellent problem-solving skills. Proven excellent verbal, written, and interpersonal communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
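The listing names MLflow among the model versioning tools; the sketch below shows the general shape of experiment tracking with it, under the assumption of a local tracking store and a toy scikit-learn model. It is a hedged illustration, not Optum's pipeline.

```python
# Illustrative sketch of MLflow experiment tracking (assumes a local tracking store;
# this is not Optum's actual pipeline). Logs parameters, a metric, and the fitted model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```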

Posted 1 week ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

What
Primary Responsibilities:
Business Knowledge: Capable of understanding the requirements for the entire project (not just own features). Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements. Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them.
Design: Can design and implement machine learning models and algorithms. Can articulate and evaluate pros/cons of different AI/ML approaches. Can generate cost estimates for model training and deployment.
Coding/Testing: Builds and optimizes machine learning pipelines. Knows and brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment.

How
Quality: Solves cross-functional problems using data-driven approaches. Identifies impacts/side effects of models outside of the immediate scope of work. Identifies cross-module issues related to data integration and model performance. Identifies problems predictively using data analysis.
Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them.
Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments. Prioritizes large tasks and projects effectively.
Agility:
Release Planning: Works with the PO to do high-level release commitment and estimation. Works with the PO on defining stories of appropriate size for model development.
Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration. Shows Agile leadership qualities and leads by example.

With
Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs.
Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution. Capable of thinking outside the box to view the system as it should be rather than only how it is. Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense. Takes initiative to learn how AI/ML technology is evolving outside the organization. Takes initiative to learn how the system can be improved for the customers. Treats problems as openings for innovation.
Communication: Communicates complex AI/ML concepts internally with ease.
Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play.
Leadership: Disagrees without being disagreeable. Uses conflict as a way to drill deeper and arrive at better decisions. Provides frequent mentorship. Builds ad-hoc cross-department teams for specific projects or problems. Can achieve broad 'buy-in' across project teams and across departments. Takes calculated risks.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant). 4+ years of experience working on multiple layers of technology. Experience deploying and maintaining ML models in production. Experience in Agile teams. Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow, etc.). Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI). Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions. Familiarity with traditional software monitoring, scaling, and quality management (QMS). Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms. Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.). Demonstrated hands-on knowledge of open-source adoption and use cases. Good understanding of data/information security. Proficient in data structures, ML algorithms, and the ML lifecycle.

Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch. Programming Languages: Python, R, Java. Data Processing: Pandas, NumPy, Spark. Visualization: Matplotlib, Seaborn, Plotly. Familiarity with model versioning tools (MLflow, etc.). Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI. GenAI: OpenAI, LangChain, RAG, etc.

Demonstrates good knowledge of engineering practices. Demonstrates excellent problem-solving skills. Proven excellent verbal, written, and interpersonal communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
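This listing also calls out data-oriented workflow orchestration (e.g., Airflow). Below is a minimal, hypothetical DAG sketch showing how a preprocessing task could feed a training task; the task bodies, IDs, and schedule are placeholders and not an actual Optum workflow.

```python
# Hypothetical Airflow DAG sketch: a daily pipeline where preprocessing runs before training.
# Task bodies, IDs, and the schedule are placeholders, not an actual production workflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def preprocess():
    print("load raw data, clean it, write features")  # placeholder work


def train():
    print("train model on prepared features")  # placeholder work


with DAG(
    dag_id="example_ml_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    preprocess_task = PythonOperator(task_id="preprocess", python_callable=preprocess)
    train_task = PythonOperator(task_id="train", python_callable=train)

    preprocess_task >> train_task  # training depends on preprocessing
```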

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 20 Lacs

Bengaluru

Work from Office


About The Role
Mandatory Skills: Gen AI, LLM, GPT-3, RAG, LangChain, Llama, AI/ML, DL, NLP or Computer Vision, Python, TensorFlow, PyTorch, Django, Keras, Pandas, NumPy.
Preferred Skills: AWS or GCP or Azure Cloud, MLOps, GPT-4, SQL, Scikit-learn, Matplotlib, Seaborn, Hadoop, Spark, or Apache Flink, banking exposure.
Required Skills:
Programming Languages: Proficiency in Python, R, or Java. Experience with libraries and frameworks such as TensorFlow, PyTorch, Keras, Scikit-learn, etc.
Machine Learning: Strong understanding of machine learning algorithms, including supervised and unsupervised learning, reinforcement learning, and deep learning.
Data Science: Experience with data analysis, data visualization, and statistical modeling. Proficiency in using tools like Pandas, NumPy, Matplotlib, and Seaborn.
Big Data Technologies: Familiarity with big data processing frameworks like Hadoop, Spark, or Apache Flink.
Cloud Platforms: Experience with cloud services such as AWS, Google Cloud, or Azure for deploying AI solutions.
Natural Language Processing (NLP): Knowledge of NLP techniques and tools such as NLTK, SpaCy, or GPT-3.
Computer Vision: Experience with computer vision techniques and libraries such as OpenCV, YOLO, or Fast R-CNN.
Problem-Solving: Strong analytical and problem-solving skills with the ability to think critically and creatively.
Communication: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
Team Collaboration: Ability to work effectively in a collaborative team environment and contribute to team success.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
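As a compact illustration of the deep-learning framework skills this role lists, here is a minimal PyTorch sketch: a small feed-forward classifier trained on random tensors. The architecture, data, and training-loop length are placeholders, not anything from the actual role.

```python
# Minimal PyTorch sketch: a small feed-forward classifier trained on random data.
# Layer sizes, data, and the number of epochs are placeholders for illustration.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),   # two output classes
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

X = torch.randn(256, 20)              # random features
y = torch.randint(0, 2, (256,))       # random binary labels

for epoch in range(5):                # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```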

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description: As a Principal AI Engineer, you will be part of a high-performing team working on exciting opportunities in AI within Ford Credit. We are looking for a highly skilled, technical, hands-on AI engineer with a solid background in building end-to-end AI applications, exhibiting a strong aptitude for learning and keeping up with the latest advances in AI: Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoder, decoder), Natural Language Processing, and Generative AI (LLMs, LangChain, RAG, vector databases). You should be able to lead technical discussions and act as a technical mentor for the team.

Responsibilities: Excellent communication and presentation skills. Ability to manage stakeholders. Ability to collaborate with a cross-functional team involving data engineers, solution architects, application engineers, and product teams across time zones to develop data and model pipelines. Ability to drive and mentor the team technically, leveraging cutting-edge AI and Machine Learning principles, and develop production-ready AI solutions. Mentor the team of data scientists and assume responsibility for the delivery of use cases. Ability to scope the problem statement, prepare data, train models and make the AI model production ready. Work with business partners to understand the problem statement and translate it into an analytical problem. Ability to manipulate structured and unstructured data. Develop, test and improve existing machine learning models. Analyse large and complex data sets to derive valuable insights. Research and implement best practices to enhance existing machine learning infrastructure. Develop prototypes for future exploration. Design and evaluate approaches for handling large volumes of real data streams. Ability to determine appropriate analytical methods to be used. Understanding of statistics and hypothesis testing.

Qualifications
Professional Experience: Potential candidates should possess 10+ years of strong working experience in AI. BE/MSc/MTech/ME/PhD (Computer Science/Maths, Statistics). Possess a strong analytical mindset and be very comfortable with data. Experience with handling both relational and non-relational data. Hands-on experience with analytics methods (descriptive/predictive/prescriptive), statistical analysis, probability and data visualization tools (Python – Matplotlib, Seaborn). Background in software engineering with excellent data science working experience.
Technical Experience: Machine Learning (supervised/unsupervised learning), Neural Networks (ANN, CNN, RNN, LSTM, decision trees, encoder, decoder), Natural Language Processing, Generative AI (LLMs, LangChain, RAG, vector databases).
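Since the qualifications mention statistics and hypothesis testing alongside Python tooling, here is a small, hypothetical SciPy sketch of a two-sample t-test on simulated data; the samples and the 5% significance level are illustrative choices, not anything specified by the posting.

```python
# Hypothetical sketch: two-sample t-test on simulated data with SciPy.
# The generated samples and the 0.05 significance level are illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=100.0, scale=15.0, size=200)  # e.g. a control group metric
group_b = rng.normal(loc=104.0, scale=15.0, size=200)  # e.g. a treatment group metric

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0 at the 5% level" if p_value < 0.05 else "Fail to reject H0 at the 5% level")
```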

Posted 1 week ago

Apply