626 Matplotlib Jobs - Page 15

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Data Analytics Specialist (Job Location: Gurgaon)

We are seeking a highly skilled and experienced Data Analytics Specialist with 5+ years of professional experience, preferably in the retail industry for the European market. The ideal candidate will have deep expertise in SQL and Excel, as well as a solid working knowledge of statistics using Python. This role requires a strategic thinker who can transform data into actionable insights to support marketing and business objectives.

Key Responsibilities
- Design and implement data analysis to support marketing initiatives and strategic decision-making.
- Develop and maintain SQL queries to extract and manipulate data from various sources.
- Utilize Python for statistical analysis and predictive modeling to forecast trends and measure campaign effectiveness.
- Collaborate with marketing, sales, and product teams to understand data needs and deliver actionable insights.
- Ensure data quality and consistency across marketing datasets and platforms.
- Identify opportunities for process improvements and optimization through data analysis.

Qualifications
- 5+ years of experience in data analysis or marketing analytics roles.
- Advanced proficiency in SQL for querying and managing large datasets.
- Expert-level skills in Microsoft Excel, including pivot tables, VLOOKUP, and advanced formulas.
- Proficiency in Python for statistical analysis and data visualization (e.g., pandas, matplotlib, seaborn).
- Strong analytical and problem-solving skills with keen attention to detail.
- Excellent communication skills and the ability to present complex information clearly and concisely.
- A Bachelor's or Master's degree in Statistics, Mathematics, Marketing, Computer Science, or a related field is preferred.
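The SQL-plus-Python statistics workflow this posting describes can be sketched with the standard library alone (sqlite3 for the SQL side, the statistics module for the analysis; in practice pandas and matplotlib would be used). The campaign table and figures below are hypothetical.

```python
import sqlite3
import statistics

# Hypothetical campaign data: (campaign, channel, spend, conversions)
rows = [
    ("spring_sale", "email", 1200.0, 48),
    ("spring_sale", "social", 2500.0, 60),
    ("festive_push", "email", 900.0, 45),
    ("festive_push", "social", 3100.0, 62),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE campaigns (name TEXT, channel TEXT, spend REAL, conversions INTEGER)")
con.executemany("INSERT INTO campaigns VALUES (?, ?, ?, ?)", rows)

# SQL side: cost per conversion by channel
query = """
SELECT channel,
       SUM(spend) / SUM(conversions) AS cost_per_conversion
FROM campaigns
GROUP BY channel
ORDER BY channel
"""
by_channel = con.execute(query).fetchall()

# Python side: simple descriptive statistics on conversions
conversions = [r[3] for r in rows]
mean_conv = statistics.mean(conversions)
stdev_conv = statistics.stdev(conversions)
```

The same split shows up in the role itself: aggregation pushed into SQL, statistical summaries computed in Python.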

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Quality Analytics. Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should be able to translate a business problem into an analytical problem, identify the relevant datasets needed to address it, recommend, implement, and validate the best-suited analytical algorithm(s), and generate and deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and stay at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Responsibilities
- Understand business requirements and analyze datasets to determine suitable approaches that meet analytic business needs and support data-driven decision-making
- Design and implement data analysis and ML models, hypotheses, algorithms, and experiments to support data-driven decision-making
- Apply analytics techniques such as data mining, predictive and prescriptive modeling, statistics, advanced analytics, and machine learning algorithms to uncover meaningful patterns, relationships, and trends in data
- Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation
- Research and stay up to date in data science, machine learning, and analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness

Qualifications

Minimum Qualifications
- Bachelor's degree in Data Science, Computer Science, Operational Research, Statistics, Applied Mathematics, or another engineering discipline
- 3+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim
- 2+ years of experience with both supervised and unsupervised machine learning techniques
- 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, and Seaborn, or data visualization tools like Dash or QlikSense
- 1+ years of experience with SQL and relational databases

Preferred Qualifications
- An MS/PhD in Computer Science, Operational Research, Statistics, Applied Mathematics, or another engineering discipline; PhD strongly preferred
- Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment
- Experience with Git and GitHub for version control and collaboration
- Besides Python, familiarity with one additional programming language (e.g., C/C++/Java)
- Strong background in mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and the under-the-hood details of neural networks
- Experience working with large language models such as GPT-4, Google PaLM, Llama 2, etc.
- Excellent problem-solving, communication, and data presentation skills
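The supervised workflow the qualifications above ask for can be sketched in plain Python with a tiny nearest-centroid classifier (in practice scikit-learn, e.g. `NearestCentroid`, would be used); the toy 2-D data and class labels are hypothetical.

```python
import math

# Toy 2-D training data: two labeled classes (hypothetical measurements)
train = [((1.0, 1.2), "A"), ((0.8, 1.0), "A"), ((3.0, 3.2), "B"), ((3.2, 2.9), "B")]

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# "Fit": compute one centroid per class from the labeled data
centroids = {}
for label in {lbl for _, lbl in train}:
    centroids[label] = centroid([x for x, lbl in train if lbl == label])

def predict(x):
    """Assign x to the class with the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))
```

Unsupervised learning differs only in that the labels themselves would be discovered (e.g., by k-means) rather than given.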

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS is conducting virtual interviews for the AI Engineer role on 7 June 2025 (Saturday). Mode of interview: online. Experience required: 5 to 10 years.

Job Description

Must-Have
- 5-10 years of IT experience
- Strong programming skills in Python
- Expertise in machine learning is a must: in-depth knowledge of various machine learning models and techniques, including deep learning, supervised and unsupervised learning, natural language processing, and reinforcement learning
- Expertise in data analysis and visualization to extract insights from large datasets and communicate them
- Good knowledge of data mining, statistical methods, data wrangling, and visualization tools like Power BI, Tableau, and matplotlib
- Hands-on skills in Data Manipulation Language (DML)
- Expertise in machine learning frameworks: TensorFlow, Scikit-Learn, and PyTorch

Good-to-Have
- Gen AI certification
- Experience with containers (Docker), Kubernetes, Kafka (or another messaging platform), Apache Camel, RabbitMQ, ActiveMQ, storage/RDBMS, and NoSQL databases

Posted 2 weeks ago

Apply

3.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As an Associate Data Scientist at IBM, you will work to solve business problems using leading-edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In your role, you may be responsible for:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects
- Writing programs to cleanse and integrate data in an efficient and reusable manner
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions
- Evaluating modelling results and communicating them to technical and non-technical audiences

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Proof of Concept (POC) development: develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions
- Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations
- Help showcase the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs
- Document solution architectures, design decisions, implementation details, and lessons learned
- Create technical documentation, white papers, and best-practice guides

Preferred technical and professional experience
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face
- Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms
- Working knowledge of COBOL and Java would be preferred
- Experience in Python and PySpark is an added advantage

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Title: Software Engineer
Location: Multiple Locations
Job Term: Full-Time

The Opportunity: At Picarro, Software Engineering focuses on developing and deploying industry vertical applications to clients in the Scientific and Energy communities. This specific role is focused on the suite of solutions used by our gas utility and pipeline customers, such as greenhouse gas emissions quantification, pipe replacement, and advanced leak detection. The majority have a web-based user interface, but the backend utilizes geoprocessing, data, and ML services. While the products are designed to meet the needs of the industry, they sit within Picarro's larger analytical suite and distributed framework, so a wider collection of skills is desired. The software engineer participates in the design, programming, testing, documentation, and implementation of applications and related processes/systems. You may also be required to identify and evaluate development options, assess future needs for key technologies and techniques, and develop plans for adoption. This position reports to the GIS Software Development Manager and is based on site in our Bangalore, India office.

Key Responsibilities:
- Work directly with product stakeholders and product management to understand product use cases and synthesize business requirements.
- Design, develop, and deploy high-performance multi-tenant dashboard applications using Dash Enterprise.
- Write production-quality code that creates responsive web applications.
- Handle multiple technical requests and project deadlines simultaneously.
- Collaborate daily with a global team of software engineers, data scientists, analysts, and product managers using a variety of online communication channels.
- Apply software development best practices, including version control (Git), code review, and testing.
- Document technical details of work using Jira and Confluence.

Desired Skills and Experience:
- 8+ years of overall software development experience.
- 5+ years of experience developing responsive web applications using HTML, CSS, and JavaScript.
- 3+ years of Python experience, specifically in an object-oriented structure.
- Experience with common data analytics and visualization libraries such as NumPy, Pandas, json, SQLAlchemy, Plotly, and/or Matplotlib.
- Experience with geospatial libraries such as Shapely, GeoPandas, GDAL/OGR, and PyProj is a plus.
- 1+ years with SQL for analytical use cases.
- 1+ years of experience with a modern web UI library, like React, Vue, Angular, or Svelte.
- 1+ years of experience developing web applications using Python.
- 1+ years of experience with at least one common data visualization tool such as Tableau, Power BI, Qlik, or Dash Enterprise.
- 1+ years of cloud development (e.g., AWS, Azure, Google Cloud) and software container technologies (e.g., Docker, Kubernetes).
- Familiarity with Agile methodologies and processes.
- Familiarity with gas distribution company processes and/or pipeline and distribution network data.
- Bachelor's or Master's degree in computer science, engineering, GIS, geography, or a related field.

About Picarro: Picarro, Inc. is the world's leading producer of greenhouse gas and optical stable isotope instruments, which are used in a wide variety of scientific and industrial applications, including atmospheric science, air quality, greenhouse gas measurements, gas leak detection, food safety, hydrology, ecology, and more. The company's products are all designed and manufactured at Picarro's Santa Clara, California headquarters and exported to countries worldwide. Picarro's products are based on dozens of patents related to cavity ring-down spectroscopy (CRDS) technology. Picarro's solutions are unparalleled in their precision, ease of use, portability, and reliability. Honors awarded the company include the World Economic Forum Technology Innovation Pioneer, IHS CERA Energy Innovation Pioneer, the U.S. Department of Energy Small Business of the Year, the TiE50 Winner, and the Red Herring Global 100 Winner. Key investors include Benchmark Capital Partners, Greylock Management Corporation, Duff, Ackerman & Goodrich, Stanford University, Focus Ventures, Mingxin China Growth Ltd., NTT Finance, and Weston Presidio Capital.

All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, national origin, protected veteran status, gender identity, or sexual orientation, or on the basis of disability. Posted positions are not open to third-party recruiters/agencies, and unsolicited resume submissions will be considered a free referral. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact Picarro, Inc. at disabilityassistance@picarro.com for assistance.
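The geoprocessing backend this role mentions can be illustrated with a minimal pure-Python ray-casting point-in-polygon test (in practice a library such as Shapely, via `Polygon.contains`, would be used); the "service area" coordinates are hypothetical.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: count crossings of a horizontal ray from (x, y).

    An odd number of edge crossings means the point lies inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the edge (x1, y1)-(x2, y2) straddle the ray's y level?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical service area: a unit square
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

The same primitive underlies questions like "is this leak indication inside the utility's coverage polygon?", just at real-world scale and with proper coordinate projections (PyProj).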

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

TechnipFMC is committed to driving real change in the energy industry. Our ambition is to build a sustainable future through relentless innovation and global collaboration, and we want you to be part of it. You'll be joining a culture that values curiosity, expertise, and ideas, as well as diversity, inclusion, and authenticity. Bring your unique energy to our team of more than 20,000 people worldwide, and discover a rewarding, fulfilling, and varied career that you can take anywhere you want to go.

Job Purpose
We are seeking a skilled Python Developer to join our team and help us develop applications and tooling to streamline in-house engineering design processes, with a continuous concern for quality, targets, and customer satisfaction.

Job Description
- Write clean and maintainable Python code following PEP guidelines
- Build and maintain software packages for scientific computing
- Build and maintain command-line interfaces (CLIs)
- Build and maintain web applications and dashboards
- Design and implement data analysis pipelines
- Create and maintain database schemas and queries
- Optimise code performance and scalability
- Develop and maintain automated tests to validate software
- Contribute and adhere to team software development practices, e.g., Agile product management, source code version control, continuous integration/deployment (CI/CD)
- Build and maintain machine learning models (appreciated, but not a prerequisite)

Technical Stack
- Languages: Python, SQL
- Core libraries: SciPy, Pandas, NumPy
- Web frameworks: Streamlit, Dash, Flask
- Visualisation: Matplotlib, Seaborn, Plotly
- Automated testing: pytest
- CLI development: Click, argparse
- Source code version control: Git
- Agile product management: Azure DevOps, GitHub
- CI/CD: Azure Pipelines, GitHub Actions, Docker
- Database systems: PostgreSQL, Snowflake, SQLite, HDF5
- Performance: Numba, Dask
- Machine learning: scikit-learn, TensorFlow, PyTorch (desired)

You Are Meant For This Job If
- You hold a Bachelor's degree in computer science or software engineering (a Master's degree is a plus)
- You have a strong technical basis in engineering
- You have presentation skills
- You have good organizational and problem-solving skills
- You are service/customer oriented
- You can work in a team-oriented environment
- You have a good command of English

Skills: Spring Boot, Data Modelling, CI/CD, Internet of Things (IoT), Jira/Confluence, React/Angular, SAFe, Scrum, Kanban, Collaboration, SQL, Bash/Shell/PowerShell, AWS S3, AWS Lambda, Cypress/Playwright, Material Design, Empirical Thinking, Agility, GitHub, HTML/CSS, JavaScript/TypeScript, GraphQL, Continuous Learning, Cybersecurity, Computer Programming, Java/Kotlin, Test Driven Development

Being a global leader in the energy industry requires an inclusive and diverse environment. TechnipFMC promotes diversity, equity, and inclusion by ensuring equal opportunities to all ages, races, ethnicities, religions, sexual orientations, gender expressions, disabilities, and all other pluralities. We celebrate who you are and what you bring. Every voice matters, and we encourage you to add to our culture. TechnipFMC respects the rights and dignity of those it works with and promotes adherence to internationally recognized human rights principles for those in its value chain.
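The CLI-development item in the stack can be sketched with the stdlib argparse (Click is the other option the posting names). The tool name, subcommand choices, and flags below are hypothetical.

```python
import argparse

def build_parser():
    """CLI skeleton for a hypothetical engineering-design helper tool."""
    parser = argparse.ArgumentParser(
        prog="designtool", description="Run a design-process analysis."
    )
    parser.add_argument("input", help="path to the input data file")
    parser.add_argument(
        "--pipeline", choices=["clean", "analyze"], default="analyze",
        help="which processing stage to run",
    )
    parser.add_argument("--verbose", action="store_true",
                        help="print progress details")
    return parser

# Parse a sample command line (instead of sys.argv) for demonstration
args = build_parser().parse_args(["data.csv", "--pipeline", "clean", "--verbose"])
```

Keeping the parser in a `build_parser()` factory makes it trivial to exercise with pytest, which matches the automated-testing item in the same stack.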

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

TCS is conducting a virtual drive on Friday, 6th June 2025.

Position: AI Engineer
Location: Pune
Years of Experience: 5-9 years (accurate)
Notice Period: Immediate joiners, or 0-30 days notice period
(Note: Candidates below this experience range will not be considered.)

Responsibilities:
- Strong understanding of machine learning techniques, including deep learning, reinforcement learning, and predictive modeling
- Demonstrable experience in AI, ML, NLP, or related fields, with a robust portfolio showcasing practical application of these technologies
- Proficient in Python and familiar with frameworks for AI development, such as TensorFlow, PyTorch, Keras, scikit-learn, NLTK, and LangChain
- Experience in utilizing Python libraries such as NumPy, Pandas, Matplotlib, and Plotly
- Experience with ML, deep learning, TensorFlow, Python, and NLP
- Experience in program leadership, governance, and change enablement
- Knowledge of basic algorithms, object-oriented and functional design principles, and best-practice patterns
- Experience in REST API development, NoSQL database design, and RDBMS design and optimizations
- Familiarity with modern data platforms and cloud services like AWS and Azure for deploying scalable AI solutions

Kindly attach your updated CVs.

Thanks & Regards,
Shilpa Silonee

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Elevate Your Impact Through Evalueserve

Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work® in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning and skill development, work-life balance, and equal opportunity for all.

About Risk and Quant Solutions (RQS)
Risk and Quant is one of the fastest-growing practices at Evalueserve. As an RQS team member, you will address some of the world's largest financial needs with technology-proven solutions. You will solve banking challenges and improve decision-making with award-winning solutions.

What you will be doing at Evalueserve
- Work on data manipulation, pre-processing, and analysis using Python packages such as NumPy, pandas, and matplotlib
- Data handling and data type conversions using SQL
- Analyse problems and provide solutions for them
- Run test management processes for medium- to large-scale projects (test strategy/approach, documentation, managing user acceptance testing, building test plans and test scenarios, building implementation plans)
- Analyse and debug issues seen while conducting tests

What we're looking for
- B.Tech from a reputed university
- 5-8 years of investment banking/BFSI experience
- Very good experience in Python and SQL; knowledge of both is mandatory
- FRM/PRM would be beneficial

Disclaimer: This job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modification to align with evolving circumstances.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Profile: Data Science Trainer (Full-time)

SkillCircle is seeking a passionate and experienced Data Science Trainer to join our team. The trainer will be responsible for delivering high-quality training sessions to students, professionals, and corporate clients, helping them gain expertise in data science concepts, tools, and applications. The ideal candidate should have strong technical knowledge, hands-on experience, and excellent teaching abilities.

Key Responsibilities:
- Design, develop, and deliver comprehensive training programs on Data Science, Machine Learning, Artificial Intelligence, and related technologies.
- Conduct interactive and engaging training sessions, both online and offline, for individuals and corporate clients.
- Explain complex concepts in a simplified and practical manner to cater to learners from different backgrounds.
- Provide hands-on training using Python, R, SQL, Tableau, Power BI, and cloud platforms like AWS/GCP/Azure.
- Develop and update course materials, including presentations, assignments, case studies, and projects.
- Guide students on real-world projects, capstone projects, and industry case studies.
- Evaluate students' performance, provide feedback, and help them improve their technical skills.
- Stay updated with the latest trends and advancements in Data Science, AI, and ML to enhance the training curriculum.
- Assist in curriculum development and enhancement based on industry demands.
- Conduct doubt-clearing sessions, mentorship programs, and career guidance for learners.
- Collaborate with the SkillCircle team to organize workshops, webinars, hackathons, and boot camps.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- 2+ years of experience in Data Science, AI, or Machine Learning, with at least 1 year of training/teaching experience.
- Proficiency in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch), R, SQL, and data visualization tools (Tableau, Power BI, Matplotlib, Seaborn).
- Strong knowledge of statistics, probability, data preprocessing, feature engineering, model building, and evaluation techniques.
- Hands-on experience with Machine Learning, Deep Learning, and NLP.
- Knowledge of Big Data technologies (Hadoop, Spark) and Cloud Computing (AWS, GCP, Azure) is a plus.
- Excellent communication, presentation, and public speaking skills.
- Ability to motivate, mentor, and guide students in their learning journey.
- Strong problem-solving skills and a passion for teaching.

Benefits & Perks:
- Opportunity to work with a dynamic team and grow in the EdTech industry.
- Access to SkillCircle's premium courses and learning materials.
- Networking opportunities with industry experts.
- Excellent learning culture, no work pressure.
- Performance-based incentives.

Job location: Gurugram
Salary: Open to negotiation
Immediate joiners, or candidates with a notice period under one week, will be preferred. You can directly reach out on (88104 42847) or (Ishika@skillcircle.in).
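One preprocessing step such a curriculum covers, z-score standardization, can be shown with the stdlib statistics module (scikit-learn's `StandardScaler` is the usual classroom tool); the sample feature values are hypothetical.

```python
import statistics

def standardize(values):
    """Rescale values to mean 0 and (population) standard deviation 1."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

# Hypothetical feature column before model training
scores = [10.0, 20.0, 30.0, 40.0]
z = standardize(scores)
```

Standardizing features this way keeps scale differences between columns from dominating distance-based models.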

Posted 2 weeks ago

Apply

1.0 years

2 - 7 Lacs

Bengaluru

On-site

Job Description

We are currently looking to hire a highly motivated Data Scientist who has the hunger to solve our complex technical and business challenges. If you want to be part of our journey and make an impact, apply now!

YOUR ROLE AT SIXT
- You will build and maintain robust ETL pipelines for collecting and processing data related to pricing, competitors, and ancillary products
- You will perform deep exploratory data analysis to uncover trends and insights
- You will generate clean, aggregated datasets to support reporting and dashboards
- You will collaborate with cross-functional teams to define data requirements and deliver actionable insights
- You will apply basic statistical models to forecast or explain pricing and customer behaviour
- You will create clear, concise visualizations to communicate findings to stakeholders

YOUR SKILLS MATTER
- B.Tech/B.E./Master's degree in Computer Science or a similar discipline
- 1-3 years of relevant experience in data engineering or data science
- Programming: proficiency in Python and Pandas for data manipulation and analysis
- ETL development: experience designing and implementing ETL pipelines, including data cleaning, aggregation, and transformation
- Workflow orchestration: hands-on experience with Airflow for scheduling and monitoring ETL jobs
- Cloud & serverless computing: exposure to AWS services such as Batch, Fargate, and Lambda for scalable data processing
- Containerization: familiarity with Docker for building and deploying reproducible environments
- EDA & visualization: strong exploratory data analysis skills and the ability to communicate insights using data visualization libraries (e.g., Matplotlib, Seaborn, Plotly)
- Basic predictive modelling: understanding of foundational machine learning techniques for inference and reporting
- Good communication skills

WHAT WE OFFER
- Cutting-edge tech: be part of a dynamic, tech-driven environment where innovation meets impact! We offer exciting challenges, cutting-edge technologies, and the opportunity to work with brilliant minds
- Competitive compensation: a market-leading salary with performance-based rewards
- Comprehensive benefits: health insurance, wellness programs, and generous leave policies
- Flexibility & work-life balance: our culture fosters continuous learning, collaboration, and flexibility, ensuring you grow while making a real difference. Hybrid work policies

Additional Information

About the department: Engineers take note: cutting-edge technology is waiting for you! We don't buy, we primarily do it all ourselves: all core systems, whether in the area of car sharing, car rental, ride hailing, and much more, are developed and operated by SIXT itself. Our technical scope ranges from cloud and on-site operations through agile software development. We rely on state-of-the-art frameworks and architectures and strive for a long-term technical approach. Exciting? Then apply now!

About us: We are a leading global mobility service provider with sales of €3.07 billion and around 9,000 employees worldwide. Our mobility platform ONE combines our products SIXT rent (car rental), SIXT share (car sharing), SIXT ride (cab, driver, and chauffeur services), and SIXT+ (car subscription), and gives our customers access to our fleet of 222,000 vehicles, the services of 1,500 cooperation partners, and around 1.5 million drivers worldwide. Together with our franchise partners, we are present in more than 110 countries at 2,098 rental stations. At SIXT, a first-class customer experience and outstanding customer service are our top priorities. We focus on true entrepreneurship and long-term stability and align our corporate strategy with foresight. Want to take off with us and revolutionize the world of mobility? Apply now!
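The extract-transform-load flow this role centers on can be sketched in plain Python (in practice Pandas would do the transforms and an Airflow DAG would orchestrate the stages); the competitor pricing records below are hypothetical.

```python
# Hypothetical raw pricing records scraped from competitor feeds
raw = [
    {"vendor": "acme", "car_class": "compact", "price": "29.9"},
    {"vendor": "acme", "car_class": "compact", "price": "31.1"},
    {"vendor": "zoomy", "car_class": "suv", "price": "n/a"},   # malformed row
    {"vendor": "zoomy", "car_class": "suv", "price": "55.0"},
]

def extract(records):
    """Extract: yield rows as-is from the source."""
    yield from records

def transform(rows):
    """Transform: parse prices, dropping rows that fail to parse."""
    for row in rows:
        try:
            yield {**row, "price": float(row["price"])}
        except ValueError:
            continue  # skip malformed records

def load(rows):
    """Load: aggregate average price per (vendor, car_class)."""
    totals = {}
    for row in rows:
        key = (row["vendor"], row["car_class"])
        total, count = totals.get(key, (0.0, 0))
        totals[key] = (total + row["price"], count + 1)
    return {key: total / count for key, (total, count) in totals.items()}

summary = load(transform(extract(raw)))
```

Each stage is a separate function precisely so a scheduler can run, retry, and monitor them independently.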

Posted 2 weeks ago

Apply

8.0 - 12.0 years

25 - 32 Lacs

Bengaluru

Work from Office

Robust knowledge of Python programming (key libraries: pandas, openpyxl, pptx, matplotlib, shapefile). Solid expertise with Gradio, the main platform used by the user as an interface, to create input/output visualizations. Proficient in VS Code and ADO pipeline management. Fluent in the construction of data visualizations and analytics.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Data Science Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship

About WebBoost Solutions by UM
WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.

Responsibilities
✅ Collect, preprocess, and analyze large datasets.
✅ Develop predictive models and machine learning algorithms.
✅ Perform exploratory data analysis (EDA) to extract meaningful insights.
✅ Create data visualizations and dashboards for effective communication of findings.
✅ Collaborate with cross-functional teams to deliver data-driven solutions.

Requirements
🎓 Enrolled in or a graduate of a program in Data Science, Computer Science, Statistics, or a related field.
🐍 Proficiency in Python or R for data analysis and modeling.
🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred).
📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib).
🧐 Strong analytical and problem-solving skills.
🗣 Excellent communication and teamwork abilities.

Stipend & Benefits
💰 Stipend: ₹7,500 - ₹15,000 (performance-based).
✔ Hands-on experience in data science projects.
✔ Certificate of Internship & Letter of Recommendation.
✔ Opportunity to build a strong portfolio of data science models and applications.
✔ Potential for full-time employment based on performance.

How to Apply
📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application."
📅 Deadline: 3rd June 2025

Equal Opportunity
WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Chennai, Delhi / NCR, Bengaluru

Work from Office

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports
- Mentor junior analysts and establish best practices for data visualization

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker)
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka)
- Exposure to machine learning and AI-powered analytics

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly)
- Hands-on experience with BI automation and AI-driven analytics

Who can be a part of the community? We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Bengaluru Area

On-site

Linkedin logo

Job Title: Senior Data Scientist (SDS 2) Experience: 4+ years Location: Bengaluru (Hybrid) Company Overview: Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors. Job Description: We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge Classical Machine Learning, Deep Learning, and Generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment. Key Responsibilities: ML/DL Solution Development & Deployment: Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code. Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions). Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
Ability to model different marketing scenarios across a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk, and churn) and data limitations (sparse or incomplete labels, single-class learning). Large-Scale Data Handling & Processing: Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark). Generative AI & Large Language Models (LLMs): Leverage an in-depth understanding of transformer architectures and the principles of Large and Small Language Models. Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data. Apply a foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReAct, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and Human-in-the-Loop systems. Experimentation, Analysis & System Design: Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements. Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time. Collaboration, Communication & Mentorship: Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members. Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions. Work closely with cross-functional teams and clients to deliver impactful solutions. Prototyping & Impact Measurement: Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment. Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills: Core Machine Learning & Deep Learning: In-depth knowledge of Artificial Neural Networks (ANNs); 1D, 2D, and 3D Convolutional Neural Networks (ConvNets); LSTMs; and Transformer models. Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation. Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches. Advanced understanding and application of model explainability techniques. Data Analysis & Processing: Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis. Ability to perform effective feature engineering by understanding business objectives. ML/DL Frameworks & Tools: Hands-on experience with ML/DL libraries such as Scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models. Natural Language Processing (NLP): Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning. Cloud & MLOps: Experience with the AWS ML stack or equivalent cloud platforms. Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions). Problem Solving & Research: Strong logical and reasoning skills. Good understanding of the Python ecosystem and experience implementing research papers. Collaboration & Prototyping: Ability to thrive in a fast-paced development and rapid prototyping environment. Nice to Have: Expertise in claims data and a background in the pharmaceutical industry. Awareness of best software design practices. Understanding of backend frameworks like Flask. Knowledge of recommender systems, representation learning, and PU learning.
Benefits and Perks: Competitive ESOP grants. Opportunity to work with Fortune 500 companies and world-class teams. Support for publishing papers and attending academic/industry conferences. Access to networking events, conferences, and seminars. Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring. Appendix: Technical Skills (Must-Haves) Deep understanding of the following. Data Processing: Wrangling: Some understanding of querying databases (MySQL, PostgreSQL, etc.); very fluent in the usage of libraries such as Pandas, NumPy, and Statsmodels. Visualization: Exposure to Matplotlib, Plotly, Altair, etc. Machine Learning Exposure: Machine learning fundamentals, e.g., PCA, correlations, statistical tests. Time series models, e.g., ARIMA, Prophet. Tree-based models, e.g., Random Forest, XGBoost. Deep learning models, e.g., understanding and experience of ConvNets, ResNets, UNets. GenAI-Based Models: Experience utilizing large-scale language models such as GPT-4 or open-source alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom fine-tuning. Code Versioning Systems: Git, GitHub. If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
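As a minimal sketch of one of the fundamentals the appendix lists (PCA), explained-variance ratios can be computed from an SVD of centered data. The toy dataset and threshold below are illustrative only, not from the posting:

```python
import numpy as np

# Toy 2-D data whose variance lies almost entirely along one direction.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 3 * t]) + 0.01 * rng.normal(size=(200, 2))

# PCA via SVD: center the data; squared singular values are proportional
# to the variance captured by each principal component.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)  # variance ratio per component
```

Here the first component should capture nearly all of the variance, since the second feature is (up to small noise) three times the first.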

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Chennai District, Tamil Nadu

On-site

Indeed logo

Overview We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will have a strong background in Python development and a passion for creating efficient and scalable software solutions. Job description Knowledge of Python syntax, data types, and control flow. Experience with loops, conditionals, and functions. Familiar with Python libraries like math, datetime, and random. Understanding of lists, tuples, dictionaries, sets, and working with collections. Proficiency in algorithms for sorting, searching, and basic problem-solving. Familiar with time and space complexity (Big-O notation). Experience with web frameworks like Flask or Django. Knowledge of RESTful APIs and integrating with front-end technologies. Working with databases (SQL or NoSQL) using Python. Proficiency in libraries like Pandas, NumPy, and Matplotlib. Knowledge of machine learning frameworks such as Scikit-learn or TensorFlow. Familiarity with data preprocessing, feature engineering, and model evaluation. Writing scripts for automating tasks (e.g., web scraping with BeautifulSoup or Selenium). Experience with regular expressions and file handling. Familiar with Git for version control and collaborating with teams. Experience writing unit tests with unittest or pytest. Familiar with debugging tools and techniques in Python. Job Summary We are seeking a skilled Python Developer to join our dynamic team. The ideal candidate will be responsible for developing and maintaining high-quality software solutions using the Python programming language. Responsibilities Develop Python-based software applications Collaborate with the IT infrastructure team to integrate user-facing elements with server-side logic Write effective, scalable code Implement security and data protection measures Test and debug programs Manage code repositories on GitHub Participate in software design meetings and analyze user needs "We're hiring across Tamil Nadu only." WP: +91 95854 01234 Job Type: Full-time Pay: ₹61.58 - ₹65.44 per hour Schedule: Day shift Work Location: In person
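The listing above pairs algorithm fundamentals (sorting/searching, Big-O) with unit testing via unittest or pytest. A small hypothetical sketch combining the two might look like this:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent (O(log n))."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# pytest-style tests: plain functions with bare asserts, discovered by name.
def test_found():
    assert binary_search([1, 3, 5, 7], 5) == 2

def test_missing():
    assert binary_search([1, 3, 5, 7], 4) == -1
```

Running `pytest` against a file containing this code would collect and execute both test functions.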

Posted 2 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

Pitampura, Delhi, Delhi

On-site

Indeed logo

Job Title: Data Analyst (Python & Web Scraping Expert) Location: Netaji Subhash Place, Pitampura, New Delhi Department: Data Analytics / Share Recovery Job Overview: We are seeking a detail-oriented and results-driven Data Analyst to join our team. The ideal candidate will have expertise in Python programming, web scraping, and data analysis, with a focus on IEPF share recovery. The role involves collecting, processing, and analyzing data from multiple online sources, providing actionable insights to support business decision-making. Key Responsibilities: Data Scraping: Use Python and web scraping techniques to gather data from financial, regulatory, and shareholding-related websites for IEPF (Investor Education and Protection Fund) share recovery. Data Cleaning & Preprocessing: Clean, process, and structure raw data for analysis. Ensure data quality and integrity by identifying and correcting errors in datasets. Data Analysis & Visualization: Analyze large datasets to extract actionable insights regarding share recovery and trends in investor shareholding. Present findings through visualizations (e.g., graphs, dashboards). Reporting: Prepare and present detailed reports on share recovery patterns, trends, and forecasts based on analysis. Present findings to the management team to help drive business decisions. Automation & Optimization: Build and maintain automated web scraping systems to regularly fetch updated shareholding data, optimizing the data pipeline for efficiency. Collaboration: Work closely with business stakeholders to understand data requirements and deliver reports or visualizations tailored to specific needs related to IEPF share recovery. Required Skills & Qualifications: Technical Skills: Strong proficiency in Python for data analysis and automation. Expertise in web scraping using libraries such as BeautifulSoup, Selenium, and Scrapy. Experience with data manipulation and analysis using Pandas, NumPy, and other relevant libraries.
Familiarity with SQL for data extraction and querying relational databases. Knowledge of data visualization tools like Matplotlib, Seaborn, or Tableau for presenting insights in an easy-to-understand format. Experience: Minimum of 2-3 years of experience as a Data Analyst or in a similar role, with a focus on Python programming and web scraping. Experience working with financial or investment data, particularly in areas such as IEPF, share recovery, or investor relations. Strong problem-solving skills with the ability to analyze complex datasets and generate actionable insights. Additional Skills: Strong attention to detail and ability to work with large datasets. Ability to work in a collaborative team environment. Familiarity with cloud platforms (e.g., AWS, Google Cloud) and data storage (e.g., databases, cloud data lakes) is a plus. Education: Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Finance, or a related field. Soft Skills: Strong communication skills, with the ability to explain technical concepts to non-technical stakeholders. Ability to prioritize tasks and manage multiple projects simultaneously. Strong organizational skills and time management. Preferred Skills: Experience working in the financial industry or understanding of regulatory frameworks (e.g., IEPF regulations and procedures). Familiarity with machine learning models and predictive analytics for forecasting share recovery trends. Ability to automate workflows and optimize existing data collection pipelines. Job Requirements: Comfortable working in a fast-paced environment. Ability to think critically and provide insights that drive strategic decisions. Must be self-motivated and capable of working independently with minimal supervision. Willingness to stay updated with the latest data analysis techniques and web scraping technologies.
Job Type: Full-time Pay: ₹20,000.00 - ₹32,000.00 per month Schedule: Day shift Education: Bachelor's (Preferred) Experience: total work: 1 year (Required) Work Location: In person
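The cleaning and preprocessing duties in the listing above can be illustrated with a hedged pandas sketch; the folio/shares columns below are hypothetical and not drawn from any actual IEPF dataset:

```python
import pandas as pd

# Hypothetical scraped shareholding records with typical quality issues:
# a duplicated row and a non-numeric value.
raw = pd.DataFrame({
    "folio": ["A101", "A102", "A102", "A103"],
    "shares": ["100", "250", "250", "n/a"],
})

# Deduplicate, coerce shares to numbers, and drop rows that fail to parse.
clean = (
    raw.drop_duplicates()
       .assign(shares=lambda d: pd.to_numeric(d["shares"], errors="coerce"))
       .dropna(subset=["shares"])
)
total = int(clean["shares"].sum())  # 350 across the two valid holders
```

In a real pipeline the `raw` frame would be populated by a scraper (e.g., BeautifulSoup or Scrapy), with the same cleaning steps applied downstream.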

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

Data Science Intern (Paid) Company: Unified Mentor Location: Remote Duration: 3 months Application Deadline: 3rd June 2025 Opportunity: Full-time role based on performance + Internship Certificate About Unified Mentor Unified Mentor provides aspiring professionals with hands-on experience in data science through industry-relevant projects, helping them build successful careers. Responsibilities Collect, preprocess, and analyze large datasets Develop predictive models and machine learning algorithms Perform exploratory data analysis (EDA) to extract insights Create data visualizations and dashboards for effective communication Collaborate with cross-functional teams to deliver data-driven solutions Requirements Enrolled in or a graduate of Data Science, Computer Science, Statistics, or a related field Proficiency in Python or R for data analysis and modeling Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred) Familiarity with data visualization tools like Tableau, Power BI, or Matplotlib Strong analytical and problem-solving skills Excellent communication and teamwork abilities Stipend & Benefits Stipend: ₹7,500 - ₹15,000 (performance-based, paid) Hands-on experience in data science projects Certificate of Internship & Letter of Recommendation Opportunity to build a strong portfolio of data science models and applications Potential for full-time employment based on performance How to Apply Submit your resume and a cover letter with the subject line "Data Science Intern Application." Equal Opportunity: Unified Mentor welcomes applicants from all backgrounds.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Senior Data Scientist – AI/ML & Big Data Location: Gurgaon (Hybrid) Experience: 7+ Years Job Type: Full-Time About the Role: We are seeking a Senior Data Scientist with 7+ years of experience in AI/ML, Big Data, and cloud-based solutions to design, develop, and deploy scalable machine learning models. The ideal candidate will have strong expertise in Python, TensorFlow/PyTorch, Spark MLlib, and cloud platforms (GCP/AWS/Azure). Key Responsibilities: ✔ Design and deploy AI/ML models for large-scale data processing. ✔ Develop Big Data pipelines using Spark, SQL, and cloud services (GCP preferred). ✔ Perform statistical analysis, ETL, and data modeling for actionable insights. ✔ Collaborate with stakeholders to translate business problems into ML solutions. ✔ Optimize model performance, scalability, and efficiency in production. Must-Have Skills: ✔ 6-8 years in AI/ML, Big Data, and cloud-based deployments. ✔ Strong programming in Python, SQL, TensorFlow/PyTorch, and Spark MLlib. ✔ Hands-on with GCP/AWS/Azure (GCP preferred). ✔ Expertise in ETL, data warehousing, and statistical modeling. ✔ Degree in Computer Science, Data Science, Engineering, or related fields. Preferred Skills: ✔ Experience with MLOps, model deployment, and CI/CD pipelines. ✔ Strong data visualization & storytelling skills (Matplotlib, Tableau, etc.).

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

JOB DESCRIPTION • Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers • Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures • Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch • Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations • Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations • Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies • Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization • Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration • Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration) • Should have experience with Vertex AI Matching Engine, or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval • Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring • Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings • Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features • Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking • Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks • Must have: Experience running A/B tests and interpreting results for model impact • Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving • Should understand feature store concepts, embedding versioning, and serving pipelines • Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real time • Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups • Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation GCP Tools Experience: ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus) Storage: BigQuery, Cloud Storage, Firestore Processing: Dataproc (PySpark), Dataflow (batch & stream) Ingestion: Pub/Sub, Cloud Functions, Cloud Run Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run CI/CD & IaC: GitHub Actions, GitLab CI
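The embedding-based retrieval this description emphasizes reduces, at its core, to nearest-neighbor search over normalized vectors. The sketch below is a brute-force illustration with made-up embeddings; production systems would use learned two-tower outputs and an ANN index such as ScaNN or FAISS, as the posting itself notes:

```python
import numpy as np

def top_k(user_vec, item_matrix, k=2):
    """Indices of the k catalog items most cosine-similar to the user vector."""
    u = user_vec / np.linalg.norm(user_vec)
    items = item_matrix / np.linalg.norm(item_matrix, axis=1, keepdims=True)
    scores = items @ u          # cosine similarity after normalization
    return np.argsort(-scores)[:k]

# Toy 2-D embeddings standing in for learned user/item representations.
user = np.array([1.0, 0.0])
catalog = np.array([[0.9, 0.1], [0.0, 1.0], [0.7, 0.7]])
retrieved = top_k(user, catalog)  # item 0 first, then item 2
```

A re-ranking stage (e.g., a gradient-boosted model over richer features) would then reorder the retrieved candidates before serving.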

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

JOB DESCRIPTION • Strong in Python and experience with Jupyter notebooks and Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc. • Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment • Must have: Hands-on experience with GCP services for ML & data science • Must have: Experience with vector search and hybrid search techniques • Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models • Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy) • Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation) • Must have: Understanding of semantic vs. lexical search paradigms • Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support) • Should be proficient in SQL and BigQuery for analytics and feature generation • Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark • Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions • Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches • Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval • Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories • Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems • Should understand how to build end-to-end ML pipelines for search and ranking applications • Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR) • Should have exposure to CI/CD pipelines and model versioning practices GCP Tools Experience: ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform Storage: BigQuery, Cloud Storage, Firestore Ingestion: Pub/Sub, Cloud Functions, Cloud Run Search: Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark) CI/CD & IaC: GitLab/GitHub Actions
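Among the relevance metrics the description lists (precision@k, recall, nDCG, MRR), nDCG is the least self-explanatory; a compact stdlib-only sketch of nDCG@k follows (the relevance grades below are illustrative):

```python
import math

def dcg(rels):
    """Discounted cumulative gain of a relevance list (log2 position discount)."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(ranked_rels, k):
    """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking."""
    ideal = dcg(sorted(ranked_rels, reverse=True)[:k])
    return dcg(ranked_rels[:k]) / ideal if ideal else 0.0

# A perfect ranking scores 1.0; putting relevant items lower scores less.
perfect = ndcg_at_k([3, 2, 1], k=3)    # 1.0
reversed_ = ndcg_at_k([1, 2, 3], k=3)  # < 1.0
```

The same function works for blended BM25-plus-vector rankings, since it only consumes the relevance grades of the returned list.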

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

About the Role We’re looking for an experienced AI Developer with hands-on expertise in Large Language Models (LLMs), Azure AI services, and end-to-end ML pipeline deployment. If you’re passionate about building scalable AI solutions, integrating document intelligence, and deploying models in production using Azure, this role is for you. 💡 Key Responsibilities Design and develop AI applications leveraging LLMs (e.g., GPT, BERT) for tasks like summarization, classification, and document understanding Implement solutions using Azure Document Intelligence to extract structured data from forms, invoices, and contracts Train, evaluate, and tune ML models using Scikit-learn, XGBoost, or PyTorch Build ML pipelines and workflows using Azure ML and MLflow, and integrate with CI/CD tools Deploy models to production using Azure ML endpoints, containers, or Azure Functions for real-time AI workflows Write clean, efficient, and scalable code in Python and manage code versioning using Git Work with structured and unstructured data from SQL/NoSQL databases and Data Lakes Ensure performance monitoring and logging for deployed models ✅ Skills & Experience Required Proven experience with LLMs and prompt engineering (e.g., GPT, BERT) Hands-on with Azure Document Intelligence for OCR and data extraction Solid background in ML model development, evaluation, and hyperparameter tuning Proficient in Azure ML Studio, model registry, and automated ML workflows Familiar with MLOps tools such as Azure ML pipelines, MLflow, and CI/CD practices Experience with Azure Functions for building serverless, event-driven AI apps Strong coding skills in Python; familiarity with libraries like NumPy, Pandas, Scikit-learn, Matplotlib Working knowledge of SQL/NoSQL databases and Data Lakes Proficiency with Azure DevOps, Git version control, and testing frameworks

Posted 2 weeks ago

Apply

0 years

0 Lacs

Calicut

On-site

GlassDoor logo

Company: Haris & Co Academy Location: Calicut, Kerala Job Summary: Haris & Co Academy is seeking a dynamic and skilled Python Django Full-Stack Mentor to lead learners through an AI-integrated full-stack development journey. This role involves delivering project-based training in Python, Django, React.js, MySQL, and other key technologies, with a special focus on AI-powered web development. As a mentor, you will guide students from fundamental programming concepts to deployment-ready full-stack applications, preparing them for professional roles in tech. Key Responsibilities: Mentorship & Guidance: Provide 1:1 and group mentorship to learners across all levels. Support students in debugging code, understanding concepts, and building real-world projects. Monitor student progress and offer tailored learning strategies. Curriculum Delivery: Deliver structured, project-based sessions across five modules: Python, Frontend (React.js), MySQL, Django, and API/Cloud deployment. Ensure understanding of both theoretical and practical aspects, including: Python basics to OOP and AI modules React fundamentals and advanced state management Django ORM, views, templates, authentication REST APIs with Django REST Framework Cloud deployment with PythonAnywhere Real-World Projects: Guide students through 6+ end-to-end projects, including an AI-Integrated Full-Stack Web Application. Review student code and provide actionable feedback on performance, structure, and best practices. AI & Industry Integration: Integrate Python libraries like NumPy, Pandas, and Matplotlib in web development contexts. Share real-world insights and use cases related to AI-powered web applications. Career Support: Assist in GitHub portfolio building, project documentation, and deploying apps. Guide students in resume writing, LinkedIn optimization, and interview preparation for developer roles. 
Operational Support: Support daily academic operations, contribute to curriculum updates, and assist in organizing workshops, hackathons, or bootcamps. Qualifications: Technical Skills: Strong expertise in Python, Django, Django REST Framework, React.js, MySQL, HTML/CSS/JS. Proficient in Python libraries for data analysis and visualization (NumPy, Pandas, Matplotlib). Understanding of RESTful API design, version control (Git/GitHub), and basic cloud deployment (e.g., PythonAnywhere, Heroku). Bonus: Knowledge of asynchronous programming, image processing, and AI model integration. Soft Skills: Excellent communication and presentation skills. Passion for teaching and mentoring learners. Ability to simplify complex concepts and encourage project-based learning. Preferred: Prior experience as a trainer, mentor, or instructor in an academic/bootcamp setting. Familiarity with agile workflows and edtech platforms. Experience deploying web applications and integrating third-party APIs. Job Types: Full-time, Permanent Schedule: Day shift Work Location: In person

Posted 2 weeks ago

Apply

0 years

0 Lacs

Calicut

On-site

GlassDoor logo

Company: Haris & Co Academy Location: Calicut, Kerala Job Summary: Haris & Co Academy is seeking a passionate and knowledgeable Data Analyst Mentor to train and support aspiring data professionals. In this role, you will lead sessions on data tools and techniques including Excel, SQL, Python, Python libraries, Power BI, and Tableau, while guiding students through real-world projects and preparing them for careers in the data analytics field. You’ll also assist in maintaining academic quality, fostering practical skills, and supporting operational needs. Key Responsibilities: Mentorship: Provide one-on-one and group mentorship to students, helping them understand core data concepts. Curriculum Delivery: Deliver sessions on Excel, SQL, Python for Data Analysis, Tableau, and Power BI. Facilitate hands-on exercises and real-time case studies to reinforce learning. Industry Insights: Share practical use cases, industry trends, and tools used in modern data analytics roles. Skills Development: Train students in data collection, cleaning, analysis, visualization, and dashboard creation. Strengthen skills in storytelling with data and data-driven decision-making. Project Review: Evaluate student projects and dashboards for clarity, accuracy, and presentation standards. Career Support: Guide students in portfolio creation, LinkedIn optimization, and interview preparation. Provide insights into hiring trends and job roles in data analytics. Operational Support: Support academic and office-related tasks as needed to ensure smooth operations. Qualifications: Technical Skills: Proficient in Microsoft Excel, SQL, Python (Pandas, NumPy, Matplotlib, Seaborn), Power BI, and Tableau. Knowledge of machine learning libraries is preferred. Solid understanding of data analytics workflows, statistics, and data storytelling. Communication & Mentorship: Strong communication and interpersonal skills. Passion for mentoring and teaching aspiring data professionals.
Preferred: Familiarity with Jupyter Notebooks, Git/GitHub. Exposure to basic machine learning concepts is a plus. Prior teaching, training, or mentoring experience in an academic or bootcamp environment. Job Types: Full-time, Permanent Schedule: Day shift Work Location: In person

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Responsibilities Implement and maintain customer analytics models including CLTV prediction, propensity modelling, and churn prediction Support the development of customer segmentation models using clustering techniques and behavioural analysis Assist in building and maintaining survival models to analyze customer lifecycle events Work with large-scale datasets using BigQuery and Snowflake Develop and validate machine learning models using Python and cloud-based ML platforms, specifically BigQuery ML, Model Garden, and Amazon Bedrock Help transform model insights into actionable business recommendations Collaborate with analytics and activation teams to implement model outputs Present analyses to stakeholders in clear, actionable formats Qualifications Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field 2-4 years' experience in applied data science, preferably in marketing/retail Experience in developing and implementing machine learning models Strong understanding of statistical concepts and experimental design Ability to communicate technical concepts to non-technical audiences Familiarity with agile development methodologies Technical Skills Advanced proficiency in: SQL and data warehouses (BigQuery, Snowflake) Python for statistical modeling Machine learning frameworks (scikit-learn, TensorFlow) Statistical analysis and hypothesis testing Data visualization tools (Matplotlib, Seaborn) Version control systems (Git) Understanding of Google Cloud Functions and Cloud Run Experience With Customer lifetime value modeling RFM analysis and customer segmentation Survival analysis and hazard modeling A/B testing and causal inference Feature engineering and selection Model validation and monitoring Cloud computing platforms (GCP/AWS/Azure) Key Projects & Deliverables Support development and maintenance of CLTV models Contribute to customer segmentation models incorporating behavioral and transactional data Implement survival models to predict customer churn Support the development of attribution models for marketing effectiveness Help develop recommendation engines for personalized customer experiences Assist in creating automated reporting and monitoring systems Soft Skills Strong analytical and problem-solving abilities Good communication and presentation skills Business acumen Collaborative team player Strong organizational skills Ability to translate business problems into analytical solutions Growth Opportunities Work on innovative data science projects for major brands Develop expertise in cutting-edge ML technologies Learn from experienced data science leaders Contribute to impactful analytical solutions Opportunity for career advancement We offer competitive compensation, comprehensive benefits, and the opportunity to work with leading brands while solving complex analytical challenges. Join our team to grow your career while making a significant impact through data-driven decision making.
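RFM analysis, listed above under required experience, boils down to three per-customer aggregates (recency, frequency, monetary value). A hedged pandas sketch on invented transactions:

```python
import pandas as pd

# Hypothetical transaction log; customers, recencies, and amounts are invented.
tx = pd.DataFrame({
    "customer": ["c1", "c1", "c2", "c3", "c3", "c3"],
    "days_ago": [5, 40, 90, 2, 10, 15],
    "amount":   [50, 20, 200, 10, 15, 30],
})

# Recency = days since last purchase, Frequency = purchase count,
# Monetary = total spend; these typically feed segmentation or CLTV features.
rfm = tx.groupby("customer").agg(
    recency=("days_ago", "min"),
    frequency=("amount", "size"),
    monetary=("amount", "sum"),
)
```

Each RFM dimension would then usually be binned into quantile scores (e.g., 1-5) before clustering customers into segments.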

Posted 2 weeks ago


0 years

0 Lacs

India

Remote


Job Title: Data Science Intern (Paid)
Company: WebBoost Solutions by UM
Location: Remote
Duration: 3 months
Opportunity: Full-time based on performance, with a Certificate of Internship

About WebBoost Solutions by UM
WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career.

Responsibilities
✅ Collect, preprocess, and analyze large datasets.
✅ Develop predictive models and machine learning algorithms.
✅ Perform exploratory data analysis (EDA) to extract meaningful insights.
✅ Create data visualizations and dashboards for effective communication of findings.
✅ Collaborate with cross-functional teams to deliver data-driven solutions.

Requirements
🎓 Enrolled in or a graduate of a program in Data Science, Computer Science, Statistics, or a related field.
🐍 Proficiency in Python or R for data analysis and modeling.
🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred).
📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib).
🧐 Strong analytical and problem-solving skills.
🗣 Excellent communication and teamwork abilities.

Stipend & Benefits
💰 Stipend: ₹7,500 - ₹15,000 (performance-based).
✔ Hands-on experience in data science projects.
✔ Certificate of Internship & Letter of Recommendation.
✔ Opportunity to build a strong portfolio of data science models and applications.
✔ Potential for full-time employment based on performance.

How to Apply
📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application."
📅 Deadline: 2nd June 2025

Equal Opportunity
WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.


Exploring matplotlib Jobs in India

Matplotlib is a popular data visualization library in Python that is widely used in various industries. Job opportunities for matplotlib professionals in India are on the rise due to the increasing demand for data visualization skills. In this article, we will explore the job market for matplotlib in India and provide insights for job seekers looking to build a career in this field.

Top Hiring Locations in India

Here are five major cities in India actively hiring for matplotlib roles:
1. Bangalore
2. Delhi
3. Mumbai
4. Hyderabad
5. Pune

Average Salary Range

The average salary range for matplotlib professionals in India varies with experience. Entry-level professionals can expect to earn around INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10 lakhs per annum.

Career Path

Career progression in the field of matplotlib typically follows a path from Junior Developer to Senior Developer to Tech Lead. As professionals gain more experience and expertise in data visualization using matplotlib, they can take on more challenging roles and responsibilities.

Related Skills

In addition to proficiency in matplotlib, professionals in this field are often expected to have knowledge and experience in the following areas:
- Python programming
- Data analysis
- Data manipulation
- Statistics
- Machine learning
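A typical workflow ties several of these skills together: pandas for data manipulation and a quick summary statistic, then matplotlib for the chart. Here is a minimal sketch using made-up sales figures (the column names and numbers are purely illustrative):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headlessly
import matplotlib.pyplot as plt

# Hypothetical monthly sales data (illustrative only)
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "sales": [120, 150, 90, 180],
})

# Basic analysis step: compute the mean with pandas
mean_sales = df["sales"].mean()

# Bar plot of the data, with a dashed reference line at the mean
fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(df["month"], df["sales"])
ax.axhline(mean_sales, color="red", linestyle="--",
           label=f"mean = {mean_sales:.0f}")
ax.set_xlabel("Month")
ax.set_ylabel("Sales")
ax.legend()
fig.savefig("sales.png")
```

Combining a pandas aggregation with a matplotlib reference line like this is a common pattern in exploratory data analysis.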

Interview Questions

Here are 25 interview questions for matplotlib roles:
- What is matplotlib and how is it used in data visualization? (basic)
- What are the different types of plots that can be created using matplotlib? (basic)
- How would you customize the appearance of a plot in matplotlib? (medium)
- Explain the difference between plt.show() and plt.savefig() in matplotlib. (medium)
- How do you handle missing data in a dataset before visualizing it using matplotlib? (medium)
- What is the purpose of the matplotlib.pyplot.subplots() function? (advanced)
- How would you create a subplot with multiple plots in matplotlib? (medium)
- Explain the use of the matplotlib.pyplot.bar() and matplotlib.pyplot.hist() functions. (medium)
- How can you annotate a plot in matplotlib? (basic)
- Describe the process of creating a 3D plot in matplotlib. (advanced)
- How do you set the figure size in matplotlib? (basic)
- What is the purpose of the matplotlib.pyplot.scatter() function? (medium)
- How would you create a line plot with multiple lines using matplotlib? (medium)
- Explain the difference between plt.plot() and plt.scatter() in matplotlib. (medium)
- How do you add a legend to a plot in matplotlib? (basic)
- Describe the use of color maps in matplotlib. (medium)
- How can you save a plot as an image file in matplotlib? (basic)
- What is the purpose of the matplotlib.pyplot.subplots_adjust() function? (medium)
- How do you create a box plot in matplotlib? (medium)
- Explain the use of the matplotlib.pyplot.pie() function for creating pie charts. (medium)
- How would you create a heatmap in matplotlib? (advanced)
- What are the different types of coordinate systems in matplotlib? (advanced)
- How do you handle axis limits and ticks in matplotlib plots? (medium)
- Explain the role of the matplotlib.pyplot.imshow() function. (medium)
- How would you create a bar plot with error bars in matplotlib? (advanced)
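Several of these questions (figure size, subplots, multi-line plots, legends, annotation, subplot spacing, and saving to a file) can be practiced with one short, self-contained script. A minimal sketch, using standard matplotlib calls:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: save figures instead of showing them
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

# Set the figure size and create a 1x2 grid of subplots
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Line plot with multiple lines and a legend
ax1.plot(x, np.sin(x), label="sin(x)")
ax1.plot(x, np.cos(x), label="cos(x)")
ax1.legend()
ax1.set_title("Line plot")

# Scatter plot with an annotation and an arrow
ax2.scatter(x[::10], np.sin(x[::10]))
ax2.annotate("peak", xy=(np.pi / 2, 1.0), xytext=(2.5, 0.8),
             arrowprops=dict(arrowstyle="->"))
ax2.set_title("Scatter plot")

fig.subplots_adjust(wspace=0.3)   # adjust horizontal spacing between subplots
fig.savefig("demo.png", dpi=100)  # plt.savefig()/fig.savefig() write to disk; plt.show() opens a window
```

Being able to explain each line of a script like this covers a good share of the basic and medium questions above.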

Closing Remark

As the demand for data visualization skills continues to grow, mastering matplotlib can open up exciting job opportunities in India. By preparing thoroughly and showcasing your expertise in matplotlib, you can confidently apply for roles and advance your career in this dynamic field. Good luck with your job search!


Featured Companies