
1441 Matplotlib Jobs - Page 26

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary
Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights to support our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence.

Software Requirements
Required:
- PowerBI (or equivalent visualization tools like Streamlit, Dash)
- SQL (for data extraction, manipulation, and querying)
- Python (for scripting, automation, and advanced analysis)
- Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar)
Preferred:
- Cloud platform familiarity, especially AWS services related to data storage and processing
- Knowledge of other visualization platforms (Tableau, Looker)
- Familiarity with source control systems (e.g., Git)

Overall Responsibilities
- Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights.
- Perform complex data analysis, transformations, and validation using SQL and Python.
- Automate data workflows, reporting, and visualizations to streamline processes.
- Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions.
- Support data extraction, cleaning, and validation from various sources, ensuring data accuracy.
- Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability.
- Document technical procedures and contribute to best practices for data management and reporting.

Performance Outcomes:
- Timely, accurate, and insightful dashboards and reports.
- Increased automation reducing manual effort.
- Clear communication of insights and data-driven recommendations to stakeholders.

Technical Skills (By Category)
Programming Languages:
- Essential: SQL, Python
- Preferred: R, additional scripting languages
Databases/Data Management:
- Essential: Relational databases (SQL Server, MySQL, Oracle)
- Preferred: NoSQL databases like MongoDB, cloud data warehouses (AWS Redshift, Snowflake)
Cloud Technologies:
- Essential: Basic understanding of AWS cloud services (S3, EC2, RDS)
- Preferred: Experience with cloud-native data solutions and deployment
Frameworks and Libraries:
- Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash
- Visualization: PowerBI, Tableau (preferred)
Development Tools and Methodologies:
- Version control: Git
- Automation tools for workflows and reporting
- Familiarity with Agile methodologies
Security Protocols:
- Awareness of data security best practices and compliance standards in cloud environments

Experience Requirements
- 3-5 years of experience in data analysis, visualization, or related data roles.
- Proven ability to deliver insightful dashboards, reports, and analysis.
- Experience working across teams and communicating complex insights clearly.
- Knowledge of cloud environments like AWS or other cloud providers is desirable.
- Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer.

Day-to-Day Activities
- Collaborate with stakeholders to gather requirements and define data visualization strategies.
- Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools.
- Extract, transform, and analyze data using SQL and Python scripts.
- Automate recurring workflows and report generation to improve operational efficiencies.
- Troubleshoot data issues and derive insights to support decision-making.
- Monitor and optimize cloud data storage and processing pipelines.
- Present findings to business units, translating technical outputs into actionable recommendations.

Qualifications
- Bachelor’s degree in Computer Science, Data Science, Statistics, or related field. Master’s degree is a plus.
- Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous.
- Demonstrated experience with data visualization and scripting tools.
- Continuous learning mindset to stay updated on new data analysis trends and cloud innovations.

Professional Competencies
- Strong analytical and problem-solving skills.
- Effective communication, with the ability to explain complex insights clearly.
- Collaborative team player with stakeholder management skills.
- Adaptability to rapidly changing data or project environments.
- Innovative mindset to suggest and implement data-driven solutions.
- Organized, self-motivated, and capable of managing multiple priorities efficiently.

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice
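The dashboarding and SQL/Python work described in this listing can be pictured with a minimal sketch like the one below. It is purely illustrative and not part of the posting: the SQLite file, table, and column names are hypothetical stand-ins for whatever warehouse (for example, Redshift) the role actually uses.

```python
# Illustrative sketch only: a minimal Streamlit dashboard over a SQL extract.
# The database path, table name, and column names below are hypothetical.
import sqlite3

import pandas as pd
import plotly.express as px
import streamlit as st


@st.cache_data
def load_orders() -> pd.DataFrame:
    # In practice this might query a warehouse; SQLite keeps the sketch self-contained.
    with sqlite3.connect("analytics.db") as conn:
        return pd.read_sql_query(
            "SELECT order_date, region, revenue FROM orders",
            conn,
            parse_dates=["order_date"],
        )


df = load_orders()
region = st.selectbox("Region", sorted(df["region"].unique()))

# Aggregate the chosen region to monthly revenue and plot the trend.
monthly = (
    df[df["region"] == region]
    .set_index("order_date")["revenue"]
    .resample("MS")
    .sum()
    .reset_index()
)
st.plotly_chart(px.line(monthly, x="order_date", y="revenue", title=f"Monthly revenue: {region}"))
```

Run with `streamlit run app.py`; swapping the SQLite connection for a warehouse driver leaves the rest of the view unchanged.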

Posted 1 month ago

Apply

2.0 years

3 - 9 Lacs

Chennai

On-site

We are looking for qualified people who can develop scalable solutions to complex real-world problems using AI/ML, Big Data, Statistics, Econometrics, and Optimization. Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Minimum Qualifications
- Bachelor’s degree in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
- 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
- 2+ years of experience with both supervised and unsupervised machine learning techniques.
- 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or PowerBI.
- 1+ years of experience with SQL and relational databases.

Preferred Qualifications
- An MS/PhD in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline. PhD strongly preferred.
- Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment.
- Experience with Git and GitHub for version control and collaboration.
- Besides Python, familiarity with one additional programming language (e.g., C/C++/Java).
- Strong background and understanding of mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and neural network under-the-hood details.
- Experience working with large language models such as GPT-4, Gemini, PaLM, Llama-2, etc.
- Excellent problem-solving, communication, and data presentation skills.

Responsibilities
- Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making.
- Design and implement data analysis and AI/ML models, hypotheses, algorithms, and experiments to support data-driven decision-making.
- Apply various analytics techniques (data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc.) to analyze data and uncover meaningful patterns, relationships, and trends.
- Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation.
- Research, study, and stay updated in the domain of data science, machine learning, and analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
We are looking for qualified people who can develop scalable solutions to complex real-world problems using AI/ML, Big Data, Statistics, Econometrics, and Optimization. Potential candidates should have excellent depth and breadth of knowledge in machine learning, data mining, and statistical modeling. They should possess the ability to translate a business problem into an analytical problem, identify the relevant data sets needed for addressing the analytical problem, recommend, implement, and validate the best-suited analytical algorithm(s), and generate/deliver insights to stakeholders. Candidates are expected to regularly refer to research papers and be at the cutting edge with respect to algorithms, tools, and techniques. The role is that of an individual contributor; however, the candidate is expected to work in project teams of 2 to 3 people and interact with business partners on a regular basis.

Responsibilities
- Understand business requirements and analyze datasets to determine suitable approaches to meet analytic business needs and support data-driven decision-making.
- Design and implement data analysis and AI/ML models, hypotheses, algorithms, and experiments to support data-driven decision-making.
- Apply various analytics techniques (data mining, predictive modeling, prescriptive modeling, math, statistics, advanced analytics, machine learning models and algorithms, etc.) to analyze data and uncover meaningful patterns, relationships, and trends.
- Design efficient data loading, data augmentation, and data analysis techniques to enhance the accuracy and robustness of data science and machine learning models, including scalable models suitable for automation.
- Research, study, and stay updated in the domain of data science, machine learning, and analytics tools and techniques, and continuously identify avenues for enhancing analysis efficiency, accuracy, and robustness.

Qualifications

Minimum Qualifications
- Bachelor’s degree in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline.
- 2+ years of hands-on experience in Python programming for data analysis and machine learning, with libraries such as NumPy, Pandas, Matplotlib, Scikit-learn, TensorFlow, PyTorch, NLTK, spaCy, and Gensim.
- 2+ years of experience with both supervised and unsupervised machine learning techniques.
- 2+ years of experience with data analysis and visualization using Python packages such as Pandas, NumPy, Matplotlib, Seaborn, or data visualization tools like Dash or PowerBI.
- 1+ years of experience with SQL and relational databases.

Preferred Qualifications
- An MS/PhD in Analytics, Computer Science, Operational Research, Statistics, Applied Mathematics, or any other engineering discipline. PhD strongly preferred.
- Experience working with Google Cloud Platform (GCP) services, leveraging its capabilities for ML model development and deployment.
- Experience with Git and GitHub for version control and collaboration.
- Besides Python, familiarity with one additional programming language (e.g., C/C++/Java).
- Strong background and understanding of mathematical concepts relating to probabilistic models, conditional probability, numerical methods, linear algebra, and neural network under-the-hood details.
- Experience working with large language models such as GPT-4, Gemini, PaLM, Llama-2, etc.
- Excellent problem-solving, communication, and data presentation skills.
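For readers less familiar with the supervised/unsupervised split these qualifications call for, here is a minimal, illustrative scikit-learn sketch. It is not part of the posting and uses a bundled dataset so it runs as-is.

```python
# Illustrative sketch: one supervised and one unsupervised model from scikit-learn,
# using the bundled iris dataset so the example stays self-contained.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, silhouette_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Supervised: classification with labels, scored on a held-out split.
clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised: clustering the same features without using the labels at all.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("silhouette score:", silhouette_score(X, km.labels_))
```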

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

What We Offer
- 6-month internship followed by a job opportunity.
- Hands-on training and mentorship from industry experts.
- Opportunity to work on live projects and client interactions.
- A vibrant and learning-driven work culture.
- 5 days a week and flexible work timings.

Job Summary
We are seeking motivated and enthusiastic AI & ML Interns or Freshers to support our AI and machine learning initiatives. As an intern or fresher, you will have the opportunity to work alongside our experienced data scientists and engineers, gaining hands-on experience in developing, implementing, and optimizing AI and ML models.

Key Responsibilities
- Assist in the development and testing of machine learning models and algorithms.
- Conduct data preprocessing and data cleaning tasks to prepare datasets for analysis.
- Implement, evaluate, and optimize AI models using state-of-the-art techniques.
- Collaborate with team members to integrate machine learning models into applications and systems.
- Participate in the research and development of new AI and ML technologies.
- Perform data analysis and visualization to support decision-making processes.
- Document processes, methodologies, and findings in a clear and concise manner.
- Stay updated with the latest trends and advancements in AI and machine learning.

Requirements
- Recent graduates with a BE/B.Tech background from the 2024/25 academic year.
- Basic understanding of machine learning concepts and algorithms.
- Understanding of programming languages such as Python, R, or similar.
- Familiarity with machine learning frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, etc.
- Familiarity with data analysis and visualization tools like Pandas, NumPy, Matplotlib, or similar.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
- Self-motivated and eager to learn new technologies and techniques.

Posted 1 month ago

Apply

1.0 years

1 Lacs

India

On-site

Our Culture & Values:
We’d describe our culture as human, friendly, engaging, supportive, agile, and super collaborative. At Kainskep Solutions, our five values underpin everything we do, from how we work to how we delight and deliver to our customers. Our values are #TeamMember, #Ownership, #Innovation, #Challenge, and #Collaboration.

What makes a great team? A diverse team! Don’t be put off if you don’t tick all the boxes; we know from research that candidates may not apply if they don’t feel they are 100% there yet. The essential experience we need is the ability to engage clients and build strong, effective relationships; if you don’t tick the rest, we would still love to talk. We’re committed to creating a diverse and inclusive environment.

What you’ll bring:
- Use programming languages like Python, R, and SQL for data manipulation, statistical analysis, and machine learning tasks.
- Apply fundamental statistical concepts such as mean, median, variance, probability distributions, and hypothesis testing to analyze data.
- Develop supervised and unsupervised machine learning models, including classification, regression, clustering, and dimensionality reduction techniques.
- Evaluate model performance using metrics such as accuracy, precision, recall, and F1-score, implementing cross-validation techniques to ensure reliability.
- Conduct data manipulation and visualization using libraries such as Pandas, Matplotlib, Seaborn, and ggplot2, implementing data cleaning techniques to handle missing values and outliers.
- Perform exploratory data analysis, feature engineering, and data mining tasks including text mining, natural language processing (NLP), and web scraping.
- Familiarize yourself with big data technologies such as Apache Spark and Hadoop, understanding distributed computing concepts to handle large-scale datasets effectively.
- Manage relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) for data storage and retrieval.
- Use version control systems like Git and GitHub/GitLab for collaborative development, understanding branching, merging, and versioning workflows.
- Demonstrate basic knowledge of the software development lifecycle, Agile methodologies, algorithms, and data structures.

Requirements:
- Bachelor’s degree or higher in Computer Science, Statistics, Mathematics, or a related field.
- Proficiency in programming languages such as Python, R, and SQL.
- Strong analytical skills and a passion for working with data.
- Ability to learn quickly and adapt to new technologies and methodologies.
- Prior experience with data analysis, machine learning, or related fields is a plus.

Good To Have:
- Experience in Computer Vision, including image processing and video processing.
- Familiarity with Generative AI techniques, such as Generative Adversarial Networks (GANs), and their applications in image, text, and other data generation tasks.
- Knowledge of Large Language Models (LLMs) is a plus.
- Experience with Microsoft AI technologies, including Azure AI Studio and Azure Copilot Studio.

Job Type: Fresher
Pay: ₹10,000.00 per month
Benefits: Flexible schedule
Schedule: Monday to Friday
Ability to commute/relocate: Vaishali Nagar, Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data science: 1 year (Required)
Work Location: In person
Expected Start Date: 14/07/2025
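As an illustration of the model-evaluation point above (accuracy, precision, recall, and F1 under cross-validation), here is a minimal scikit-learn sketch. It is not employer code; the dataset and model are placeholders chosen only to keep the example self-contained.

```python
# Illustrative sketch: evaluating a classifier with accuracy, precision, recall and F1
# under 5-fold cross-validation. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "precision", "recall", "f1"])
for metric in ["accuracy", "precision", "recall", "f1"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: mean {vals.mean():.3f}, std {vals.std():.3f}")
```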

Posted 1 month ago

Apply

3.0 years

3 - 5 Lacs

Chhattisgarh, India

On-site

Python Instructor

About The Opportunity
We are a fast-growing IT training and upskilling provider operating in the Professional Education & Technology Services sector. Specializing in enterprise software development curricula, we equip fresh graduates and working engineers with production-ready skills demanded by global clients. Our classrooms, labs, and project-based programs are designed to bridge the gap between academic knowledge and real-world software engineering standards.

Role & Responsibilities
- Deliver immersive, hands-on Python sessions to entry-level and mid-career engineers in an on-site classroom setting.
- Design, update, and version-control courseware covering core Python, OOP, data structures, web frameworks, and testing.
- Lead live coding demos, code reviews, and hackathon-style projects that mimic industry SDLC workflows.
- Evaluate learner progress through quizzes, pair-programming assessments, and capstone project grading.
- Collaborate with placement and industry liaison teams to align content with current hiring requirements.
- Mentor trainees on best practices in git, Agile, and problem-solving to raise job-readiness metrics.

Skills & Qualifications
Must-Have
- 3+ years professional Python development in production environments.
- Solid understanding of OOP, data structures, algorithms, and design patterns.
- Experience with at least one Python web framework (Django or Flask).
- Prior classroom, bootcamp, or corporate training facilitation experience.
- Excellent verbal communication and live coding fluency.
- Proficiency with git, unit testing, and virtual environments.
Preferred
- Exposure to data science libraries (Pandas, NumPy, Matplotlib).
- Knowledge of cloud deployment on AWS or Azure.
- Hands-on with CI/CD tools and Docker.
- Certification in training or instructional design.
- Experience tailoring content for BFSI or Telecom domains.
- Contributions to open-source Python projects.

Benefits & Culture Highlights
- Cutting-edge lab infrastructure and dedicated TA support.
- Clearly defined trainer career ladder with sponsored certifications.
- Collaborative, learner-centric ethos that rewards innovation in pedagogy.

Location: On-site, India. Full-time.

Skills: data structures, azure, numpy, oop, ci/cd, public speaking, curriculum design, django, unit testing, git, aws, docker, python, matplotlib, assessment design, flask, virtual environments, pandas, design patterns, algorithms

Posted 1 month ago

Apply

3.0 years

3 - 5 Lacs

Raipur, Chhattisgarh, India

On-site

Python Instructor

About The Opportunity
We are a fast-growing IT training and upskilling provider operating in the Professional Education & Technology Services sector. Specializing in enterprise software development curricula, we equip fresh graduates and working engineers with production-ready skills demanded by global clients. Our classrooms, labs, and project-based programs are designed to bridge the gap between academic knowledge and real-world software engineering standards.

Role & Responsibilities
- Deliver immersive, hands-on Python sessions to entry-level and mid-career engineers in an on-site classroom setting.
- Design, update, and version-control courseware covering core Python, OOP, data structures, web frameworks, and testing.
- Lead live coding demos, code reviews, and hackathon-style projects that mimic industry SDLC workflows.
- Evaluate learner progress through quizzes, pair-programming assessments, and capstone project grading.
- Collaborate with placement and industry liaison teams to align content with current hiring requirements.
- Mentor trainees on best practices in git, Agile, and problem-solving to raise job-readiness metrics.

Skills & Qualifications
Must-Have
- 3+ years professional Python development in production environments.
- Solid understanding of OOP, data structures, algorithms, and design patterns.
- Experience with at least one Python web framework (Django or Flask).
- Prior classroom, bootcamp, or corporate training facilitation experience.
- Excellent verbal communication and live coding fluency.
- Proficiency with git, unit testing, and virtual environments.
Preferred
- Exposure to data science libraries (Pandas, NumPy, Matplotlib).
- Knowledge of cloud deployment on AWS or Azure.
- Hands-on with CI/CD tools and Docker.
- Certification in training or instructional design.
- Experience tailoring content for BFSI or Telecom domains.
- Contributions to open-source Python projects.

Benefits & Culture Highlights
- Cutting-edge lab infrastructure and dedicated TA support.
- Clearly defined trainer career ladder with sponsored certifications.
- Collaborative, learner-centric ethos that rewards innovation in pedagogy.

Location: On-site, India. Full-time.

Skills: data structures, azure, numpy, oop, ci/cd, public speaking, curriculum design, django, unit testing, git, aws, docker, python, matplotlib, assessment design, flask, virtual environments, pandas, design patterns, algorithms

Posted 1 month ago

Apply

0 years

12 - 18 Lacs

Pune, Maharashtra, India

On-site

Role Definition: Data Scientists focus on researching and developing AI algorithms and models. They analyse data, build predictive models, and apply machine learning techniques to solve complex problems.

Skills

Proficient:
- Languages/Frameworks: FastAPI, Azure UI Search API (React)
- Databases and ETL: Cosmos DB (API for MongoDB), Data Factory, Databricks
- Proficiency in Python and R
- Cloud: Azure Cloud basics (Azure DevOps)
- GitLab: GitLab Pipeline
- Ansible and REX: REX deployment
- Data Science: prompt engineering and modern testing; data mining and cleaning; ML (supervised/unsupervised learning); NLP techniques; knowledge of deep learning techniques including RNNs and transformers; end-to-end AI solution delivery; AI integration and deployment; AI frameworks (PyTorch); MLOps frameworks; model deployment processes; data pipeline monitoring

Expert (in addition to proficient skills):
- Languages/Frameworks: Azure OpenAI
- Data Science: OpenAI GPT family of models (4o/4/3), embeddings + vector search
- Databases and ETL: Azure Storage Account
- Expertise in machine learning algorithms (supervised, unsupervised, reinforcement learning)
- Proficiency in deep learning frameworks (TensorFlow, PyTorch)
- Strong mathematical foundation (linear algebra, calculus, probability, statistics)
- Research methodology and experimental design
- Proficiency in data analysis tools (Pandas, NumPy, SQL)
- Strong statistical and probabilistic modelling skills
- Data visualization skills (Matplotlib, Seaborn, Tableau)
- Knowledge of big data technologies (Spark, Hive)
- Experience with AI-driven analytics and decision-making systems

Skills: statistical modelling, data scientist, FastAPI, NLP techniques, Azure Cloud basics, Cosmos DB (API for MongoDB), Azure, Ansible, SQL, unsupervised learning, GitLab Pipeline, Azure UI Search API (React), reinforcement learning, modern testing, Python, REX deployment, end-to-end AI solution delivery, AI-driven analytics, Azure Storage Account, Azure DevOps, OpenAI GPT family of models, transformers, Data Factory, Databricks, NumPy, Azure OpenAI, prompt engineering, ETL, data pipeline monitoring, model deployment processes, TensorFlow, data visualization (Matplotlib, Seaborn, Tableau), Pandas, AI integration and deployment, big data technologies (Spark, Hive), supervised learning, R, RNN, MLOps frameworks, data mining and cleaning, PyTorch, deep learning techniques

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Your Role and Responsibilities
• 3+ years of experience in Data Science, machine learning, or a related field.
• Strong hands-on experience in machine learning and statistics, focusing on structured and unstructured data problems.
• Practical experience in several of the following areas: time series forecasting, clustering and classification techniques, regression, boosting algorithms, optimization techniques, NLP, recommendation systems, ElasticNet. Excellent programming skills, preferably in Python/PySpark and SQL.
• Understanding of developing, implementing, and deploying machine learning models on cloud platforms (Azure, AWS, GCP) using AWS/Azure Machine Learning, Databricks, or other relevant cloud services.
• Integrate machine learning models into existing systems and applications, ensuring seamless functionality and data flow.
• Understanding of developing and maintaining MLOps pipelines for automated model training, testing, deployment, and monitoring.
• Understanding of monitoring and analysing model performance, providing reports and insights to stakeholders as needed.
• Familiarity with data processing and storage tools, such as SQL, Hadoop, or Spark. Advanced engineering abilities to deliver flexible and scalable end-to-end machine learning solutions.
• Exposure to data visualization software and packages (Power BI, Tableau, matplotlib, d3). Understands challenges in the business area, the applicability of relevant data science disciplines, and system interactions.
• Excellent written and verbal communication skills, confidence in presenting ideas and findings to stakeholders, and the ability to do so at the right level of detail.

Required Technical and Professional Expertise
• Engineering graduate from a reputed institute and/or Master's in Statistics, MBA.
• 3+ years of data science experience.
• Strong expertise and deep understanding of machine learning.
• Strong understanding of SQL and Python. Knowledge of Power BI or Tableau is a plus.
• Exposure to industry-specific (CPG, Manufacturing) use cases is required.
• Strong client-facing skills.
• Must be organized and detail-oriented.
• Excellent communication and interpersonal skills.

Preferred Technical and Professional Experience
• Strong foundation in supervised and unsupervised learning (regression, classification, clustering, etc.).
• Proficiency in ensemble learning (Random Forest, Gradient Boosting, XGBoost, LightGBM, etc.).
• Experience in fine-tuning Large Language Models (LLMs) and working with open-source models (Llama, GPT, BERT, etc.).
• Familiarity with prompt engineering, RAG (Retrieval-Augmented Generation), and fine-tuning techniques.
• Hands-on experience with cloud platforms (AWS, GCP, Azure) for ML model deployment. Familiarity with MLOps and model deployment using Kubernetes, Docker, and MLflow.

About Polestar:
As a data analytics and enterprise planning powerhouse, Polestar Solutions helps its customers bring out the most sophisticated insights from their data in a value-oriented manner. From analytics foundation to analytics innovation initiatives, we offer a comprehensive range of services that help businesses succeed with data. We have a geographic presence in the United States (Dallas, Manhattan, New York, Delaware), the UK (London), and India (Delhi-NCR, Mumbai, Bangalore & Kolkata) and a 600+ strong, world-class team. We are growing at a rapid pace and plan to double our growth each year. This provides immense growth and learning opportunities to those who choose to work with Polestar. We hire from most of the premier undergrad and MBA institutes. We are serving customers across 20+ countries. Our expertise and deep passion for what we do has brought us many accolades, including:
• Recognized as one of the Top 50 Companies for Data Scientists in 2023 by AIM.
• Awarded by the Financial Times as one of the High-Growth Companies across Asia-Pacific for the 5th time in a row in 2023.
• Featured on the Economic Times India's Growth Champions in FY2023.
• Polestar Solutions selected as a 2022 Red Herring Global company.
• Top Data Science Providers in India 2023: Penetration and Maturity (PeMa) Quadrant.
• India's most promising data science companies in 2022 by Analytics Insight.
• Featured on Forrester's Now Tech: Customer Analytics Service Providers Report Q2, 2021.
• Recognized as Anaplan's India RSI Partner of the Year FY21.
• Elite Qlik Partner and a member of the 'Qlik Partner Advisory Council', and Microsoft Gold Partner for Data & Cloud Platforms.

Culture at Polestar: We have some of the most progressive people practices, all aimed at enabling fast-paced growth for those who deserve it.

Posted 1 month ago

Apply

3.0 years

20 - 30 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 2000000 - Rs 3000000 (i.e., INR 20-30 LPA)
Min Experience: 3 years
Location: Bengaluru
Job Type: full-time

We are seeking a highly motivated and results-driven Data Scientist with 3 to 6 years of experience to join our growing data science team. In this role, you will work at the intersection of business and technology, transforming raw data into actionable insights and intelligent models that drive strategic decision-making. Your deep understanding of machine learning algorithms, strong command of Python, and proficiency in SQL will be critical to solving complex business problems and enabling data-driven innovation across the organization.

Requirements

Key Responsibilities:
- Develop, implement, and optimize machine learning models to solve real-world business problems such as customer segmentation, recommendation systems, predictive analytics, churn prediction, and more.
- Translate business requirements into analytical models and data products that align with company goals.
- Clean, preprocess, and manipulate large datasets from various structured and unstructured sources using Python and SQL.
- Build data pipelines and automate data workflows for scalable model deployment and monitoring.
- Collaborate closely with cross-functional teams including data engineering, product, marketing, and operations to define problem statements, prioritize tasks, and deliver solutions.
- Present findings, models, and recommendations to both technical and non-technical stakeholders through clear visualizations and storytelling.
- Continuously research and stay updated on the latest trends in machine learning, statistical modeling, and data science techniques to apply best practices and enhance model performance.
- Ensure the reliability, accuracy, and explainability of models, and participate in code reviews, model validation, and documentation.

Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related quantitative field.
- 3 to 6 years of hands-on experience in data science or machine learning roles.
- Strong proficiency in Python and its data science libraries (such as Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn, TensorFlow, or PyTorch).
- Solid experience with SQL for data querying, manipulation, and aggregation across large datasets.
- Demonstrated ability to build and deploy end-to-end machine learning models in production environments.
- Strong understanding of statistical analysis, data modeling, and algorithm design.
- Experience working with large-scale datasets, preferably in cloud environments (AWS, GCP, or Azure).
- Familiarity with tools like Jupyter Notebooks, Git, and CI/CD pipelines is a plus.
- Excellent problem-solving, critical thinking, and communication skills.
- Ability to work independently in a fast-paced, dynamic environment while collaborating effectively within a team.
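The churn-prediction work mentioned in the responsibilities could look, in miniature, like the sketch below. It is illustrative only and not client code: the feature names and the synthetic churn rule are invented for the example.

```python
# Illustrative sketch: a churn-style classifier on synthetic data.
# Feature names and the churn-generating rule below are made up for the example.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, n),
    "monthly_spend": rng.normal(60, 20, n).clip(5),
    "support_tickets": rng.poisson(1.5, n),
})
# Synthetic label: short tenure and many support tickets make churn more likely.
logit = -0.04 * df["tenure_months"] + 0.5 * df["support_tickets"] - 1.0
df["churned"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.2, random_state=42
)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```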

Posted 1 month ago

Apply

5.0 years

20 - 30 Lacs

Chennai, Tamil Nadu, India

Remote

Experience: 5.00+ years
Salary: INR 2000000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Chennai)
Placement Type: Full-Time Permanent position (Payroll and compliance to be managed by SOL-X)
(Note: This is a requirement for one of Uplers' clients - SOL-X)

What do you need for this opportunity?
Must-have skills: Communication, ETL, health domain, operational analytics, safety domain, data analysis, data handling, Metabase, NoSQL databases, Pandas/NumPy/scikit-learn, TensorFlow or PyTorch, Python, Tableau

SOL-X is looking for: Data Analyst

About Company
SOL-X is a Maritime Safety and Crew Wellbeing solution for a holistic crew well-being framework, empowering workers to use digital products that actively support physical, mental, and emotional well-being. Through advanced analytics and behavioural modelling, SOL-X provides deep insights into how to improve operational safety and empower remote workers to manage their wellbeing.

About Role
We are looking for an experienced Data Analyst to lead our predictive analytics initiatives. You will be responsible for developing robust models and interactive dashboards to extract actionable insights from large, enterprise-level data sets. This role involves end-to-end data management, from collection and preprocessing to model development and deployment, using modern tools and technologies to support data-driven business decisions in a safety-critical domain.

Responsibilities
- Data Acquisition & Preparation: Collect, preprocess, and clean large, complex datasets from multiple sources. Develop and manage robust data pipelines to ensure seamless data ingestion and quality.
- Model Development & Experimentation: Build, test, and implement predictive and explanatory models using machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch. Design and conduct experiments to validate models and continuously improve performance.
- Dashboard & Reporting: Develop interactive dashboards using visualization tools (e.g., Tableau, Metabase, Matplotlib, Seaborn) that enable stakeholders to make informed decisions. Prepare detailed reports and presentations to communicate insights, trends, and recommendations.
- Collaboration & Continuous Improvement: Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Stay up to date with emerging trends and technologies in data science and machine learning, integrating best practices into ongoing projects.

Qualifications
- 5-7 years of experience in data collection, preprocessing, and analysis of large enterprise-level data sets.
- Proven expertise in statistical analysis, predictive modeling, and explanatory modeling techniques.
- Strong proficiency in Python and experience with libraries such as Pandas, NumPy, scikit-learn, and frameworks like TensorFlow or PyTorch.
- Extensive experience with data visualization tools such as Tableau, Metabase, Matplotlib, and Seaborn.
- Hands-on experience working with NoSQL databases.
- Demonstrated ability to extract meaningful insights from complex data sets and effectively communicate them to stakeholders.

Nice To Have / Preferred Qualifications
- Experience with building ETL pipelines is a plus.
- Prior experience in data engineering and managing data pipelines.
- Background in safety or health analytics projects.
- Excellent communication skills and a collaborative mindset.

If you're passionate about leveraging data to drive operational excellence and support strategic decision-making, we'd love to hear from you. Join SOL-X and be part of a dynamic team committed to making a significant impact!

Engagement Type: Direct hire on SOL-X payroll
Job Type: Permanent
Location: Chennai - Complete Onsite
Working time: 9:00 AM to 6:00 PM
Interview Process: 3-4 rounds

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

ROLE DESCRIPTION
Title: Manager - Data Science
Code: TBA
Role Holder (if Currently Filled): TBA
Role archetype: Individual Contributor
Division/Department: Finance - Retail Analytics Unit
Grade/Level: TBD
Reporting to: Senior Manager/Manager Analytics
Location: India
Managing/Leading (if Applicable): N/A
Date of last revision: May 2022

Role Purpose
The Manager - Data Science is responsible for supporting the development of the product vision and roadmap in close collaboration with our product owners, data engineers, and data analysts who are spearheading the Advanced Analytics transformation across Majid Al Futtaim. Majid Al Futtaim Retail is continuing to develop and build its analytics talent to support its advanced analytics agenda. The various analytics use cases launched by the Retail Analytics Unit in a start-up mode will establish the unit's foundation to develop a powerful product portfolio following its sustainable growth. As such, team members are given the latitude to shape the trajectory of the unit and bring their ideas and visions to fruition. He/she will be at the forefront of enhancing the Advanced Analytics value proposition, in line with the long-term digital and non-digital objectives of Majid Al Futtaim.

Role Details - Key Responsibilities and Accountabilities

Designing, Testing & Implementing Data Models
- Design methodologies to estimate business value and track adoption of the developed solutions.
- Leverage expertise in quantitative analysis, data mining, and data manipulation to develop high-quality, advanced statistical models, and partner with product owners, advising on approach and solutioning.
- Enhance new algorithms to address structured/unstructured Retail Analytics problems, and improve existing algorithms to achieve data-driven decision making.
- Lead the development/enhancement of scalable, advanced models for new and existing solutions; validate and optimize model performance.
- Lead the solution development process from proof of concept through to the deployment stage.
- Run experiments to assess model results, analyze each project's key metrics, and develop an impact measurement framework.
- Ideate and develop proof of concepts for new analytics initiatives with a customer-centric approach.
- Build new and creative analytics products with the aim of optimizing the user experience and business operations.
- Actively participate in squad meetings, update teams on progress using proper sprint documents, and communicate effectively with technical and non-technical audiences, elaborating on the models and recommending data-driven solutions.
- Develop talent and build the next generation of skilled and fully engaged Data Scientist/Analyst team members.

Coordination and Communication
- Act as a thought partner to the analytics team and other key stakeholders to identify the scope of improvement and drive the right processes to deliver on the business objectives.
- Liaise with the product team to implement new modules and to maintain and release production pipelines in a timely and responsible manner.
- Ensure regular information exchange with all relevant stakeholders and update them on development progress across projects.
- Contribute to the development of presentations on advanced analytics and performance in key areas of the business and communicate results across the organisation.
- Build relationships and maintain strong partnerships with key personnel to help achieve organisational goals.
- Collaborate with key stakeholders to ensure clarity of the specifications and expectations of the Retail Analytics Unit.

Audit and Reporting
- Responsible for the preparation of business presentations and reports related to Retail Analytics for various stakeholders, on a periodic and ad hoc basis as and when required.
- Support regular audits of various processes and databases for the Retail Analytics Unit in order to identify gaps and risks, and propose corrective actions.

Policies and Procedures
- Support the development and review of the Retail Analytics policies and procedures and ensure they are implemented and reported on as part of the policies and procedures for the Retail Analytics Unit.
- Support the development and implementation of relevant policies and procedures.

Human Capital Responsibilities
- Assist with the implementation of the performance management process by setting objectives, monitoring performance, providing constructive feedback, and providing inputs to senior management.
- Provide mentorship for the purpose of developing a continuous talent pipeline for key roles.
- Provide inputs on training needs and coordinate with the HC department to ensure facilitation of training requirements.
- Develop and implement on-the-job training for the team.
- Provide inputs for the development of the annual manpower plan.
- Ensure the implementation of MAF Retail's corporate policies and relevant procedures.

Disclaimer: This role description reflects the general details considered necessary to describe the principal responsibilities of the role identified and shall not be construed as an exhaustive description of all the work requirements inherent to success in the role.

Definition of Success: To Be Added
Other Context (if Applicable): N/A
Functional/Technical Competencies: To Be Added

Minimum Qualifications/Education (Personal Characteristics and Required Background)
- Bachelor's Degree in an IT-related field or Computer Engineering.
- Master's Degree in a similar field is preferred.

Minimum Experience
- 5-7 years in a senior Data Scientist role in an international environment, building advanced analytics models/solutions/products, with the ability to demonstrate value and a track record.
- 2+ years in the retail/FMCG business is preferred.

Skills
- Experience in several visualization tools such as Tableau, PowerBI, Qlik, BO.
- Experienced in Supply Chain Analytics with a strong understanding of Demand Forecasting, Inventory Planning, and Order Management. Proficient in leveraging advanced analytics, machine learning, and optimization techniques to drive data-driven decision-making, improve forecast accuracy, optimize stock levels, and enhance order fulfillment efficiency.
- Advanced proficiency in various programming languages is a must, such as R/SAS/Python/SQL/BigQuery.
- Advanced knowledge of algorithm/modeling techniques is a must, such as Logistic Regression, Linear Regression, Clustering, Decision Trees, Random Forest, SVM, KNN, etc.
- Advanced experience in deploying machine learning models on cloud services (Azure, AWS, GCP, etc.).
- Advanced experience in time series forecasting, boosting algorithms, optimization techniques, NLP, recommendation systems, ElasticNet.
- Experience in data visualization software and packages (Power BI, matplotlib, d3, highcharts).
- Advanced experience in Azure, Spark, and Git, as well as a basic understanding of web application frameworks (Django, Flask, HTML, JavaScript, CSS, Ajax, jQuery, etc.).
- Collaborative, pragmatic, and proactive problem solver.
- Proven ability to deliver initiatives from conception through to completion; sound understanding of the analytics ecosystem and value chain from both a business and a technical standpoint.
- Ability to work independently and in cross-functional teams.
- Strong business communication and presentation skills with proven experience managing executive-level communications.
- Excellent organizational skills with the ability to prioritize workload.
- Strong English language skills (speaking, reading, and writing) with exceptional business writing; Arabic is a plus.
- Proficient in MS Office (Excel, Word, PowerPoint).

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon

On-site

About FarMart:
FarMart is a modern food supply network connecting farming communities, food businesses, and consumers. We are seamlessly integrating food value chains. We source produce scalably via our first-mile platform, optimize processing through an asset-light model, and subsequently distribute finished food digitally. By consolidating complex supply and distribution channels on a single platform, we are changing the way food is bought and sold in India and the world. Our mission is to create more resilient, reliable, and rewarding food value chains for humanity. At FarMart, we're dedicated to building the good food economy. We're proud to be backed by renowned venture capitalists, including General Catalyst, Matrix Partners, Omidyar Network, and Avaana Capital, who invest in sustainable and purpose-driven tech companies. Our trusted partners include industry leaders like ITC, Sugna, Adani, Olam, Britannia, Glencoe, and Coffeco, among many others. Founded by childhood friends Alekh Sanghera and Mehtab Singh Hans in 2015, FarMart set out to create a scalable tech solution that would make farming a reputable, profitable, and preferred profession for the next generation. Since our launch in 2015, we've established partnerships with over 230,000 farm aggregators and have positively impacted the lives of 3.2 million farmers and more than 2,000 food businesses worldwide. To learn more about us, you can refer to the following media coverage: Moneycontrol, Hindu Business Line, YourStory.

About Us:
At FarMart, we leverage cutting-edge data science techniques to create solutions that drive business success. As a rapidly growing company in [industry], we are seeking a Data Science Intern to join our team and gain real-world experience in applying data analytics and machine learning.

What you will do:
- Work with data teams to collect, clean, and prepare large datasets for analysis.
- Apply machine learning techniques to build predictive models and algorithms.
- Conduct exploratory data analysis (EDA) and create data visualizations to communicate insights.
- Support the implementation of models into production environments.
- Collaborate with stakeholders to understand business needs and provide data-backed solutions.
- Participate in research and development of new methods and tools for analyzing data.

What you should have:
- Pursuing a degree in Data Science, Computer Science, Mathematics, or a related field, and available for a 6-month in-person internship.
- Experience with Python (including libraries like Pandas, Matplotlib, and Seaborn).
- Familiarity with machine learning algorithms and frameworks (e.g., Scikit-learn, TensorFlow, or PyTorch).
- Proficiency in SQL and data extraction.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced environment and manage multiple tasks.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud is a plus.

What we offer you:
- A flat and transparent culture with an incredibly high learning curve, and a swanky informal workspace that defines our open and vibrant work culture.
- Opportunity to solve new and challenging problems with a high scope for innovation, complete ownership of the product, and the chance to conceptualize and implement your solutions.
- Opportunity to work with incredible peers and be part of the tech revolution.
- Most importantly, a chance to be associated with big impact early in your career.

Posted 1 month ago

Apply

2.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Key Responsibilities
- Collect, clean, and preprocess large datasets from multiple sources.
- Apply statistical analysis and machine learning techniques to solve business problems.
- Develop, validate, and deploy predictive models and algorithms.
- Visualize data insights and communicate findings to technical and non-technical stakeholders.
- Collaborate with product, engineering, and business teams to implement data-driven solutions.
- Monitor model performance and improve models based on new data and feedback.
- Stay current with emerging data science techniques and tools.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- Strong programming skills in Python, R, or similar languages.
- Experience with machine learning libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Proficiency in SQL and database management.
- Knowledge of data visualization tools (Tableau, Power BI, matplotlib, etc.).
- Strong understanding of statistics, probability, and algorithms.
- Excellent problem-solving and communication skills.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Marketing Title.

In this role, you will:
- Develop automation utilities and scripts using Python to streamline workflows and processes.
- Perform data analysis to extract meaningful insights from structured and unstructured datasets.
- Create data views and dashboards based on analysis results to support decision-making.
- Design and implement visualizations using libraries like Matplotlib, Seaborn, or Plotly.
- Collaborate with cross-functional teams to gather requirements and deliver tailored solutions.
- Use frameworks like Flask or Django for building web-based utilities.
- Ensure code quality through unit testing, integration testing, and adherence to best practices.
- Document technical designs, processes, and solutions for future reference.

Requirements
To be successful in this role, you should meet the following requirements:
- Proficiency in Python programming, with experience in developing scalable utilities and automation scripts.
- Strong knowledge of data analysis techniques and tools (e.g., Pandas, NumPy).
- Experience with data visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
- Knowledge of REST APIs and integration with external systems.
- Understanding of the software development lifecycle (SDLC) and Agile methodologies.
- Strong verbal and written communication skills.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
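To illustrate the kind of Python automation and Matplotlib reporting utility this role describes, here is a small hypothetical sketch (not HSBC code); the CSV path and column names are assumptions made for the example.

```python
# Illustrative sketch: a reporting utility that aggregates a dataset and saves a chart.
# The file name and columns ("timestamp", "duration_ms") are hypothetical.
import matplotlib.pyplot as plt
import pandas as pd


def build_daily_report(csv_path: str, out_png: str = "daily_report.png") -> pd.DataFrame:
    """Aggregate raw events by day and write a simple trend chart to disk."""
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    daily = df.groupby(df["timestamp"].dt.date)["duration_ms"].agg(["count", "mean"])

    fig, ax = plt.subplots(figsize=(8, 4))
    daily["mean"].plot(ax=ax, marker="o")
    ax.set_title("Mean duration per day")
    ax.set_ylabel("duration (ms)")
    fig.tight_layout()
    fig.savefig(out_png, dpi=150)
    plt.close(fig)
    return daily


if __name__ == "__main__":
    print(build_daily_report("events.csv").tail())
```

A utility like this could be scheduled (e.g., via cron) so the report regenerates without manual effort, which is the spirit of the automation responsibilities listed above.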

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Greater Kolkata Area

On-site

Key Responsibilities
- Team Leadership: Lead, guide, and mentor a team of AI/ML engineers and data scientists, fostering a collaborative, innovative, and productive work environment.
- Clean Code & Development: Write clean, maintainable, and error-free Python code for developing robust machine learning systems.
- Model Engineering: Efficiently design, build, and deploy machine learning models and end-to-end solutions.
- Cross-Functional Collaboration: Work closely with diverse technical teams to ensure timely and efficient delivery of AI/ML projects.
- Data Science Lifecycle: Implement end-to-end data science processes, including data preprocessing, exploratory analysis, modeling, evaluation, and deployment.
- Database Management: Work proficiently with SQL and NoSQL databases to manage and retrieve data effectively.
- API Development: Develop and integrate RESTful APIs using Python web frameworks (e.g., Flask, Django).
- Debugging & Optimization: Diagnose issues and optimize existing code for performance improvements.
- Agile Methodologies: Follow Agile/Scrum software development lifecycle practices consistently.
- LLM & RAG Implementation: Design, develop, fine-tune, and deploy AI models focusing on Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG).
- Model Validation & Evaluation: Conduct training, validation, and evaluation experiments, refining models based on statistical analysis and performance metrics.
- Security & Compliance: Ensure that AI/ML solutions adhere to security best practices and data privacy regulations (e.g., GDPR).

Technical Qualifications
- Python Expertise: 3-5 years of hands-on experience with Python programming.
- ML/DL Experience: 3-5 years of experience building, training, and deploying Machine Learning and Deep Learning models.
- Deep Learning Frameworks: Proficiency with TensorFlow, PyTorch, and Keras.
- Web Frameworks: Solid experience with Python web frameworks (Flask, Django) for API development.
- Core Libraries: Strong expertise in NumPy, Pandas, Matplotlib, Scikit-learn, SpaCy, NLTK, and HuggingFace Transformers.
- Algorithms & Techniques: Deep understanding of ML/DL algorithms, with advanced techniques in NLP and/or Computer Vision.
- Production-Grade AI: Proven experience designing and deploying AI solutions leveraging LLMs and RAG.
- NLP & Conversational AI: Hands-on expertise in text representation, semantic extraction, sentiment analysis, and conversational AI.
- Concurrency: Practical knowledge of multithreading, concurrency, and asynchronous programming in Python.
- Database Proficiency: Advanced skills in SQL (PostgreSQL, MySQL, SQLite, MariaDB) and NoSQL (MongoDB) databases.
- Deployment & MLOps: Experience with containerization (Docker/Kubernetes), cloud platforms (AWS, Azure, GCP), and familiarity with MLOps principles for CI/CD, monitoring, and logging.
- Version Control: Proficient understanding of Git and Bitbucket for software version control.
- C++: Experience with C++ development is a plus.

Soft Skills
- Analytical Thinking: Strong analytical and structured problem-solving capabilities.
- Proactive Problem-Solving: Logical reasoning and a hands-on approach to debugging and optimization.
- Team Collaboration: Effective interpersonal skills to work seamlessly with cross-functional teams.
- Clear Communication: Excellent ability to articulate technical concepts to diverse audiences.
- Continuous Learning: Commitment to staying current on emerging trends in AI/ML, NLP, and Computer Vision.

(ref:hirist.tech)

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You Will Be Doing... The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges. Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights Envision and test for corner cases. Build analytical solutions and models by manipulating large data sets and integrating diverse data sources Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders. Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions. Assist in building data views from disparate data sources which power insights and business cases Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting Develop and implement rigorous frameworks for effective base management. Collaborate with cross-functional teams to discover the most appropriate data sources, fields which cater to the business needs Design modular, reusable Python scripts to automate data processing Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders. What We’re Looking For... You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions.
You Will Need To Have Bachelor’s degree in computer science or another technical field or four or more years of work experience Four or more years of relevant work experience Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.) Curiosity to dive deep into data inconsistencies and perform root cause analysis Programming experience in Python (Pandas, NumPy, SciPy, and Scikit-learn) Experience with visualization tools such as Matplotlib, Seaborn, Tableau, Grafana, etc. A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning Understanding of time series modeling and forecasting techniques Even better if you have one or more of the following: Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI Experience in applying statistical ideas and methods to data sets to answer business problems. Ability to collaborate effectively across teams for data discovery and validation Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference. Excellent interpersonal, verbal and written communication skills. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
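As a toy illustration of the time-series modeling and Matplotlib skills listed above, the sketch below builds a synthetic monthly series with Pandas, produces a seasonal-naive forecast, and plots both. All data and parameters are invented for illustration and do not reflect any Verizon dataset.

```python
# Seasonal-naive forecast on a synthetic monthly series, plotted with Matplotlib.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-01", periods=36, freq="MS")
sales = pd.Series(
    100 + 10 * np.sin(np.arange(36) * 2 * np.pi / 12) + rng.normal(0, 3, 36),
    index=idx,
)

# Repeat the last observed year over the next 12 months.
last_year = sales.iloc[-12:]
future_idx = pd.date_range(idx[-1] + pd.offsets.MonthBegin(1), periods=12, freq="MS")
forecast = pd.Series(last_year.to_numpy(), index=future_idx)

ax = sales.plot(label="history")
forecast.plot(ax=ax, label="seasonal-naive forecast")
ax.set_ylabel("monthly value (synthetic)")
ax.legend()
plt.show()
```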

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You Will Be Doing... The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects focused within three core pillars: Customer Experience, Pricing & Monetization, Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions to help drive actionable business decisions. You will also apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges. Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights Envision and test for corner cases. Build analytical solutions and models by manipulating large data sets and integrating diverse data sources Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders. Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions. Assist in building data views from disparate data sources which power insights and business cases Apply statistical modeling techniques / ML to data and perform root cause analysis and forecasting Develop and implement rigorous frameworks for effective base management. Collaborate with cross-functional teams to discover the most appropriate data sources, fields which cater to the business needs Design modular, reusable Python scripts to automate data processing Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders. What We’re Looking For... You have strong analytical skills, and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions.
You Will Need To Have Bachelor’s degree in computer science or another technical field or four or more years of work experience Four or more years of relevant work experience Proficiency in SQL, including writing queries for reporting, analysis and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.) Curiosity to dive deep into data inconsistencies and perform root cause analysis Programming experience in Python (Pandas, NumPy, SciPy, and Scikit-learn) Experience with visualization tools such as Matplotlib, Seaborn, Tableau, Grafana, etc. A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning Understanding of time series modeling and forecasting techniques Even better if you have one or more of the following: Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI Experience in applying statistical ideas and methods to data sets to answer business problems. Ability to collaborate effectively across teams for data discovery and validation Experience in deep learning, recommendation systems, conversational systems, information retrieval, computer vision Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference. Excellent interpersonal, verbal and written communication skills. Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
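To illustrate the supervised-learning workflow this listing names, here is a minimal scikit-learn sketch on synthetic data: split, fit a model, and report metrics. Nothing here reflects Verizon's actual data or pipelines.

```python
# Generic supervised-learning loop: synthetic data, train/test split, fit, evaluate.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```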

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description In This Role, Your Responsibilities Will Be: Collaborate with cross-functional teams to identify opportunities to apply data-driven insights and develop innovative solutions to complex business problems. Develop, implement, and maintain SQL data pipelines and ETL processes to collect, clean, and curate large and diverse datasets from various sources. Design, build, and deploy predictive and prescriptive machine learning models, and apply generative AI prompt engineering, to help the organization make better data-driven decisions. Perform exploratory data analysis, feature engineering, and data visualization to gain insights and identify potential areas for improvement. Optimize machine learning models and algorithms to ensure scalability, accuracy, and performance while minimizing computational costs. Continuously monitor and evaluate the performance of deployed models, updating or refining them as needed. Stay abreast of the latest developments in data science, machine learning, and big data technologies to drive innovation and maintain a competitive advantage. Develop and implement best practices in data management, data modeling, code, and data quality assurance. Communicate effectively with team members, stakeholders, and senior management to translate data insights into actionable strategies and recommendations. Who You Are: You take initiative, don't wait for instructions, and proactively seek opportunities to contribute. You adapt quickly to new situations and apply knowledge optimally. You clearly convey ideas and actively listen to others to complete assigned tasks as planned. For This Role, You Will Need: Bachelor’s degree in Computer Science, Data Science, Statistics, or a related field; a master's degree or higher is preferred. 3-5 years of experience with popular data science libraries and frameworks such as scikit-learn, SQL, SciPy, TensorFlow, PyTorch, NumPy, and Pandas. Minimum 2 years of experience in data science projects leveraging machine learning, deep learning, transformer-based large language models, or other cutting-edge AI technologies. Strong programming skills in Python are a must. Solid understanding of calculus, linear algebra, probability, machine learning algorithms, Transformer architecture-based models, and data modeling techniques. Proficiency in data visualization tools, such as Matplotlib, Seaborn, Bokeh, or Dash. Strong problem-solving and analytical skills with an ability to synthesize complex data sets into actionable insights. Excellent written and verbal communication skills, with the ability to present technical concepts to non-technical audiences. Preferred Qualifications that Set You Apart: Possession of relevant certifications in data science from reputed universities specializing in AI. Prior experience in engineering would be nice to have. Our Culture & Commitment to You At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams working together are key to driving growth and delivering business results. We recognize the importance of employee wellbeing.
We prioritize providing competitive benefits plans, a variety of medical insurance plans, Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave.
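For readers unfamiliar with the SQL pipeline and ETL work this listing describes, the following sketch uses an in-memory SQLite database as a stand-in for a real warehouse; the table and column names are hypothetical and the data is made up.

```python
# Minimal extract-clean-curate step against a throwaway SQLite "warehouse".
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")

# Load a tiny made-up extract into the warehouse.
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3],
    "amount": [120.0, 80.0, 80.0, None],
    "region": ["north", "south", "south", "north"],
})
raw.to_sql("orders_raw", conn, index=False)

# Extract, then clean: drop rows with missing amounts and duplicate orders.
orders = pd.read_sql("SELECT * FROM orders_raw", conn)
clean = orders.dropna(subset=["amount"]).drop_duplicates(subset="order_id")

# Curate a summary and load it back for downstream dashboards.
summary = clean.groupby("region", as_index=False)["amount"].sum()
summary.to_sql("orders_by_region", conn, index=False, if_exists="replace")
print(summary)
```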

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Introduction We are looking for a Data Analyst Intern to join our team and assist with the collection and conversion of raw data into valuable insights, with a strong focus on carbon credit markets and renewable energy. The intern will support business development efforts by analyzing carbon credit data, identifying buyers, and researching new technologies for carbon credit project development. The ideal candidate will have strong analytical skills, advanced proficiency in Excel, and experience in tools like Python, SQL, and Power BI. A B.Tech (preferred) or a related degree and a keen interest in climate markets are essential for this role. The role involves close collaboration with the business development and marketing teams. About Us Sustainiam is a climate-tech startup founded in 2023, focused on providing innovative solutions for renewable energy and carbon markets. Our advanced platforms, including CiP (Certificate Issuance Platform), EmX (Emission Xchange), and ECal (Emission Calculator), are designed to streamline carbon credit issuance, trading, and emissions tracking. Backed by leading accelerators like IKEA and Visa, Sustainiam has partnered with 100+ global companies, including Coca-Cola and Unilever, to drive meaningful climate action. We are at the forefront of creating technologies that empower businesses to achieve their sustainability goals and contribute to a net-zero future. Your Responsibilities Data Collection & Conversion: Manually collect and convert raw data from various sources into structured formats for business analysis, focusing on carbon credit data. Excel Reporting: Organize and maintain data in Excel, ensuring accuracy, consistency, and reliability for the business and marketing teams. Create advanced reports using pivot tables, formulas, and macros. Carbon Credit Market Analysis: Conduct market analysis on carbon credit retirement activities across registries like Verra and Gold Standard. Extract key insights to help the business development team understand the list of carbon credit buyers and market trends. Business Leads Identification: Convert data into actionable insights and business leads by analyzing trends in the carbon credit market, focusing on potential buyers and sellers. Research on Renewable Energy Technologies: Investigate and report on new technologies used in carbon credit project development, particularly in the renewable energy sector. Collaboration with Renewable Energy Project Developers: Reach out to Renewable Energy (RE) project developers to gather information on potential projects and carbon credit generation. Assist in building relationships with RE project developers and other stakeholders in the carbon credit and renewable energy sectors. Data Cleansing & Reporting: Review, clean, and validate data to ensure it meets required quality standards. Business and Marketing Support: Work closely with the business team to convert raw data into actionable insights and provide valuable marketing support to drive strategies. Support Business Decisions: Provide insights from data that will help the business team make data-driven decisions related to carbon credit markets and renewable energy initiatives. Skills & Experience We Require Technical Expertise: Excel: Strong knowledge of Excel, including advanced functions, formulas, pivot tables, and data manipulation. Python: Experience with data analysis libraries like Pandas, NumPy, and visualization tools like Matplotlib or Seaborn. 
SQL: Proficiency in querying, managing, and analyzing large datasets from databases. Power BI: Familiarity with creating interactive dashboards and reports for data visualization. Conda/Pandas: Knowledge of Python environment management using Conda and experience with Pandas for data cleaning and manipulation. Data Handling: Experience with data collection, conversion, and analysis. Ability to manually collect and structure data from various sources, particularly related to carbon credits and renewable energy. Market Analysis: Understanding of carbon credit markets and the ability to analyze data from registries like Verra and Gold Standard. Ability to identify buyers in the carbon credit market and assist the business development team in reaching out to potential clients. Knowledge of Renewable Energy Technologies: Interest or experience in emerging technologies in carbon credit project development and renewable energy. Business Insight: Ability to identify business leads from raw data and help the business team leverage data to optimize marketing and business strategies. Attention to Detail: Strong attention to detail with the ability to identify inconsistencies and inaccuracies in data. Communication Skills: Good written and verbal communication skills to clearly present data and findings to the business team. Education B.Tech (Bachelor of Technology) in Engineering, Computer Science, or a related technical field. B.Com (Bachelor of Commerce) or a related degree in business, economics, or a similar field. Preferred Qualifications (Optional But a Plus) Interest in Renewable Energy: Experience or interest in renewable energy and carbon credit projects. Carbon Credit Market Knowledge: Familiarity with carbon credit registries (e.g., Verra, Gold Standard) and retirement processes. Climate Markets Understanding: Basic understanding of climate markets and their data analysis.
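As a small illustration of the Pandas and Excel reporting skills this internship lists, the sketch below cleans a made-up carbon-credit retirement table and pivots it for an Excel report. The registry and buyer names are placeholders, and writing .xlsx assumes the openpyxl engine is installed.

```python
# Clean a toy retirement dataset and pivot it into an Excel-ready report.
import pandas as pd

raw = pd.DataFrame({
    "registry": ["Verra", "Gold Standard", "Verra", "Verra"],
    "buyer": ["Acme Co", "Beta Ltd", "Acme Co", None],
    "credits_retired": [1200, 800, 400, 150],
})

clean = raw.dropna(subset=["buyer"])  # drop records without an identified buyer
report = pd.pivot_table(
    clean,
    index="registry",
    columns="buyer",
    values="credits_retired",
    aggfunc="sum",
    fill_value=0,
)

report.to_excel("retirements_by_registry.xlsx")  # requires the openpyxl engine
print(report)
```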

Posted 1 month ago

Apply

4.0 years

22 - 25 Lacs

Hyderābād

On-site

We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms. Experience Required: 5+ years. Must Have: Kubernetes experience (2 years), Python (Django/Flask), Automation & Scripting, Data Manipulation (Pandas, NumPy, SQL/NoSQL), Data Visualization (Matplotlib, Seaborn, Plotly). Position Overview: We are seeking a skilled Python Developer with Automation expertise to join our team in Chennai. The ideal candidate will have a strong background in Python development, experience in automation, and proficiency in working with data manipulation and visualization tools. This role requires collaboration with cross-functional teams to develop efficient and scalable solutions. Key Responsibilities: Develop and maintain applications using Python frameworks like Django or Flask. Design and implement automation solutions to streamline processes and enhance system efficiency. Work with Pandas, NumPy, and SQL/NoSQL databases to manage and manipulate large datasets. Create data visualization dashboards using tools such as Matplotlib, Seaborn, or Plotly. Collaborate with stakeholders to understand requirements and deliver high-quality solutions. Ensure best coding practices, maintain documentation, and optimize performance. Required Qualifications & Skills: 5+ years of experience in Python development with a strong understanding of Django or Flask frameworks. Hands-on experience in automation, including scripting and process automation. Strong knowledge of data handling and manipulation using Pandas, NumPy, and SQL/NoSQL databases. Experience with data visualization tools such as Matplotlib, Seaborn, or Plotly. Ability to work onsite/hybrid in Chennai. Preferred Qualifications: Experience working in a consulting or enterprise environment. Exposure to cloud platforms (AWS, Azure, or GCP). Knowledge of DevOps tools and CI/CD pipelines. Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Schedule: Day shift Fixed shift Rotational shift Weekend availability Work Location: In person
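To give a concrete sense of the Flask and data-handling skills in the "Must Have" list, here is a minimal sketch that exposes a Pandas summary through a REST endpoint. The route, data, and payload are hypothetical, not part of any real service.

```python
# Tiny Flask service that serves a Pandas aggregation as JSON.
import pandas as pd
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical job-run log that an automation pipeline might produce.
jobs = pd.DataFrame({
    "status": ["ok", "ok", "failed"],
    "runtime_s": [12.1, 9.8, 30.5],
})

@app.route("/jobs/summary")
def job_summary():
    # Average runtime per status, returned as JSON for a dashboard to consume.
    summary = jobs.groupby("status")["runtime_s"].mean().round(2)
    return jsonify(summary.to_dict())

if __name__ == "__main__":
    app.run(debug=True)
```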

Posted 1 month ago

Apply

2.0 years

9 - 20 Lacs

Mohali

On-site

Experience Required: 2-5 years No. of vacancies: 4 Job Type: Full Time Vacancy Role: WFO Job Category: Development Job Description We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience in Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus. Roles & Responsibilities Analyze and process large datasets using Python and Pandas. Develop and optimize machine learning models for predictive analytics. Create data visualizations using Matplotlib and Seaborn to support decision-making. Perform data cleaning, feature engineering, and statistical analysis. Work with structured and unstructured data to extract meaningful insights. Implement and fine-tune NER models for specific use cases (if required). Collaborate with cross-functional teams to drive data-driven solutions. Qualifications 2+ years of professional experience. Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.). Experience in data analysis, statistical modeling, and machine learning. Hands-on expertise in data visualization using Matplotlib and Seaborn. Understanding of SQL and database querying. Familiarity with NLP techniques and NER models is a plus. Strong problem-solving and analytical skills. Job Type: Full-time Pay: ₹900,000.00 - ₹2,000,000.00 per year Benefits: Flexible schedule Leave encashment Provident Fund Schedule: Day shift Work Location: In person
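As an illustration of the Matplotlib and Seaborn visualization work this role emphasizes, the sketch below draws two quick plots from Seaborn's bundled "tips" sample dataset (chosen purely for convenience; it is fetched over the network on first use).

```python
# Two quick exploratory plots with Seaborn on top of Matplotlib axes.
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset("tips")  # small sample dataset, downloaded on first use

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
sns.histplot(tips["total_bill"], bins=20, ax=axes[0])
axes[0].set_title("Distribution of total bill")
sns.boxplot(data=tips, x="day", y="tip", ax=axes[1])
axes[1].set_title("Tips by day")
plt.tight_layout()
plt.show()
```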

Posted 1 month ago

Apply

2.0 years

2 - 4 Lacs

Bengaluru

On-site

ML Intern Hyperworks Imaging is a cutting-edge technology company based in Bengaluru, India, since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real-world problems. We are rapidly growing, working with multiple companies around the world. JOB OVERVIEW We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms at the intersection of energy and climate change. The ideal candidate will work on a complete ML pipeline, starting from extraction, transformation and analysis of data to developing novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline. RESPONSIBILITIES: Extract, clean, and transform data from various sources using OCR or Deep Learning. Develop and implement web scraping scripts using Python libraries like BeautifulSoup and Selenium. Build and maintain data pipelines for efficient data ingestion and processing. Perform data analysis using libraries like Pandas, NumPy, and Matplotlib to identify trends, patterns, and insights. Implement data pipelines and novel machine learning algorithms for processing satellite imagery, geophysical data, and other relevant datasets using Python, PyTorch and TensorFlow. Optimize and evaluate ML models to ensure accuracy and performance. Define system requirements and integrate ML algorithms into cloud-based workflows. Write clean, well-documented, and maintainable code following best practices. Stay up-to-date with advancements in data science and geology, carbon sequestration and resource mapping methods. REQUIREMENTS: 2+ years of experience in data science, machine learning, or a similar role. Demonstrated expertise with Python, PyTorch, and TensorFlow. Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Geology/Physics. Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning. Preference will be given to candidates with a background in geology, petrology, or related research. WHO CAN APPLY: Only those candidates will be considered who have relevant skills and interests, can commit full time, can show prior work and deployed projects, and can start immediately. Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above. SALARY DETAILS: Commensurate with experience. JOINING DATE: Immediate Job Type: Full-time Pay: ₹20,000.00 - ₹40,000.00 per month Benefits: Flexible schedule Schedule: Day shift Evening shift Night shift Supplemental Pay: Performance bonus Application Question(s): Can you join full time? Have you studied (or are you studying) at IITs/NITs/BITS/ISM Dhanbad? Education: Bachelor's (Required) Experience: Machine learning: 4 years (Required) Application Deadline: 15/07/2025 Expected Start Date: 21/07/2025
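To illustrate the requests + BeautifulSoup scraping step named in the responsibilities, here is a minimal sketch; the URL and CSS selector are placeholders rather than a real data source.

```python
# Fetch a page and pull text from table cells with BeautifulSoup.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/reports"  # placeholder page, not a real source

response = requests.get(URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# Collect the text of every table cell; the selector is illustrative only.
cells = [td.get_text(strip=True) for td in soup.select("table td")]
print(cells)
```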

Posted 1 month ago

Apply

0 years

1 - 4 Lacs

Coimbatore

On-site

Data Science Intern Join us as a Data Science Intern and work on meaningful projects that turn data into actionable insights. You’ll collaborate with experienced data scientists and analysts to explore data, build dashboards, and apply statistical models to solve business problems. Key Responsibilities: Collect, clean, and preprocess structured and unstructured data. Perform exploratory data analysis (EDA) to uncover trends and patterns. Develop data visualizations using tools like Matplotlib, Seaborn, or Power BI. Build and validate statistical or predictive models using Python/R. Translate data findings into business recommendations. Work with large datasets using SQL and cloud-based platforms. Document workflows and present outcomes to stakeholders. Preferred Skills: Strong proficiency in Python and SQL Familiarity with libraries such as Pandas, NumPy, Scikit-learn, and Statsmodels Basic understanding of statistics, probability, and hypothesis testing Experience with data visualization tools (e.g., Tableau, Power BI, Plotly) Knowledge of version control systems like Git Benefits: Real-world data projects with measurable business value Guided mentorship and regular feedback sessions Exposure to analytics tools and industry practices Internship certificate and performance-based recommendation letter Possibility of pre-placement offer (PPO) Job Types: Part-time, Fresher, Internship Contract length: 3 months Pay: ₹12,000.00 - ₹35,000.00 per month Benefits: Flexible schedule Schedule: Day shift Monday to Friday Morning shift Rotational shift Weekend availability Supplemental Pay: Performance bonus Work Location: In person
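As a small example of the EDA and hypothesis-testing skills listed above, the sketch below compares two synthetic groups with a t-test from SciPy and plots their distributions with Matplotlib; the data and group labels are invented.

```python
# Two-sample t-test on synthetic groups, plus a quick histogram comparison.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(50, 5, 200)  # e.g., a metric for variant A (made up)
group_b = rng.normal(52, 5, 200)  # e.g., a metric for variant B (made up)

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

plt.hist(group_a, bins=20, alpha=0.6, label="A")
plt.hist(group_b, bins=20, alpha=0.6, label="B")
plt.legend()
plt.title("Synthetic A/B comparison")
plt.show()
```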

Posted 1 month ago

Apply

1.0 - 2.0 years

3 - 4 Lacs

Jaipur

On-site

JOB SUMMARY We are looking for a passionate and skilled PYTHON DEVELOPER with experience in AI/ML to join our growing engineering team. The candidate will work on building intelligent solutions, developing data pipelines, and deploying machine learning models that power real-world applications. KEY RESPONSIBILITIES ◾Design, develop, and optimize Python-based backend systems and ML workflows. ◾Build and train machine learning models using libraries like Scikit-learn, TensorFlow, or PyTorch. ◾Work on data ingestion, cleaning, preprocessing, and model evaluation. ◾Implement RESTful APIs to serve AI/ML model outputs. ◾Collaborate with data scientists, frontend developers, and DevOps for model deployment. ◾Write clean, efficient, and scalable code with proper documentation. ◾Continuously improve system performance and accuracy via retraining, tuning, and feedback loops. REQUIRED SKILLS & QUALIFICATIONS ◾Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. ◾1-2 years of hands-on experience in Python development. ◾Solid understanding of machine learning algorithms and data structures. ◾Experience with ML libraries such as Scikit-learn, TensorFlow, or PyTorch. ◾Knowledge of model deployment (Flask/FastAPI; Docker preferred). ◾Experience working with version control (Git) and cloud services (AWS/GCP/Azure) is a plus. ◾Strong problem-solving and communication skills. ADDITIONAL PREFERRED SKILLS ✔ Experience with MLOps or AutoML pipelines. ✔ Familiarity with NLP, computer vision, or time-series forecasting. ✔ Knowledge of SQL/NoSQL databases and data warehousing tools. ✔ Exposure to data visualization tools (e.g., Matplotlib, Seaborn, Plotly). Job Type: Full-time Pay: ₹25,000.00 - ₹35,000.00 per month Benefits: Flexible schedule Schedule: Day shift Fixed shift Monday to Friday Application Question(s): Immediate Joiner Work Location: In person
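To illustrate the model-deployment preparation this role describes, here is a minimal sketch that trains a scikit-learn model, persists it with joblib, and reloads it the way a serving process would; the Flask/FastAPI endpoint itself is omitted, and the file name is arbitrary.

```python
# Train, persist, and reload a model artifact ahead of serving it from an API.
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small model on a bundled dataset (stand-in for real project data).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Persist the artifact that a Flask/FastAPI service would later load.
joblib.dump(model, "model.joblib")
restored = joblib.load("model.joblib")
print(restored.predict(X[:3]))
```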

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies