
715 Plotly Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0.0 - 3.0 years

0 Lacs

jodhpur, rajasthan

On-site

As a Full Stack Developer (AI/ML) at our company in Jodhpur, you will have the exciting opportunity to work on cutting-edge AI-powered applications, develop and deploy ML models, and integrate them into full-stack web solutions. Here's what you can expect in this role: Key Responsibilities: - Design, develop, and deploy AI/ML models for real-world applications. - Work with Python, TensorFlow, PyTorch, Scikit-Learn, and OpenCV for machine learning tasks. - Develop RESTful APIs to integrate AI models with web applications. - Implement and maintain backend services using Node.js, FastAPI, or Flask. - Build and enhance frontend UIs using React.js, Vue.js, or Angular. - Manage databases (SQL & NoSQL like PostgreSQL, MongoDB, Firebase) for storing model outputs. - Optimize AI models for performance, scalability, and deployment using Docker, Kubernetes, or cloud services (AWS, GCP, Azure). - Work on data preprocessing, feature engineering, and model evaluation. - Deploy models using MLOps pipelines (CI/CD, model versioning, monitoring). - Collaborate with cross-functional teams for AI integration into business applications. Required Skills & Qualifications: - 0-1 year of experience in AI/ML and full-stack development. - Strong proficiency in Python, TensorFlow, PyTorch, Scikit-Learn, Pandas, NumPy. - Experience with backend development (Node.js, FastAPI, Flask, or Django). - Knowledge of frontend frameworks (React.js, Vue.js, or Angular). - Familiarity with database management (SQL - PostgreSQL, MySQL | NoSQL - MongoDB, Firebase). - Understanding of machine learning workflows, model training, and deployment. - Experience with cloud services (AWS SageMaker, Google AI, Azure ML). - Knowledge of Docker, Kubernetes, and CI/CD pipelines for ML models. - Basic understanding of Natural Language Processing (NLP), Computer Vision (CV), and Deep Learning. - Familiarity with data visualization tools (Matplotlib, Seaborn, Plotly, D3.js). - Strong problem-solving and analytical skills. - Good communication and teamwork skills. Preferred Qualifications: - Experience with LLMs (GPT, BERT, Llama) and AI APIs (OpenAI, Hugging Face). - Knowledge of MLOps, model monitoring, and retraining pipelines. - Understanding of Reinforcement Learning (RL) and Generative AI. - Familiarity with Edge AI and IoT integrations. - Previous internship or project experience in AI/ML applications. In this role, you will receive a competitive salary package, hands-on experience with AI-driven projects, mentorship from experienced AI engineers, a flexible work environment (Remote/Hybrid), and career growth and learning opportunities in AI/ML.
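
Purely as an illustrative sketch of the model-serving task described in this listing (not code from the employer), the snippet below wraps a pre-trained scikit-learn model in a FastAPI endpoint; the model file name, feature layout, and route are placeholder assumptions.

```python
# Hedged sketch only: serving a pre-trained scikit-learn model over a REST endpoint
# with FastAPI. "model.joblib", the feature layout, and the route are illustrative
# placeholders, not details taken from the job listing.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Prediction service")
model = joblib.load("model.joblib")  # hypothetical model trained and saved elsewhere


class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector in the order the model expects


@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    X = np.asarray(req.features, dtype=float).reshape(1, -1)
    y = model.predict(X)
    return {"prediction": y.tolist()}
```

A typical local run (assuming the file is saved as main.py) would be `uvicorn main:app --reload`, then POSTing a JSON body such as {"features": [1.0, 2.0]} to /predict.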

Posted 2 days ago

Apply

4.0 - 6.0 years

0 Lacs

pune, maharashtra, india

On-site

Role Summary: We are seeking a talented Data Scientist to join our growing team and play a crucial role in developing cutting-edge ESG analytics solutions. The successful candidate will work with diverse ESG datasets to create models, insights, and tools that help our clients make informed sustainability decisions and meet regulatory compliance requirements. Key Responsibilities – Data Analysis & Modeling: Develop and implement machine learning models for ESG risk assessment, sustainability scoring, and impact measurement; analyze complex datasets including carbon emissions, supply chain data, social impact metrics, and governance indicators; create predictive models for climate risk scenarios, regulatory compliance, and sustainability performance forecasting; design and build ESG rating methodologies and scoring algorithms. ESG-Specific Analytics: Work with environmental data (carbon footprint, energy consumption, waste management, water usage); analyze social metrics (diversity & inclusion, labor practices, community impact, human rights); evaluate governance factors (board composition, executive compensation, transparency, ethics); develop sector-specific ESG benchmarks and peer analysis frameworks. Data Engineering & Management: Design and maintain robust data pipelines for ESG data ingestion from multiple sources; ensure data quality, consistency, and accuracy across various ESG databases; work with alternative data sources including satellite imagery, news sentiment, and regulatory filings; implement data governance frameworks compliant with Indian and international standards. Visualization & Reporting: Create compelling data visualizations and interactive dashboards for ESG metrics; develop automated reporting solutions for regulatory compliance (BRSR, TCFD, GRI standards); build client-facing analytics tools and ESG performance monitoring systems; present complex analytical findings to both technical and non-technical stakeholders. Collaboration & Innovation: Collaborate with sustainability consultants, policy experts, and domain specialists; stay updated with evolving ESG regulations, frameworks, and industry best practices; contribute to research and development of new ESG measurement methodologies; support business development activities with technical expertise and proof-of-concepts. Required Qualifications – Education & Experience: Master's degree in Data Science, Statistics, Computer Science, Environmental Science, or related field; 4-6 years of experience in data science, analytics, or quantitative research; prior experience in ESG, sustainability, climate risk, or related domains preferred; experience with financial services, consulting, or regulatory compliance is a plus. Technical Skills – Programming: Proficiency in Python/R, SQL, and statistical analysis libraries (pandas, scikit-learn, numpy); Machine Learning: experience with supervised/unsupervised learning, time series analysis, and ensemble methods; Data Visualization: expertise in Tableau, Power BI, Plotly, or similar tools; Cloud Platforms: experience with AWS, Azure, or GCP for data processing and model deployment; Databases: knowledge of SQL and NoSQL databases, data warehousing concepts; Big Data: familiarity with Spark, Hadoop, or similar big data technologies. ESG Knowledge: Understanding of ESG frameworks (GRI, SASB, TCFD, CDP, UN Global Compact); knowledge of Indian ESG regulations and reporting requirements (BRSR, SEBI guidelines); familiarity with climate science, carbon accounting, and sustainability metrics; awareness of international ESG standards and rating methodologies. Soft Skills: Strong analytical thinking and problem-solving abilities; excellent communication skills in English and Hindi; ability to work in cross-functional teams and manage multiple projects; detail-oriented with strong quality assurance mindset; passion for sustainability and positive environmental/social impact.

Posted 2 days ago

Apply

0 years

0 Lacs

chennai, tamil nadu, india

On-site

Who Are We Raptee.HV is a full-stack electric motorcycle startup with a very strong technical moat, founded in 2019 by four engineers from Chennai (ex-Tesla, Wipro), working on bringing a no-compromise upgrade motorcycle to an otherwise scooter-only EV market. Raptee is incubated at CIIC & ARAI. Role Overview We are seeking a highly motivated and talented Data Engineer Intern to play a pivotal role in establishing our data infrastructure. This is a unique greenfield opportunity to build our data practice from the ground up. The ideal candidate is a proactive problem-solver, passionate about transforming raw data into actionable insights, and excited by the challenge of working with complex datasets from IoT devices and user applications in the electric vehicle (EV) domain. You will be instrumental in creating the systems that turn data into intelligence. What You’ll Do ETL Pipeline Development: Design, build, and maintain foundational ETL (Extract, Transform, Load) processes to ingest and normalize data from diverse sources, including vehicle sensors, user applications, JSON/CSV files, and external APIs. Data Analysis & Trend Discovery: Perform exploratory data analysis to uncover trends, patterns, and anomalies in large datasets. Your insights will directly influence product strategy and development. Insight Visualization: Develop and manage interactive dashboards, reports, and data visualization applications to communicate key findings and performance metrics to both technical and non-technical stakeholders. Data Infrastructure: Assist in setting up and managing scalable data storage solutions, ensuring data integrity, security, and accessibility for analysis. Cross-Functional Collaboration: Work closely with engineering and product teams to understand data requirements and help integrate data-driven decision-making into all aspects of our operations. Who Can Apply? Strong proficiency in Python and its core data manipulation libraries (e.g., Pandas, NumPy). Solid understanding of SQL for querying and managing relational databases. Demonstrable experience in parsing and handling common data formats like JSON and CSV. Excellent analytical and problem-solving skills with a meticulous attention to detail. Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Statistics, or a related quantitative field. Preferred Qualifications (What Will Set You Apart) Hands-on experience with at least one major cloud platform (e.g., AWS, Google Cloud Platform). Familiarity with data visualization libraries (e.g., Matplotlib, Seaborn, Plotly) or business intelligence tools (e.g., Tableau, Power BI). Basic understanding of machine learning concepts. Previous experience working with REST APIs to retrieve data. A genuine interest in the electric vehicle (EV) industry, IoT data, or sustainable technology. What’s In It For You Invaluable hands-on experience building a data ecosystem from scratch in a high-growth, cutting-edge industry. The opportunity to apply your skills to real-world challenges by working with large-scale, complex datasets. A chance to develop a versatile skill set across the entire data lifecycle, from engineering and pipeline development to advanced analysis and visualization. The ability to make a direct and tangible impact on a product that is shaping the future of mobility.
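
As a rough, hedged sketch of the kind of ETL work this internship describes (not code from the company), the snippet below ingests telemetry from a CSV file and a JSON export, normalises it with pandas, and loads it into a local SQLite table; every file, column, and table name is an invented placeholder.

```python
# Hedged ETL sketch: file names, column names, and the SQLite target are invented
# placeholders; a real pipeline would ingest vehicle/app data from its actual sources.
import json
import sqlite3
import pandas as pd

def extract(csv_path: str, json_path: str) -> pd.DataFrame:
    csv_df = pd.read_csv(csv_path)                      # e.g. exported sensor log
    with open(json_path) as fh:
        json_df = pd.json_normalize(json.load(fh))      # e.g. list of API records
    return pd.concat([csv_df, json_df], ignore_index=True)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
    df = df.dropna(subset=["vehicle_id", "timestamp"])  # drop rows missing key fields
    return df.drop_duplicates().sort_values("timestamp")

def load(df: pd.DataFrame, db_path: str = "telemetry.db") -> None:
    with sqlite3.connect(db_path) as conn:
        df.to_sql("telemetry", conn, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("telemetry.csv", "telemetry.json")))
```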

Posted 3 days ago

Apply

10.0 years

0 Lacs

pune, maharashtra, india

On-site

Lead Simulation Engineer – Digital Twin Simulation & Network Optimization This position will be based in India – Bangalore/Pune Job Summary We are looking for a Lead Simulation Engineer to spearhead our efforts in Digital Twin Simulation and Network Optimization . In this senior role, you will architect, design, and oversee the development of advanced simulation and optimization models to enhance global logistics, warehouse operations, and supply chain network efficiency. You will lead a team of engineers, mentor junior members, and collaborate cross-functionally with product, data, and operations teams to deliver scalable, high-impact solutions. The ideal candidate combines deep technical expertise in simulation and optimization with leadership skills and a strategic mindset to drive innovation and operational excellence. Key Responsibilities Technical Leadership: Define the vision, architecture, and roadmap for simulation and optimization initiatives, ensuring scalability and alignment with business goals. Model Development: Lead the design and implementation of digital twin simulations for warehouse, logistics, and supply chain systems. Optimization Strategy: Drive the creation of advanced network optimization models to improve flows, resource allocation, and performance at scale. Collaboration & Integration: Partner with business stakeholders, data engineers, and operations teams to embed simulation-driven insights into decision-making and daily workflows. Mentorship & Guidance: Coach, mentor, and upskill junior simulation engineers, fostering technical excellence and innovation. Quality & Delivery: Establish best practices for coding, testing, and documentation to ensure robust and maintainable solutions. Insights & Communication: Present findings, optimizations, and recommendations to senior leadership in both technical and business terms. Innovation: Explore and evaluate emerging techniques (AI/ML-enhanced simulations, real-time twins, advanced heuristics) to push boundaries in simulation-driven decision support. Must-Have Technical Skills Python Expertise: Advanced proficiency in OOP, algorithms, data structures; writing scalable, modular, testable code. Data Handling & Computation: pandas, NumPy for large-scale data analysis and numerical simulations. Data Integration: Proficient in REST APIs (requests) and SQL for data ingestion. Discrete-Event Simulation: Expertise in DES principles with SimPy, modeling large, complex systems (warehousing, logistics, supply chains). Optimization & Operations Research: Strong background in LP/MIP, solving large-scale VRP/assignment problems with OR-Tools, Pyomo, or PuLP. Graph Analytics: Advanced use of NetworkX for analyzing supply chain topologies and flows. Software Engineering Practices: Git, CI/CD pipelines, Docker, and cloud-native development. API Development: Designing and deploying REST services (Flask, FastAPI) for scalable simulation integration. Visualization: Building dashboards/plots with Matplotlib, Seaborn, or Plotly to communicate insights effectively. Nice-to-Have Technical Skills Advanced Optimization: Commercial solvers (Gurobi, CPLEX) with Python APIs; heuristics and meta-heuristics for VRP and complex routing. ML-Enhanced Simulations: Using scikit-learn, TensorFlow, or PyTorch for predictive and adaptive modeling. Alternative Simulation Paradigms: Agent-based or hybrid modeling approaches. Streaming & IoT: Real-time data processing with Kafka, MQTT for live digital-twin updates. 
Geospatial Analytics: Tools like GeoPandas, Shapely, OSRM, Google Maps APIs for route optimization and visualization. Interactive Dashboards: Dash or Streamlit for simulation/optimization interfaces. 3D Visualization: PyVista, Vedo, or similar libraries for advanced digital-twin visualizations. Who You Are 10+ years of experience in simulation, optimization, or applied operations research, with at least 3+ years in a lead or managerial role. Proven success in leading cross-functional simulation/optimization projects that directly impacted operations or logistics. Strong problem-solving, analytical, and decision-making skills with the ability to balance technical depth and business impact. Excellent communication and leadership skills, with the ability to influence both technical teams and business stakeholders. Passionate about innovation, continuous improvement, and building next-generation digital twins for global-scale systems. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
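
To make the discrete-event simulation requirement above concrete, here is a minimal, illustrative SimPy model of trucks queuing for a limited number of loading docks at a warehouse; the arrival and service rates and shift length are invented numbers, not figures from the role description.

```python
# Illustrative discrete-event simulation with SimPy: trucks queuing for loading docks.
# All rates and the shift length are made-up numbers for demonstration only.
import random
import simpy

NUM_DOCKS = 2
MEAN_ARRIVAL = 10.0   # minutes between truck arrivals
MEAN_UNLOAD = 15.0    # minutes to unload one truck
SHIFT_MINUTES = 8 * 60

def truck(env, docks, wait_times):
    arrived = env.now
    with docks.request() as req:
        yield req                                           # wait for a free dock
        wait_times.append(env.now - arrived)
        yield env.timeout(random.expovariate(1 / MEAN_UNLOAD))

def arrivals(env, docks, wait_times):
    while True:
        yield env.timeout(random.expovariate(1 / MEAN_ARRIVAL))
        env.process(truck(env, docks, wait_times))

random.seed(42)
env = simpy.Environment()
docks = simpy.Resource(env, capacity=NUM_DOCKS)
wait_times = []
env.process(arrivals(env, docks, wait_times))
env.run(until=SHIFT_MINUTES)
print(f"trucks unloaded: {len(wait_times)}, "
      f"mean dock wait: {sum(wait_times) / len(wait_times):.1f} min")
```

Replacing the exponential draws with empirical distributions and adding further resources (labour, yard space, vehicles) is how a toy model like this grows toward a warehouse digital twin.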

Posted 3 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist. In this role, you will: Improve and refactor existing codebases for performance, readability, and reusability. Handle large datasets and perform data aggregation, filtering, and transformation using Pandas and NumPy. Generate basic data visualizations using Matplotlib, Seaborn, or Plotly to support reporting and analysis. Interact with various data sources (CSV, Excel, REST APIs, JSON, Databases). Write clear, modular, and testable code with unit tests using frameworks such as pytest or unittest. Collaborate with cross-functional teams to understand business logic and automate manual tasks. Design and maintain CI/CD pipelines using Jenkins for Python-based tools and automations. Automate testing, code validation, and deployment processes by integrating Jenkins with Git repositories. Requirements To be successful in this role, you should meet the following requirements: Full stack Development with Python Agile methodology and DevOps CI/CD Development Environments/Tools: GIT, Maven, Jenkins, Ansible. CI/CD Project Management Tools: JIRA, Confluence RESTful API Development Analytical skills, Communication Skills Cloud infrastructure development (good to have) Understanding of Monitoring and Alerting Domain (good to have) You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
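
As an illustrative, non-prescriptive example of the aggregation-and-visualisation tasks listed above, the snippet below groups records with Pandas and renders the result with Plotly Express; the CSV name and column names are placeholders, not details from the role.

```python
# Hedged sketch of a pandas aggregation rendered with Plotly Express.
# "transactions.csv" and its columns are invented placeholders.
import pandas as pd
import plotly.express as px

df = pd.read_csv("transactions.csv", parse_dates=["booking_date"])

monthly = (
    df.assign(month=df["booking_date"].dt.to_period("M").astype(str))
      .groupby(["month", "product"], as_index=False)["amount"].sum()
)

fig = px.bar(monthly, x="month", y="amount", color="product",
             barmode="group", title="Monthly volume by product")
fig.write_html("monthly_volume.html")  # self-contained file that can be shared or archived
```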

Posted 3 days ago

Apply

7.0 - 10.0 years

0 Lacs

nashik, maharashtra, india

On-site

Job Title: Python Developer Job Summary: We are looking for a skilled Python Developer with 7 to 10 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives. Responsibilities Application and Back-End Development: Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency. Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services. Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices. Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval. Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions. Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency. AI/ML Development: Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision. Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs. Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering. Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow. Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services. Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems. Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives. Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary. Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges. Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models. Performance Optimization and Debugging: Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency. Use profiling tools to debug and optimize code for memory and speed improvements. Implement caching mechanisms to reduce latency and improve application responsiveness. Testing, Deployment, and Maintenance: Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose. Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes. Deploy and manage applications in production environments with a focus on security, monitoring, and reliability. 
Monitor and troubleshoot live systems, ensuring uptime and responsiveness. Collaboration and Teamwork: Work closely with front-end developers, designers, and product managers to implement new features and resolve issues. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery. Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement. Required Skills and Qualifications Technical Expertise: Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI. Solid understanding of RESTful API development, integration, and optimization. Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB). Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes. Expertise in using Git for version control and collaborating in distributed teams. Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI. Strong understanding of software development principles, including OOP, design patterns, and MVC architecture. Preferred Skills: Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ. Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights. Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus. Familiarity with big data frameworks like Apache Spark or Hadoop. Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run. Soft Skills: Strong problem-solving abilities with a keen eye for detail and quality. Excellent communication skills to effectively collaborate with cross-functional teams. Adaptability to changing project requirements and emerging technologies. Self-motivated with a passion for continuous learning and innovation.

Posted 3 days ago

Apply

0 years

0 Lacs

serilingampalli, telangana, india

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. At our core, we are dedicated to enriching lives by bridging the gap between individuals and premium wireless experiences that not only meet but exceed expectations in value and quality. We believe that everyone deserves access to seamless, reliable, and affordable wireless solutions that enhance their day-to-day lives, connecting them to what matters most. By joining our team, you'll play a pivotal role in this mission, working towards delivering innovative, customer-focused solutions that open up a world of possibilities. We're not just in the business of technology; we're in the business of connecting people, empowering them to explore, share, and engage with the world around them in ways they never thought possible. Building on our commitment to connect people with quality experiences that offer the best value in wireless, let's delve deeper into how we strategically position our diverse portfolio to cater to a broad spectrum of needs and preferences. Our portfolio, comprising 11 distinct brands, is meticulously organized into five families, each designed to address specific market segments and distribution channels to maximize reach and impact. Total by Verizon & Verizon Prepaid: At the forefront, we have Total by Verizon and Verizon Prepaid, our flagship brands available at Verizon exclusive and/or national/retail stores. Verizon Prepaid continues to maintain a robust and loyal consumer base, while Total by Verizon is on a rapid ascent, capturing the hearts of more customers with its compelling offerings. Straight Talk, TracFone, and Walmart Family Mobile: Straight Talk, Tracfone, and Walmart Family Mobile stand as giants in our brand portfolio, boasting significant presence in Walmart. Their extensive reach and solidified position in the market underscore our commitment to accessible, high-quality wireless solutions across diverse retail environments. Visible: Visible, as a standalone brand family, caters to the digitally-savvy, single-line customers who prefer streamlined, online-first interactions. This brand is a testament to our adaptability, embracing the digital evolution of customer engagement. Simple Mobile: Carving out a niche of its own, Simple Mobile shines as the premier choice among authorized resellers. Its consistent recognition as the most carried brand in Wave7 Research’s prepaid dealer survey for 36 consecutive quarters speaks volumes about its popularity and reliability. SafeLink: SafeLink remains dedicated to serving customers through government subsidies. With a strategic pivot towards Lifeline in the absence of ACP, SafeLink continues to fulfill its mission of providing essential communication services to those in need. Join the team that connects people with quality experiences that give them the best value in wireless. 
Data Science What You’ll Be Doing: Knowledge of experimental design and testing frameworks such as A/B testing and multi-armed bandit testing to guide optimal decisions across various key base-management initiatives related to churn reduction, revenue growth and customer satisfaction. Build frameworks to enable effective measurement of experiments at scale. Research and build proof-of-concept models utilizing various machine learning techniques & traditional statistical techniques to drive optimal business decisions for various use cases like call-routing, revenue growth through initiatives like driving Walmart+ and other subscriptions. Building financial models leveraging survival modeling and other machine learning techniques to drive effective decisions across all base-management initiatives. Developing Customer Segmentation Models and response rate models to help in effective targeting of customers across different base-management initiatives. KPI forecasting leveraging time-series forecasting techniques to guide the business on metrics such as churn. Generating business insights by using various predictive modeling techniques. Analytics & Data Enablement: Synthesize predictive insights into concrete business actions and collaborate with teams across the value organization (Finance / Marketing / Operations / GTS / AI&D etc.) to drive successful business outcomes. Leverage understanding of customer behaviors, strategic thinking, strong financial acumen and cross-functional aptitude to build analytical solutions to make the best-informed decisions. Work on building either cloud-based predictive modeling solutions (AWS/GCP using Pyspark or similar tools) or general predictive solutions depending on the use case. Guide the build and enhancement of the Customer Data Platform with 360-degree view of customer and prospect profiles, interactions & demographics. Enable the team with analytical solutions to evaluate marketing campaign effectiveness and identify areas for optimization. Measure key performance indicators (KPIs) such as conversion rates, return on investment (ROI), and customer engagement metrics. Provide insights to refine targeting strategies, messaging, and channel allocation for future campaigns. What We’re Looking For – You’ll need to have: Bachelor’s degree or four or more years of work experience. Six or more years of relevant work experience. Predictive/prescriptive modeling experience. Experience in SQL, Python (/R) and data visualization tools (Tableau, Qlik, or native Python libraries such as Plotly or Streamlit). Experience with Microsoft Excel and PowerPoint. Experience managing large complex projects involving multiple work streams and stakeholders. Strong storytelling to drive recommendations based on the data analysis and articulate tradeoffs and communication plan to senior executives. Experience gathering, organizing and analyzing large amounts of information. Experience presenting ideas and content to a variety of stakeholders at various levels. Experience working collaboratively with a variety of stakeholders. Even better if you have any one of the following: Ability to translate complex ideas and express them in concise, simple to understand ways. Ability to take initiative, influence others and achieve results. Advanced research, analytical, and critical thinking skills with the ability to see things not readily apparent to others and to find unique solutions to complex challenges. Ability to present and interact with all levels of management.
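
As a small, hedged sketch of the experiment-measurement work described above (not the team's actual methodology), the snippet below computes a two-sided two-proportion z-test for a conversion-rate A/B test; the counts are invented for illustration.

```python
# Hedged sketch: two-proportion z-test for an A/B conversion experiment.
# The sample sizes and conversion counts below are invented numbers.
from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))        # two-sided test
    return p_a, p_b, z, p_value

p_a, p_b, z, p = two_proportion_ztest(conv_a=540, n_a=12000, conv_b=610, n_b=12000)
print(f"control={p_a:.3%} treatment={p_b:.3%} z={z:.2f} p={p:.4f}")
```
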
Where you’ll be working In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager. Scheduled Weekly Hours 40 Equal Employment Opportunity Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 3 days ago

Apply

0 years

0 Lacs

india

Remote

Role: Data Scientists / Data Visualization Experts (Python+ML) Location: Remote Experience: 7-10 Yrs Notice Period: Immediate to 15 Days About the Role: The project involves creating a dataset of visualizations for LLM training. Tasks include creating visualizations, replicating dashboards from existing images, insight evaluation, and prompt creation. We need lead-level candidates who are capable of writing production-level code. Required Skills: · Python (pandas, matplotlib, plotly, bokeh, dash) · Data wrangling · Data visualization · Data engineering · Analytical reasoning

Posted 3 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

We’re looking for a Software Engineer with a strong interest in Data Engineering and applied AI to join our growing technology team. This is an opportunity to work at the intersection of software development, machine learning, and financial data—delivering real-world impact from day one. You’ll develop tools, infrastructure, and applications that transform raw data into actionable insight. You'll collaborate closely with analysts, quants, and operations teams to build production-ready systems that drive decision-making across the firm. Roles & Responsibilities: Design and build robust, maintainable software systems to support data pipelines, analytics, and AI applications. Collaborate with domain experts to scope, build, and iterate on AI-driven tools tailored to business needs. Develop efficient data ingestion, transformation, and validation pipelines using Python, SQL, and modern libraries. Integrate large language models (LLMs) and generative AI APIs (e.g., OpenAI, HuggingFace) into internal tooling. Create clean, intuitive user interfaces and data visualizations with libraries like Plotly and Dash. Optimize database access and performance for large structured and semi-structured datasets. Ensure tools are scalable, secure, and aligned with end-user workflows and UX needs. Take projects from concept to production, including testing, documentation, deployment, and iteration. Qualifications: Strong programming skills in Python, with exposure to software engineering best practices (version control, testing, modular code). Proficiency with SQL and relational databases. Experience working with data pipelines, ETL workflows, and data wrangling techniques. Familiarity with ML frameworks such as PyTorch and data libraries like Pandas and NumPy. Exposure to generative AI or LLM APIs and frameworks (e.g., OpenAI, Cohere, HuggingFace Transformers, DSPy, LangChain). Understanding of fundamental machine learning and data science concepts. Strong written and verbal communication skills, especially in explaining complex technical topics to diverse stakeholders. User-focused mindset: you care about who’s using your tools and how to make them intuitive, reliable, and useful. Team-oriented: you work well with others, ask great questions, and contribute to a collaborative culture. Interested candidates can share their updated resume at vishnu.gadila@cesltd.com
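
The listing above mentions building interfaces with Plotly and Dash; the following is a minimal, self-contained Dash sketch (the data, labels, and metric are invented, not the firm's actual tooling) showing the typical callback pattern in which a dropdown filters a DataFrame and redraws a chart.

```python
# Minimal Dash sketch of an internal dashboard: a dropdown filters a small DataFrame
# and redraws a Plotly chart. The data, labels, and metric are invented for illustration.
import pandas as pd
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

df = pd.DataFrame({
    "desk":  ["Rates", "Rates", "Credit", "Credit"],
    "month": ["2024-01", "2024-02", "2024-01", "2024-02"],
    "pnl":   [1.2, 0.8, 0.5, 1.1],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(
        id="desk",
        options=[{"label": d, "value": d} for d in sorted(df["desk"].unique())],
        value="Rates",
    ),
    dcc.Graph(id="pnl-chart"),
])

@app.callback(Output("pnl-chart", "figure"), Input("desk", "value"))
def update_chart(desk):
    view = df[df["desk"] == desk]
    return px.line(view, x="month", y="pnl", title=f"{desk} P&L")

if __name__ == "__main__":
    app.run(debug=True)
```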

Posted 3 days ago

Apply

6.0 years

0 Lacs

mumbai, maharashtra, india

On-site

At Align Labs, we build intelligent systems, AI-powered tools, and data platforms that help ventures scale faster, smarter, and more efficiently. From real-time analytics to generative AI agents, we operate at the frontier of applied AI—reimagining how products are designed, deployed, and adopted across industries. We’re looking for a Data Scientist who’s excited to work at the intersection of data, machine learning, and product development. If you thrive on experimentation, love turning messy data into insights, and want to see your models power real products in the wild, we’d love to meet you. What you'll do – Build & Experiment: Develop machine learning models and data-driven features—recommendation systems, natural language processing (NLP), generative AI, and predictive analytics. Data Engineering & Analysis: Collect, clean, and analyze structured + unstructured data to uncover insights and support product decisions. Prototype to Production: Rapidly test ideas in notebooks, then work with engineers to productionize models using MLOps practices. Measure & Iterate: Design experiments (A/B tests, evaluation pipelines), define success metrics, and continuously improve model performance. Collaborate Cross-Functionally: Partner with product managers, designers, and engineers to ensure data and AI capabilities directly support real-world use cases. What we're looking for: 2–6 years of experience as a Data Scientist / ML Engineer (startup or product environment a plus). Strong programming skills in Python; comfort with libraries like Pandas, NumPy, Scikit-learn, PyTorch, TensorFlow, or Hugging Face. Experience with data pipelines and storage (SQL, NoSQL, or vector databases such as Pinecone, Weaviate, or FAISS). Understanding of retrieval-augmented generation (RAG) and generative AI techniques is a plus. Familiarity with MLOps, model deployment, and monitoring (Docker, Kubernetes, cloud platforms). Strong analytical and communication skills—able to turn data into insights and insights into product decisions. Experience with LLM evaluation frameworks or prompt engineering. Familiarity with data visualization (e.g., Plotly, D3.js, Tableau). Experience working in early-stage startups or venture building environments. Why work with us? Build real-world AI products used across multiple ventures, not just prototypes. Work in a fast-moving, collaborative, high-trust environment. Collaborate directly with founders, engineers, and creators. Help shape the future of applied AI in India and beyond. Send your LinkedIn profile, portfolio (GitHub, Kaggle, or projects), and a short note about the most exciting data problem you’ve solved to build@alignlabs.xyz.

Posted 4 days ago

Apply

8.0 years

2 - 9 Lacs

gurgaon

On-site

Project description: We are seeking a highly experienced Data Scientist with deep expertise in Python and advanced machine learning techniques. You will need a strong background in statistical analysis, big data platforms, and cloud integration, and you will be responsible for designing and deploying scalable data science solutions. Responsibilities: Develop and deploy machine learning, deep learning, and predictive models. Perform statistical analysis, data mining, and feature engineering on large datasets. Build and optimize data pipelines and ETL workflows. Collaborate with data engineers and business stakeholders to deliver actionable insights. Create compelling data visualizations using tools like Tableau, Power BI, Matplotlib, or Plotly. Implement MLOps practices, including CI/CD, model monitoring, and lifecycle management. Mentor junior data scientists and contribute to team knowledge-sharing. Stay current with trends in AI/ML and data science. Skills – Must have: 8+ years of hands-on experience in Data Science with strong expertise in Python and libraries such as Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, or PyTorch. Proven ability to design, develop, and deploy machine learning, deep learning, and predictive models to solve complex business problems. Strong background in statistical analysis, data mining, and feature engineering for large-scale structured and unstructured datasets. Experience working with big data platforms (Spark, Hadoop) and integrating with cloud environments (AWS, Azure, GCP). Proficiency in building data pipelines, ETL workflows, and collaborating with data engineers for scalable data solutions. Expertise in data visualization and storytelling using Tableau, Power BI, Matplotlib, Seaborn, or Plotly to present insights effectively. Strong knowledge of MLOps practices, including CI/CD pipelines, model deployment, monitoring, and lifecycle management. Ability to engage with business stakeholders, gather requirements, and deliver actionable insights aligned with business goals. Experience in mentoring junior data scientists/analysts, leading projects, and contributing to knowledge-sharing across teams. Continuous learner with strong problem-solving, communication, and leadership skills, staying updated with the latest trends in AI/ML and data science. Nice to have: N/A. Other Languages: English B2 Upper Intermediate. Seniority: Senior. Location: Gurugram, India. Req. VR-117421, Data Science, BCM Industry, 11/09/2025.

Posted 4 days ago

Apply

1.0 years

2 - 4 Lacs

india

On-site

Job Title: Data Scientist – Python Location: Bhopal (Onsite) Experience: 1 to 3 Years Employment Type: Full-time About Us: At Brandsmashers Tech , we’re passionate about leveraging data to transform industries and empower decision-making. We work across domains like e-commerce, healthcare, agriculture, and education, delivering scalable, data-driven solutions. Join us and be part of a team shaping the future with technology and insights. Job Summary: We are looking for a Data Scientist proficient in Python to join our analytics and AI team. The ideal candidate will have hands-on experience with data preprocessing, statistical analysis, machine learning, and data visualization. You’ll play a key role in deriving actionable insights from data and developing intelligent solutions that drive business value. Key Responsibilities: Analyze large datasets to discover trends, patterns, and insights. Develop, test, and deploy machine learning models using Python libraries (e.g., scikit-learn, TensorFlow, PyTorch). Perform data cleaning, transformation, and feature engineering. Design and implement predictive models and algorithms. Visualize data insights using tools like Matplotlib, Seaborn, Plotly, or Power BI. Collaborate with cross-functional teams including engineering, product, and marketing to integrate data science solutions. Communicate findings to stakeholders through presentations and reports. Stay updated with latest advancements in data science and machine learning. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related field. 2+ years of hands-on experience in data science or applied machine learning. Strong proficiency in Python and libraries like Pandas, NumPy, Scikit-learn, and Matplotlib. Experience with ML frameworks like TensorFlow, Keras, or PyTorch. Knowledge of SQL and data querying tools. Experience with model deployment is a plus (e.g., using Flask, FastAPI). Excellent problem-solving skills and analytical mindset. Strong communication and collaboration skills. Preferred Qualifications: Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of big data tools (Spark, Hadoop) is a plus. Understanding of MLOps and automated pipelines. Job Types: Full-time, Permanent Pay: ₹20,000.00 - ₹40,000.00 per month Benefits: Health insurance Provident Fund Ability to commute/relocate: Govindpura, Bhopal, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python: 3 years (Required) Data science: 2 years (Required) Language: English (Preferred) Work Location: In person

Posted 4 days ago

Apply

4.0 years

0 Lacs

gurugram, haryana, india

On-site

Job role: Data Scientist. Years of experience: 4 years. Location: On-site, Gurgaon. Notice Period: Immediate to 15 days. Responsibilities ● Analyze large datasets to identify trends, patterns, and insights that can drive business decisions. ● Develop, implement, and maintain predictive models and machine learning algorithms to solve complex business problems. ● Collaborate with cross-functional teams to understand their data needs and provide actionable insights. ● Design and implement data processing pipelines to collect, clean, and preprocess data for analysis. ● Communicate findings and insights through reports, dashboards, and presentations to stakeholders. ● Stay updated with the latest developments in data science, machine learning, and artificial intelligence. ● Ensure data integrity, accuracy, and security in all data science processes. What we need ● Proficiency in Python and its libraries such as Pandas, NumPy, Scikit-learn, TensorFlow, etc. ● Strong experience with data analysis, statistical modeling, and machine learning techniques. ● Experience in data visualization tools like Matplotlib, Seaborn, Plotly, or similar. ● Ability to work with large datasets and databases (SQL, NoSQL). ● Strong problem-solving skills and the ability to think analytically and critically. ● Excellent communication skills to effectively present findings and recommendations to non-technical stakeholders. ● Ability to work collaboratively in a team environment and manage multiple projects simultaneously. Qualifications ● Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. ● 4+ years of experience in data science, analytics, or a related field. ● Proven experience in the aviation industry is a plus. ● Strong proficiency in Python and its data science libraries. ● Experience with cloud platforms (AWS, Azure, GCP) is preferred. ● Excellent communication and presentation skills. ● Strong analytical mindset and attention to detail.

Posted 5 days ago

Apply

0.0 - 2.0 years

0 - 0 Lacs

govindpura, bhopal, madhya pradesh

On-site

Job Title: Data Scientist – Python Location: Bhopal (Onsite) Experience: 1 to 3 Years Employment Type: Full-time About Us: At Brandsmashers Tech , we’re passionate about leveraging data to transform industries and empower decision-making. We work across domains like e-commerce, healthcare, agriculture, and education, delivering scalable, data-driven solutions. Join us and be part of a team shaping the future with technology and insights. Job Summary: We are looking for a Data Scientist proficient in Python to join our analytics and AI team. The ideal candidate will have hands-on experience with data preprocessing, statistical analysis, machine learning, and data visualization. You’ll play a key role in deriving actionable insights from data and developing intelligent solutions that drive business value. Key Responsibilities: Analyze large datasets to discover trends, patterns, and insights. Develop, test, and deploy machine learning models using Python libraries (e.g., scikit-learn, TensorFlow, PyTorch). Perform data cleaning, transformation, and feature engineering. Design and implement predictive models and algorithms. Visualize data insights using tools like Matplotlib, Seaborn, Plotly, or Power BI. Collaborate with cross-functional teams including engineering, product, and marketing to integrate data science solutions. Communicate findings to stakeholders through presentations and reports. Stay updated with latest advancements in data science and machine learning. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related field. 2+ years of hands-on experience in data science or applied machine learning. Strong proficiency in Python and libraries like Pandas, NumPy, Scikit-learn, and Matplotlib. Experience with ML frameworks like TensorFlow, Keras, or PyTorch. Knowledge of SQL and data querying tools. Experience with model deployment is a plus (e.g., using Flask, FastAPI). Excellent problem-solving skills and analytical mindset. Strong communication and collaboration skills. Preferred Qualifications: Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of big data tools (Spark, Hadoop) is a plus. Understanding of MLOps and automated pipelines. Job Types: Full-time, Permanent Pay: ₹20,000.00 - ₹40,000.00 per month Benefits: Health insurance Provident Fund Ability to commute/relocate: Govindpura, Bhopal, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python: 3 years (Required) Data science: 2 years (Required) Language: English (Preferred) Work Location: In person

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As an AI/ML Developer, you will be responsible for utilizing programming languages such as Python for AI/ML development. Your proficiency in libraries like NumPy, Pandas for data manipulation, Matplotlib, Seaborn, Plotly for data visualization, and Scikit-learn for classical ML algorithms will be crucial. Familiarity with R, Java, or C++ is a plus, especially for performance-critical applications. Your role will involve building models using Machine Learning & Deep Learning Frameworks such as TensorFlow and Keras for deep learning, PyTorch for research-grade and production-ready models, and XGBoost, LightGBM, or CatBoost for gradient boosting. Understanding model training, validation, hyperparameter tuning, and evaluation metrics like ROC-AUC, F1-score, precision/recall will be essential. In the field of Natural Language Processing (NLP), you will work with text preprocessing techniques like tokenization, stemming, lemmatization, vectorization techniques such as TF-IDF, Word2Vec, GloVe, and Transformer-based models like BERT, GPT, T5 using Hugging Face Transformers. Experience with text classification, named entity recognition (NER), question answering, or chatbot development will be required. For Computer Vision (CV), your experience with image classification, object detection, segmentation, and libraries like OpenCV, Pillow, and Albumentations will be utilized. Proficiency in pretrained models (e.g., ResNet, YOLO, EfficientNet) and transfer learning is expected. You will also handle Data Engineering & Pipelines by building and managing data ingestion and preprocessing pipelines using tools like Apache Airflow, Luigi, Pandas, Dask. Experience with structured (CSV, SQL) and unstructured (text, images, audio) data will be beneficial. Furthermore, your role will involve Model Deployment & MLOps where you will deploy models as REST APIs using Flask, FastAPI, or Django, batch jobs, or real-time inference services. Familiarity with Docker for containerization, Kubernetes for orchestration, and MLflow, Kubeflow, or SageMaker for model tracking and lifecycle management will be necessary. In addition, your hands-on experience with at least one cloud provider such as AWS (S3, EC2, SageMaker, Lambda), Google Cloud (Vertex AI, BigQuery, Cloud Functions), or Azure (Machine Learning Studio, Blob Storage) will be required. Understanding cloud storage, compute services, and cost optimization is essential. Your proficiency in SQL for querying relational databases (e.g., PostgreSQL, MySQL), NoSQL databases (e.g., MongoDB, Cassandra), and familiarity with big data tools like Apache Spark, Hadoop, or Databricks will be valuable. Experience with Git and platforms like GitHub, GitLab, or Bitbucket will be essential for Version Control & Collaboration. Familiarity with Agile/Scrum methodologies and tools like JIRA, Trello, or Asana will also be beneficial. Moreover, you will be responsible for writing unit tests and integration tests for ML code and using tools like pytest, unittest, and debuggers to ensure the quality of the code. This position is full-time and permanent, with benefits including Provident Fund and a work-from-home option. The work location is in person.
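
For the classical NLP portion named above (TF-IDF vectorisation feeding a linear classifier, evaluated with precision/recall/F1), a minimal scikit-learn sketch might look like the following; the toy texts and labels are invented purely for illustration and stand in for a real labelled dataset.

```python
# Hedged sketch: TF-IDF features plus logistic regression for text classification,
# evaluated with precision/recall/F1. The toy texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

texts = ["refund not received", "great service, thanks", "app keeps crashing",
         "love the new update", "payment failed twice", "support was very helpful"]
labels = ["complaint", "praise", "complaint", "praise", "complaint", "praise"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=0, stratify=labels)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # precision / recall / F1
```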

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

As a Senior Scientist specializing in Process Modeling and Hybrid Modeling within the Manufacturing Intelligence (MI) team of Pfizer's Global Technology & Engineering (GT&E), your role will involve the development and implementation of advanced analytics solutions such as AI/ML, soft sensor, advanced process control, and process condition monitoring in support of manufacturing operations at Pfizer Global Supply (PGS). Responsibilities: - Make technical contributions to high-impact projects requiring expertise in data analytics, advanced modeling, and optimization. - Identify valuable opportunities for applying Advanced Analytics, Advanced Process Control (APC), Artificial Intelligence (AI), Machine Learning (ML), and Industrial Internet of Things (IIoT) in manufacturing settings. - Develop mathematical and machine learning models and assist in the GMP implementation of analytics solutions. - Utilize engineering principles and modeling tools to improve process understanding and enable real-time process monitoring and control. - Collaborate with cross-functional teams, communicate progress effectively to management, and drive project advancement. Basic Qualifications: - Preferably a B.Tech in Computer Science. - Must possess expert-level knowledge in Python; familiarity with R, Matlab, or JavaScript is a plus. - Proficiency in performing data engineering on large, real-world datasets. - Demonstrated experience in applying data science and machine learning methodologies to drive insights and decision-making. - Ability to work collaboratively in diverse teams and communicate effectively with technical and non-technical stakeholders. - Knowledge of Biopharmaceutical Manufacturing processes and experience in technical storytelling. Preferred Qualifications: - Expertise in first principles such as thermodynamics, reaction modeling, and hybrid modeling. - Experience with Interpretable Machine Learning or Explainable AI and familiarity with Shapley values. - Proficiency in cloud-based development environments like AWS SageMaker and data warehouses like Snowflake or Redshift. - Understanding of feedback control algorithms, real-time communication protocols, and industrial automation platforms. - Experience in data visualization tools like Streamlit, Plotly, or Spotfire. - Knowledge of Cell Culture, Fermentation, and Vaccines Conjugation. This role offers a flexible work location assignment, including the option to work remotely. Pfizer is an equal opportunity employer, adhering to all applicable equal employment opportunity legislation across jurisdictions.

Posted 5 days ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

Role Overview: Enphase Energy is seeking a Quality Analyst to join its dynamic team in designing and developing next-gen energy technologies. As the Quality Analyst, your main responsibility will be to sample the right gateways from 4 million sites, create batches, monitor field upgrades, analyze product quality data, and interpret the data throughout the product lifecycle. You will work closely with various stakeholders to provide detailed analysis results. Key Responsibilities: - Understand Enphase products and factory releases to develop and maintain end-to-end field update data and visualization models. - Collaborate with cross-functional teams to understand key software releases, compatible hardware, and gaps to create data models for successful field upgrades. - Monitor field upgrades using data mining techniques and analytics tools like SQL, Excel, Power BI, and automate processes to identify and alert teams of any issues resulting from upgrades. - Automate generation and monitoring of metrics related to deployment for pre-/post-upgrade analysis using Python or other scripting languages. - Identify opportunities for leveraging data to drive product/process improvements by working with stakeholders across the organization. - Troubleshoot field upgrade issues, escalate to relevant teams, and drive corrective actions by modifying data models and implementing quality control measures. Qualifications Required: - Bachelor's degree in a quantitative field such as Computer Science, Data Science, Electrical Engineering, or a related discipline, with 10+ years of experience working with global stakeholders. - Proven experience as a Data Analyst or in a similar role, preferably in the energy or solar industry. - Strong proficiency in Python programming for data analysis, including libraries like Pandas, NumPy, Scikit-learn, Plotly, Seaborn, or Matplotlib. - Solid understanding of statistical concepts and experience with statistical analysis tools. - Experience with data warehousing concepts and proficiency in SQL for data extraction and manipulation. - Proficient in data visualization tools such as Excel, Incorta, Tableau, Power BI. - Excellent analytical, critical thinking, and problem-solving skills with the ability to translate complex data into actionable insights. - Strong attention to detail and commitment to delivering high-quality work within tight deadlines. - Excellent communication and presentation skills to effectively convey technical concepts to non-technical stakeholders. - Proactive and self-motivated with a passion for data analysis and the solar market.

Posted 5 days ago

Apply

4.0 - 10.0 years

0 Lacs

chandigarh

On-site

As a Full Stack Developer (Custom AI Applications) at iCuerious, you will have the opportunity to design and develop interactive applications, AI-backed dashboards, and customized AI agents for specific business use cases. Your role will involve integrating AI/ML frameworks into production-grade applications, building scalable APIs and backend services, and ensuring seamless data integration between frontend, backend, and AI systems. Additionally, you will apply data analytics techniques to extract actionable insights and collaborate with business teams to translate client-facing needs into functional applications. Optimization of applications for high performance, scalability, and security will also be a key responsibility, along with contributing to architecture design and best practices. Key Responsibilities: - Design, develop, and deploy AI-powered dashboards and applications tailored to diverse use cases. - Integrate AI/ML frameworks (LangChain, Rasa, LlamaIndex, Hugging Face, etc.) into production-grade applications. - Build scalable APIs and backend services to support frontend and AI functionalities. - Ensure seamless data integration and flow between frontend, backend, and AI systems. - Apply data analytics techniques to extract actionable insights, visualize trends, and improve AI-driven decision-making. - Collaborate with business teams to understand client-facing needs and translate them into functional applications. - Optimize applications for high performance, scalability, and security. - Contribute to architecture design, code reviews, and best practices. Required Skills & Qualifications: - 4-10 years of experience in full stack development and/or AI/ML application development. - Proficiency in Python (preferred) and JavaScript/TypeScript (Node.js, React, or Next.js). - Strong understanding of frontend frameworks (React, Next.js, Angular, or Vue). - Hands-on experience with AI bot frameworks (Rasa, LangChain, Haystack, Dialogflow). - Familiarity with NLP, LLMs (OpenAI, HuggingFace, Anthropic) and integrating them into applications. - Strong knowledge of Data Analytics - data processing, visualization, and reporting (Pandas, NumPy, Plotly, Power BI, or Tableau). - Experience with databases (SQL/NoSQL) and API development. - Exposure to cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes). Bonus Skills: - Experience in building SaaS, RaaS (Research-as-a-Service), dashboards, or enterprise-grade applications. - UI/UX design capabilities for intuitive AI and data-driven applications. - Understanding of IP/legal domain data (preferred but not mandatory). What We Offer: - Opportunity to build intelligent AI applications and data analytics from the ground up. - Exposure to cutting-edge AI and automation technologies. - A collaborative, entrepreneurial environment where your work has a direct impact. - Competitive compensation with strong growth prospects as the division expands. Interested candidates are invited to apply by sending their resume to aditi.gupta@icuerious.com and hr@icuerious.com, or by applying directly via this job post. Shortlisted candidates will be contacted for the next stage of the hiring process.

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

Role Overview: You will be a data scientist responsible for developing and deploying impactful data solutions using your expertise in machine learning and statistical analysis. Your role will involve designing and implementing predictive models, collaborating with stakeholders, and driving strategic decisions through experimentation and data-driven insights.

Key Responsibilities:
- Design and implement robust machine learning solutions including regression, classification, NLP, time series, and recommendation systems.
- Evaluate and tune models using appropriate metrics such as AUC, RMSE, and precision/recall.
- Work on feature engineering, model interpretability, and performance optimization.
- Partner with stakeholders to identify key opportunities where data science can drive business value.
- Translate business problems into data science projects with clearly defined deliverables and success metrics.
- Provide actionable recommendations based on data analysis and model outputs.
- Conduct deep-dive exploratory analysis to uncover trends and insights.
- Apply statistical methods to test hypotheses, forecast trends, and measure campaign effectiveness.
- Design and analyze A/B tests and other experiments to support product and marketing decisions.
- Automate data pipelines and dashboards for ongoing monitoring of model and business performance.

Qualifications Required:
- Proficiency in Python and MySQL.
- Experience with libraries, frameworks, and techniques such as scikit-learn, Pandas, NumPy, time series (ARIMA), Bayesian models, market mix models, regression, XGBoost, LightGBM, TensorFlow, and PyTorch.
- Proficiency in statistical techniques such as hypothesis testing, regression analysis, and time-series analysis.
- Familiarity with databases such as PostgreSQL, BigQuery, and MySQL.
- Knowledge of visualization tools such as Plotly, Seaborn, and Matplotlib.
- Experience with GitLab and CI/CD pipelines is a plus.
- Familiarity with cloud platforms such as AWS, GCP, or Azure is also advantageous.

No additional details about the company are provided in the job description.
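The A/B-test analysis responsibility listed above can be sketched with a standard two-proportion z-test from statsmodels. The conversion counts below are invented purely for illustration and carry no relation to any real campaign.

```python
# Illustrative sketch: analysing a simple A/B test with a two-proportion z-test.
# The counts are made-up example data, not real campaign results.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 480]          # converted users in control / variant
observations = [10_000, 10_000]   # users exposed in control / variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=observations)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected.")
```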

Posted 5 days ago

Apply

0 years

0 Lacs

navi mumbai, maharashtra, india

On-site

Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary: Senior Data Scientist (London)

Who is Mastercard? As a global technology company, our mission at Mastercard is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.

Overview: Our International, Open Banking, Product Value Proposition Team is looking for a Senior Data Scientist to develop and drive forward Mastercard's ambitious, data-driven open banking solutions through the skillful application of data science and a highly customer-centric focus. The ideal candidate is motivated, intellectually curious, technically excellent, a great communicator, and someone who will enjoy helping us build out our data science team and capabilities.

Role: As an individual contributor working within a growing data science team, you will take responsibility for developing market-leading, innovative, analytical open banking solutions. Focused in the first instance on affordability/credit decisioning and identity/income verification use cases, you will help empower consumers and drive value creation across our client base.

Specifically, in this position, you will:
- Apply a range of problem-appropriate data science techniques to large data sets, from development to deployment support, in pursuit of highly valued, market-leading solutions and insights.
- Work closely with data engineers and developers to build and deploy interactive dashboards, providing the best, most engaging insights and UX for our clients.
- Communicate effectively with clients and stakeholders, ensuring their requirements are fully understood and met.
- Conduct effective customer trials to grow our open banking impact, supporting data specification, data processing/analysis, and result generation/presentation.
- Be highly proactive in pursuit of product excellence, for example by investigating and proposing new data sources, encouraging cross-team working, managing projects to agreed schedules, and looking to utilise new tools and techniques.
- Help to develop, implement, and honour effective, engaging team methods that support rapid prototyping, reproducibility, productivity, automation, and appropriate data governance.
- Engage with the wider Mastercard data science community, sharing best practice, knowledge, and insights in support of collaborative, fulfilling work and value creation.

All About You
To succeed in this role, you will have:
- An undergraduate degree or higher in Computer Science, Data Science, Econometrics, Mathematics, Statistics, or a similar field of study.
- Multi-project, hands-on experience of the end-to-end data science process in relation to large, complex data. From problem framing to results communication and solution deployment, you will be able to demonstrate having played a key part in a range of successfully delivered projects.
- Real-world experience of developing and deploying interactive dashboards based on Plotly's Dash framework.
- Workplace Python coding experience, including a good knowledge of the principal Python Data Science / Machine Learning (ML) library ecosystem.
- Excellent written and oral communication skills for both technical and non-technical audiences.

To succeed in this role, you will be a highly engaged individual, evidenced through specific examples of collaboration, effective teamwork, successful independent work, and continued professional development.

Additionally, the ideal candidate can demonstrate:
- Commercial experience of successfully utilizing time series and natural language processing (NLP) methods, especially in relation to topic modelling and named entity recognition. A good working knowledge of supervised and unsupervised techniques is presumed.
- Practical knowledge/experience of solution deployment (data science pipelines, MLOps frameworks and libraries, etc.).
- Experience of working in financial services with respect to consumer and/or business lending.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, every person working for, or on behalf of, Mastercard is responsible for information security and must:
- Abide by Mastercard's security policies and practices;
- Ensure the confidentiality and integrity of the information being accessed;
- Report any suspected information security violation or breach; and
- Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.
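Since the posting specifically calls out Plotly's Dash framework for interactive dashboards, here is a minimal Dash sketch. The gapminder sample dataset and the chart are placeholder examples only, not part of any Mastercard product; the code assumes Dash 2.x (older versions use app.run_server instead of app.run).

```python
# Minimal Plotly Dash sketch: one interactive line chart served as a web app.
# Sample gapminder data is used purely as placeholder content.
import plotly.express as px
from dash import Dash, dcc, html

df = px.data.gapminder().query("country == 'India'")
fig = px.line(df, x="year", y="gdpPercap", title="GDP per capita (sample data)")

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Demo dashboard"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)  # serves on http://127.0.0.1:8050 by default (Dash 2.x)
```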

Posted 5 days ago

Apply

0 years

0 Lacs

delhi, india

On-site

Description: As a Data Scientist at Encardio, you will analyze complex time-series data from devices such as accelerometers, strain gauges, and tilt meters. Your responsibilities will span data preprocessing, feature engineering, machine learning model development, and integration with real-time systems. You'll collaborate closely with engineers and domain experts to translate physical behaviours into actionable insights. This role is ideal for someone with strong statistical skills, experience in time-series modeling, and a desire to understand the real-world impact of their models in civil and industrial monitoring.

Responsibilities

Sensor Data Understanding & Preprocessing
- Clean, denoise, and preprocess high-frequency time-series data from edge devices.
- Handle missing, corrupted, or delayed telemetry from IoT sources.
- Develop domain knowledge of physical sensors and their behaviour (e.g., vibration patterns, strain profiles).

Exploratory & Statistical Analysis
- Perform statistical and exploratory data analysis (EDA) on structured/unstructured sensor data.
- Identify anomalies, patterns, and correlations in multi-sensor environments.

Feature Engineering
- Generate meaningful time-domain and frequency-domain features (e.g., FFT, wavelets).
- Implement scalable feature extraction pipelines.

Model Development
- Build and validate ML models for anomaly detection (e.g., vibration spikes), event classification (e.g., tilt angle breaches), and predictive maintenance (e.g., time-to-failure).
- Leverage traditional ML, deep learning, and LLMs.

Deployment & Integration
- Work with Data Engineers to integrate models into real-time data pipelines and edge/cloud platforms.
- Package and containerize models (e.g., with Docker) for scalable deployment.

Monitoring & Feedback
- Track model performance post-deployment and retrain/update as needed.
- Design feedback loops using human-in-the-loop or rule-based corrections.

Collaboration & Communication
- Collaborate with hardware, firmware, and data engineering teams.
- Translate physical phenomena into data problems and insights.
- Document approaches, models, and assumptions for reproducibility.

🎯 Key Deliverables
- Reusable preprocessing and feature extraction modules for sensor data.
- Accurate and explainable ML models for anomaly/event detection.
- Model deployment artifacts (Docker images, APIs) for cloud or edge execution.
- Jupyter notebooks and dashboards (Streamlit) for diagnostics, visualization, and insight generation.
- Model monitoring reports and performance metrics with retraining pipelines.
- Domain-specific data dictionaries and technical knowledge bases.
- Contribution to internal documentation and research discussions.
- Deep understanding and documentation of sensor behaviour and characteristics.

🔧 Technologies

Languages & Libraries
- Python (NumPy, Pandas, SciPy, Scikit-learn, PyTorch/TensorFlow)
- Bash (for data ops & batch jobs)

Signal Processing & Feature Extraction
- FFT, DWT, STFT (via SciPy, Librosa, tsfresh)
- Time-series modeling (sktime, statsmodels, Prophet)

Machine Learning & Deep Learning
- Scikit-learn (traditional ML)
- PyTorch / TensorFlow / Keras (deep learning)
- XGBoost / LightGBM (tabular modeling)

Data Analysis & Visualization
- Jupyter, Matplotlib, Seaborn, Plotly, Grafana (for dashboards)

Model Deployment
- Docker (for containerizing ML models)
- FastAPI / Flask (for ML inference APIs)
- GitHub Actions (CI/CD for models)
- ONNX / TorchScript (for lightweight deployment)

Data Engineering Integration
- Kafka (real-time data ingestion)
- S3 (model/data storage)
- Trino / Athena (querying raw and processed data)
- Argo Workflows / Airflow (model training pipelines)

Monitoring & Observability
- Prometheus / Grafana (model & system monitoring)
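The frequency-domain feature engineering this posting describes (FFT-based features from vibration-like signals) can be sketched in a few lines of SciPy. The 1 kHz sampling rate, the synthetic 50 Hz signal, and the chosen features are illustrative assumptions, not Encardio's actual pipeline.

```python
# Illustrative sketch: simple frequency-domain features from a vibration-like signal.
# The synthetic 50 Hz tone and 1 kHz sampling rate are assumptions for the example.
import numpy as np
from scipy.fft import rfft, rfftfreq

fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)   # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(rfft(signal))
freqs = rfftfreq(t.size, d=1 / fs)

features = {
    "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
    "spectral_energy": float(np.sum(spectrum ** 2)),
    "rms": float(np.sqrt(np.mean(signal ** 2))),
}
print(features)  # dominant frequency should land near 50 Hz
```

Features like these would typically be computed per time window and fed into the anomaly-detection or classification models the posting mentions.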

Posted 5 days ago

Apply

4.0 - 14.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Description

Key Responsibilities:
- Design and develop RESTful APIs using FastAPI or Flask.
- Build and optimize server-side applications and RESTful APIs using frameworks like FastAPI, Flask, or Django.
- Integrate front-end components with server-side logic.
- Collaborate with data scientists and engineers to implement data pipelines using Pandas, NumPy, and Scikit-learn.
- Design, develop, and maintain efficient, reusable, and reliable Python code.
- Automate data processing, querying, and analysis using Pandas, NumPy, Matplotlib, and Plotly.
- Integrate and manage SQL and NoSQL databases (e.g., MongoDB, CRDB, PostgreSQL).
- Build and maintain CI/CD pipelines for continuous integration and deployment.
- Collaborate with cross-functional teams to implement cloud-native solutions on Microsoft Azure.
- Develop intelligent agents and chatbots using platforms like ChatGPT, Microsoft Copilot, and Copilot Studio.
- Implement middleware orchestration using tools like MuleSoft and Microsoft BizTalk.
- Apply object-oriented design principles and design patterns to build scalable, maintainable systems.
- Contribute to system integration and interface development across cloud and enterprise platforms.
- Participate in code reviews, testing, and performance optimization.

Required Skills & Experience (grade specific):
- Strong proficiency in Python and data manipulation libraries.
- Experience with FastAPI or Flask for API development.
- Solid understanding of RESTful APIs and web services.
- Proficiency in SQL and NoSQL database technologies.
- Familiarity with CI/CD tools and DevOps workflows.
- Knowledge of object-oriented programming (OOP), OOS, and OOD.
- Strong grasp of data structures, algorithms, and problem-solving techniques.
- Excellent interpersonal and communication skills with a team-oriented mindset.
- Exposure to cloud computing, AI, machine learning, and low-code platforms is a plus.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Mathematics, or a related field.
- 4-14 years of experience in software development, automation, and API engineering.
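To illustrate the "automate data processing and analysis using Pandas and Plotly" responsibility above, here is a small sketch that turns a CSV into a self-contained HTML report. The file name and column names ("orders.csv", "order_date", "amount") are invented for the example.

```python
# Illustrative sketch: load tabular data with Pandas and write an automated Plotly report.
# File and column names are assumptions for the example only.
import pandas as pd
import plotly.express as px

def build_monthly_report(csv_path: str, out_html: str = "report.html") -> None:
    df = pd.read_csv(csv_path, parse_dates=["order_date"])
    monthly = (
        df.set_index("order_date")
        .resample("MS")["amount"]   # monthly sums, labelled by month start
        .sum()
        .reset_index()
    )
    fig = px.bar(monthly, x="order_date", y="amount", title="Monthly order volume")
    fig.write_html(out_html)  # self-contained HTML that can be emailed or hosted

if __name__ == "__main__":
    build_monthly_report("orders.csv")
```

A job like this can be wired into a CI/CD pipeline or scheduler so the report regenerates whenever new data lands.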

Posted 5 days ago

Apply

3.0 years

0 Lacs

hyderabad, telangana, india

On-site

Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: These roles have many overlapping skills with GenAI engineers and architects. The description may scale up or down based on expected seniority.

Roles & Responsibilities:
- Implement generative AI models and identify insights that can be used to drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test those hypotheses with data, collaborating with cross-functional teams to define AI project requirements and objectives and ensure alignment with overall business goals.
- Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
- Optimize existing generative AI models for improved performance, scalability, and efficiency.
- Ensure data quality and accuracy.
- Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
- Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
- Determine the most effective prompt generation processes and approaches to drive innovation and excellence in AI technology, collaborating with AI researchers and developers.
- Experience working with cloud-based platforms (for example, AWS, Azure, or related).
- Strong problem-solving and analytical skills.
- Proficiency in handling various data formats and sources through omni-channel speech and voice applications, as part of conversational AI.
- Prior statistical modelling experience.
- Demonstrable experience with deep learning algorithms and neural networks.
- Develop clear and concise documentation, including technical specifications, user guides, and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
- Contribute to the establishment of best practices and standards for generative AI development within the organization.

Professional & Technical Skills:
- Solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAEs, and GANs.
- Proficiency in Python and experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of data structures, algorithms, and software engineering principles.
- Familiarity with cloud-based platforms and services such as AWS, GCP, or Azure.
- Experience with natural language processing (NLP) techniques and tools such as SpaCy, NLTK, or Hugging Face.
- Familiarity with data visualization tools and libraries such as Matplotlib, Seaborn, or Plotly.
- Knowledge of software development methodologies such as Agile or Scrum.
- Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.

Additional Information:
- A degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field is required; a Ph.D. is highly desirable.
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.
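The prompt-engineering work described above often starts with a reusable few-shot template. The sketch below is a hypothetical example: the task, labels, and example texts are invented, and the rendered prompt would be sent to whichever LLM endpoint the project actually uses.

```python
# Illustrative sketch: a reusable few-shot prompt template for a classification task.
# Task, labels, and examples are hypothetical; plug in your own model client to run it end to end.
from dataclasses import dataclass

@dataclass
class FewShotPrompt:
    instruction: str
    examples: list[tuple[str, str]]  # (input text, expected label)

    def render(self, query: str) -> str:
        shots = "\n".join(f"Text: {x}\nLabel: {y}" for x, y in self.examples)
        return f"{self.instruction}\n\n{shots}\n\nText: {query}\nLabel:"

prompt = FewShotPrompt(
    instruction="Classify the sentiment of each text as positive or negative.",
    examples=[
        ("The upgrade went smoothly and support was great.", "positive"),
        ("The dashboard keeps crashing after the update.", "negative"),
    ],
)
print(prompt.render("Setup was quick but the documentation is confusing."))
```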

Posted 6 days ago

Apply

0 years

0 Lacs

hyderabad, telangana, india

On-site

Company Description: Blend is building a scalable Media Mix Optimization (MMO) solution designed to help clients maximize the impact of their marketing investments. We are seeking a Data Scientist with strong expertise in media mix modeling, statistical modeling, and interactive application development to join our advanced analytics team. This role goes beyond model building: you will design, implement, and productionize end-to-end solutions that integrate statistical rigor with business impact. The ideal candidate will have deep knowledge of marketing analytics, advanced Python skills, and hands-on experience with Streamlit or similar frameworks for interactive data applications. You will be central in creating robust pipelines, experimentation frameworks, and client-facing tools that directly inform media allocation decisions.

Job Description: Our MMO platform is an in-house initiative designed to empower clients with data-driven decision-making in marketing strategy. By applying Bayesian and frequentist approaches to media mix modeling, we are able to quantify channel-level ROI, measure incrementality, and simulate outcomes under varying spend scenarios.

Key components of the project include:
- Data Integration: Combining client first-party, third-party, and campaign-level data across digital, offline, and emerging channels into a unified modeling framework.
- Model Development: Building and validating media mix models (MMM) using advanced statistical and machine learning techniques such as hierarchical Bayesian regression, regularized regression (Ridge/Lasso), and time-series modeling.
- Scenario Simulation: Enabling stakeholders to forecast outcomes under different budget allocations through simulation and optimization algorithms.
- Deployment & Visualization: Using Streamlit to build interactive, client-facing dashboards for model exploration, scenario planning, and actionable recommendation delivery.
- Scalability: Engineering the system to support multiple clients across industries with varying data volumes, refresh cycles, and modeling complexities.

Responsibilities:
- Develop, validate, and maintain media mix models to evaluate cross-channel marketing effectiveness and return on investment.
- Engineer and optimize end-to-end data pipelines for ingesting, cleaning, and structuring large, heterogeneous datasets from multiple marketing and business sources.
- Design, build, and deploy Streamlit-based interactive dashboards and applications for scenario testing, optimization, and reporting.
- Conduct exploratory data analysis (EDA) and advanced feature engineering to identify drivers of performance.
- Apply Bayesian methods, regularization, and time-series analysis to improve model accuracy, stability, and interpretability.
- Implement optimization and scenario-planning algorithms to recommend budget allocation strategies that maximize business outcomes.
- Collaborate closely with product, engineering, and client teams to align technical solutions with business objectives.
- Present insights and recommendations to senior stakeholders in both technical and non-technical language.
- Stay current with emerging tools, techniques, and best practices in media mix modeling, causal inference, and marketing science.

Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Applied Mathematics, or a related field.
- Proven hands-on experience in media mix modeling, marketing analytics, or econometrics.
- Strong proficiency in Python and key data science libraries (pandas, NumPy, scikit-learn, statsmodels, PyMC or similar Bayesian frameworks).
- Experience with Streamlit or equivalent frameworks (Dash, Shiny) for building data-driven applications.
- Proficiency in SQL for querying, joining, and optimizing large-scale datasets.
- Solid foundation in statistical modeling, regression techniques, and machine learning.
- Strong problem-solving skills with the ability to structure ambiguous business problems into data-driven solutions.
- Excellent verbal and written communication skills to translate technical outputs into business decisions.

Preferred Qualifications:
- Experience with Bayesian hierarchical models, time-series decomposition, and marketing attribution approaches.
- Familiarity with cloud-based platforms (AWS, GCP, Azure) for data processing, model training, and deployment.
- Experience with data visualization tools beyond Streamlit (Tableau, Power BI, D3.js, Plotly).
- Exposure to big data ecosystems (Spark, Hadoop) for large-scale data processing.
- Knowledge of causal inference techniques (propensity scoring, uplift modeling, geo-experiments).
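A common building block of the media mix modeling described above is an adstock (carry-over) transformation of channel spend followed by regularized regression. The sketch below uses synthetic data: the channel names, decay rates, and coefficients are illustrative assumptions, not Blend's actual model.

```python
# Illustrative MMM sketch: geometric adstock transform + Ridge regression on synthetic spend data.
# Channel names, decay rates, and data are invented for the example.
import numpy as np
from sklearn.linear_model import Ridge

def adstock(spend: np.ndarray, decay: float = 0.5) -> np.ndarray:
    """Geometric adstock: today's effect = today's spend + decay * yesterday's carried-over effect."""
    carried = np.zeros_like(spend, dtype=float)
    for t in range(len(spend)):
        carried[t] = spend[t] + (decay * carried[t - 1] if t > 0 else 0.0)
    return carried

rng = np.random.default_rng(42)
weeks = 104
tv = rng.gamma(2.0, 50, weeks)        # synthetic weekly TV spend
search = rng.gamma(2.0, 30, weeks)    # synthetic weekly search spend
X = np.column_stack([adstock(tv, 0.6), adstock(search, 0.3)])
sales = 500 + 0.8 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 25, weeks)

model = Ridge(alpha=1.0).fit(X, sales)
print(dict(zip(["tv", "search"], model.coef_.round(2))))  # recovered channel effects
```

In a production MMM these effects would typically come from a hierarchical Bayesian model (e.g., PyMC) with saturation curves, and a Streamlit front end would let stakeholders vary spend scenarios against the fitted model.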

Posted 6 days ago

Apply