
406 Plotly Jobs - Page 5

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Engineer / Junior Data Scientist
Location: Bangalore / Pune
Experience: 0-5 years
Employment Type: Full-Time
Salary: 5-15 LPA (based on experience and skill set)

About The Role
We are looking for a passionate and driven AI/ML Engineer or Junior Data Scientist to join our growing analytics and product team. You'll work closely with senior data scientists, engineers, and business stakeholders to build scalable AI/ML solutions, extract insights from complex datasets, and develop models that improve real-world decision-making. Whether you're a fresher with solid projects or a professional with up to 5 years of experience, if you're enthusiastic about AI/ML and data science, we want to hear from you!

Key Responsibilities
- Collect, clean, preprocess, and analyze structured and unstructured data from multiple sources.
- Design, implement, and evaluate machine learning models for classification, regression, clustering, NLP, or recommendation systems.
- Collaborate with data engineers to deploy models in production (using Python, APIs, or cloud services like AWS/GCP).
- Visualize results and present actionable insights through dashboards, reports, and presentations.
- Conduct experiments, hypothesis testing, and A/B tests to optimize models and business outcomes.
- Develop scripts and reusable tools for automation and scalability of ML pipelines.
- Stay updated with the latest research papers, open-source tools, and trends in AI/ML.

Required Skills & Qualifications
- Bachelor's/Master's degree in Computer Science, Data Science, Mathematics, Statistics, or related fields.
- Strong Python programming skills with experience in libraries like NumPy, Pandas, Scikit-learn, TensorFlow, or PyTorch.
- Proficiency in data analysis and visualization (using tools like Matplotlib, Seaborn, Plotly, or Power BI/Tableau).
- Solid understanding of ML algorithms (linear regression, decision trees, random forests, SVMs, neural networks).
- Experience with SQL and working with large datasets.
- Exposure to cloud platforms (AWS, GCP, or Azure) and APIs is a plus.
- Knowledge of NLP, computer vision, or generative AI models is desirable.
- Strong problem-solving skills, attention to detail, and ability to work in agile teams.

Good To Have (Bonus Points)
- Experience in the end-to-end ML model lifecycle (development to deployment).
- Experience with MLOps tools like MLflow, Docker, or CI/CD.
- Participation in Kaggle competitions or open-source contributions.
- Certifications in Data Science, AI/ML, or Cloud Platforms.

What We Offer
- A dynamic and collaborative work environment.
- Opportunities to work on cutting-edge AI projects.
- Competitive salary and growth path.
- Training, mentorship, and access to tools and resources.
- Flexible work culture and supportive teams.
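For a sense of the hands-on level expected, here is a minimal sketch of the train-and-evaluate classification workflow the posting describes; the dataset and model choice are illustrative assumptions, not part of the listing:

```python
# Minimal sketch: train and evaluate a classifier of the kind this role describes.
# The dataset and model choice are illustrative stand-ins for real project data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)  # stand-in for real project data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data, as "evaluate machine learning models" implies
print(classification_report(y_test, model.predict(X_test)))
```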

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Scientist at our organization, you will apply your expertise in artificial intelligence, using machine learning, data mining, and information retrieval to design, prototype, and build advanced analytics engines and services for next-generation applications. You will collaborate with business partners to define technical problem statements and hypotheses, develop analytical models that align with business decisions, and integrate them into data products or tools with a cross-functional team.

Your responsibilities will include:
- Collaborating with business partners to devise innovative solutions using cutting-edge techniques and tools
- Effectively communicating the analytics approach to address objectives
- Advocating for data-driven decision-making and emphasizing the importance of problem-solving methodologies
- Leading analytic approaches and integrating them into applications with data engineers, business leads, analysts, and developers
- Creating scalable, dynamic, and interpretable models for analytic data products
- Engineering features by leveraging internal and external data sources
- Sharing your passion for Data Science within the enterprise community and contributing to the development of processes, frameworks, and standards
- Collaborating, coaching, and learning with a team of experienced Data Scientists
- Staying updated with the latest trends and ideas in the field through conferences and community engagements

Desired Skills:
- Bachelor's degree required; MS or PhD preferred
- Bachelor's in Data Science, Computer Science, Engineering, or Statistics with 5+ years of experience, OR a graduate degree in a quantitative discipline with demonstrated Data Science skills and 2+ years of work experience
- Proficiency in Python for working with DataFrames
- Proficiency in writing complex SQL queries
- Experience with Machine Learning for clustering, classification, regression, anomaly detection, simulation, and optimization on large datasets
- Ability to merge and transform disparate internal and external data sets to create new features
- Experience with Big Data technologies such as Spark, Cloud AI platforms, and containerization
- Experience in supporting deployment, monitoring, maintenance, and enhancement of models
- Familiarity with data visualization tools like Tableau, Plotly, etc.

Digital Experience/Individual Skills:
- Excellent communication and collaboration skills to understand business needs and deliver solutions
- Proven ability to prioritize tasks and manage time effectively to achieve outstanding results
- Efficient learning capability to tackle new business domains and problems
- Critical thinking and skepticism regarding the validity and biases of data
- Intellectual curiosity and humility to leverage expertise and collaborate effectively
- Strong organizational skills with attention to detail and accuracy
- Ability to mentor, coach, and work closely with business partners, analysts, and team members.
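As a small illustration of the "merge and transform disparate data sets to create new features" requirement, a pandas sketch with hypothetical tables and columns:

```python
# Minimal sketch: joining two sources to engineer features, as the role describes.
# Table names and columns are hypothetical, not from the posting.
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [120.0, 80.0, 45.0, 300.0],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["south", "north", "west"],
})

# Aggregate an internal source, then join an external one to create features
features = (
    orders.groupby("customer_id")
    .agg(total_spend=("amount", "sum"), order_count=("amount", "count"))
    .reset_index()
    .merge(customers, on="customer_id", how="left")
)
print(features)
```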

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Kerala

On-site

As a Data Analyst at our company based in Trivandrum, you will be responsible for analyzing large volumes of NGINX server log data to identify user behavior patterns, anomalies, and security events. Your role will involve interpreting fields such as IP addresses, geolocation data, user agents, request paths, status codes, and request times to derive meaningful insights.

Collaboration with AI engineers is crucial: together you will propose relevant features based on log behavior and traffic patterns. Your responsibilities will include validating engineered features, conducting exploratory data analysis, and ensuring that feature logic is high quality and aligned with real-world HTTP behavior and use cases.

Furthermore, you will develop data visualizations to represent time-series trends, geo-distributions, and traffic behavior. You will collaborate with the frontend/dashboard team to define and test visual requirements and anomaly indicators for real-time dashboards.

In addition to your analytical tasks, you will identify and address gaps, inconsistencies, and errors in raw logs to ensure data quality. You will also create documentation that explains observed behavioral patterns, feature assumptions, and traffic insights for knowledge sharing within the ML and security team.

The minimum qualifications for this position include a Bachelor's degree in Computer Science, Information Systems, Data Analytics, Cybersecurity, or a related field, along with at least 2 years of experience in data analysis or analytics roles. Proficiency in SQL, Elasticsearch queries, and Python for data analysis, plus experience working with web server logs or structured event data, are required. Strong analytical thinking skills are essential to break down complex log behavior into patterns and outliers.

It would be beneficial if you have familiarity with web security concepts, experience with log analytics platforms, an understanding of feature engineering concepts in ML pipelines, or experience working on anomaly detection or security analytics systems.

This is a full-time position with benefits such as health insurance and Provident Fund, with a day shift schedule from Monday to Friday. If you possess the necessary qualifications and experience, we look forward to receiving your application.
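As an illustration of the log-analysis work described, a minimal pandas sketch that parses NGINX combined-format lines into the fields the posting names (IP, path, status, user agent); the sample line is synthetic:

```python
# Minimal sketch: parse NGINX combined-format log lines into a DataFrame.
import re
import pandas as pd

# Combined format: ip ident user [time] "method path proto" status size "referrer" "ua"
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<size>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

lines = [
    '203.0.113.7 - - [10/Jun/2025:13:55:36 +0000] '
    '"GET /api/items HTTP/1.1" 200 512 "-" "curl/8.0"',
]

df = pd.DataFrame([m.groupdict() for m in map(LOG_PATTERN.match, lines) if m])
df["status"] = df["status"].astype(int)

print(df)
# Example analysis: count error responses per client IP
print(df[df["status"] >= 400].groupby("ip").size())
```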

Posted 2 weeks ago

Apply

2.0 years

5 - 10 Lacs

Bengaluru

On-site

Company Description
Louis Dreyfus Company is a leading merchant and processor of agricultural goods. Our activities span the entire value chain from farm to fork, across a broad range of business lines; we leverage our global reach and extensive asset network to serve our customers and consumers around the world. Structured as a matrix organization of six geographical regions and ten platforms, Louis Dreyfus Company is active in over 100 countries and employs approximately 18,000 people globally.

Job Description
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into trade flows, weather, and crop condition studies.
- Work with stakeholders including agronomists, data scientists, analysts, traders, and design teams to assist with data-related technical issues and support their data infrastructure needs.

Qualifications to Perform the Job
Other skills & competencies / specific knowledge / abilities:
- Strong coding skills in SQL and C++, with a solid grasp of OOP concepts; an engineering graduate in Computer Science or equivalent preferred.
- Advanced working SQL and NoSQL knowledge, experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Project management skills; hands-on experience with data visualization tools like Power BI, Tableau, Matplotlib, Plotly, Dash, or any front-end application.

Minimum years of experience & CTC: 2-4 years of experience with the right skill set.
Reporting Line: Reports to the GM, Research.
Education: Graduate in Computer Engineering or equivalent.
Language skills: Oral and written fluency in English is a must.

Additional Information
Diversity & Inclusion
LDC is driven by a set of shared values and high ethical standards, with diversity and inclusion being part of our DNA. LDC is an equal opportunity employer committed to providing a working environment that embraces and values diversity, equity and inclusion. LDC encourages diversity, supports local communities and environmental initiatives. We encourage people of all backgrounds to apply.

Sustainability
Sustainable value is at the heart of our purpose as a company. We are passionate about creating fair and sustainable value, both for our business and for other value chain stakeholders: our people, our business partners, the communities we touch and the environment around us.

What We Offer
We provide a dynamic and stimulating international environment, which will stretch and develop your abilities and channel your skills and expertise, with outstanding career development opportunities in one of the largest and most solid private companies in the world.

We offer:
- A workplace culture that embraces diversity and inclusivity
- Opportunities for professional growth and development
- Employee recognition program
- Employee wellness programs - confidential access to certified counselors for employees and eligible family members, along with monthly wellness awareness sessions
- Certified Great Place to Work

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Job Title: Cognite Data Fusion Engineer / Consultant
Industry: Oil & Gas / Energy / Manufacturing / Industrial Digital Transformation

Key Responsibilities:
- Design, implement, and optimize data pipelines in Cognite Data Fusion (CDF) using the Python SDK or CDF APIs
- Build and maintain data models (Asset Hierarchies, Time Series, Events, Files, Relationships) in CDF
- Ingest and contextualize data from OT systems (e.g., PI System, SCADA/DCS), IT systems (SAP PM, IBM Maximo), and engineering data
- Develop and orchestrate transformations using CDF Transformations (SQL / PySpark)
- Collaborate with SMEs and data scientists to develop use cases such as predictive maintenance, asset performance monitoring, and digital twins
- Implement access control, data lineage, and quality checks aligned with governance requirements
- Create dashboards, apps, or integrations using CDF’s APIs, Power BI, Grafana, or other front-end tools
- Work with Cognite’s capabilities such as Cognite Functions, Data Sets, CDF Charts, and Industrial Canvas

Must Have:
- 4+ years of experience in data engineering or industrial data platforms
- Proven experience working with Cognite Data Fusion – SDK, APIs, or Fusion Workbench
- Strong skills in Python, SQL, and cloud data tools (Azure preferred)
- Understanding of industrial asset structures, time series data, maintenance logs, and equipment metadata
- Experience with data integration tools and protocols (OPC UA, Modbus, REST, MQTT, PI AF)
- Familiarity with industry verticals like Oil & Gas, Chemicals, Power Generation, or Manufacturing
- Excellent problem-solving, communication, and client engagement skills

Good to Have:
- Experience with data visualization tools: Power BI, Grafana, Plotly, etc.
- Knowledge of cloud platforms (Azure, AWS, GCP) and infrastructure as code (Terraform, ARM)
- Prior experience with Cognite’s packaged solutions or apps (Asset Data Insight, Reliability, Integrity, etc.)
- Cognite Certifications (CDF Foundation / Developer / Architect) a strong plus

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
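Since CDF Transformations accept SQL/PySpark, here is a generic PySpark sketch of the kind of contextualization step described; column names, paths, and the join logic are hypothetical, and this deliberately does not use Cognite's SDK:

```python
# Minimal PySpark sketch of a contextualization-style transformation like those
# the posting attributes to CDF Transformations. Table and column names are
# hypothetical; this is not Cognite SDK code.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cdf-style-transform").getOrCreate()

sensors = spark.read.parquet("/data/raw/sensor_readings")  # OT time series
assets = spark.read.parquet("/data/raw/asset_hierarchy")   # IT asset master

# Join readings to assets and compute hourly aggregates per asset
hourly = (
    sensors.join(assets, on="asset_id", how="inner")
    .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
    .groupBy("asset_id", "asset_name", "hour")
    .agg(F.avg("value").alias("avg_value"), F.max("value").alias("max_value"))
)

hourly.write.mode("overwrite").parquet("/data/curated/hourly_asset_metrics")
```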

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Cognite Data Fusion Engineer / Consultant
Industry: Oil & Gas / Energy / Manufacturing / Industrial Digital Transformation

Key Responsibilities:
- Design, implement, and optimize data pipelines in Cognite Data Fusion (CDF) using the Python SDK or CDF APIs
- Build and maintain data models (Asset Hierarchies, Time Series, Events, Files, Relationships) in CDF
- Ingest and contextualize data from OT systems (e.g., PI System, SCADA/DCS), IT systems (SAP PM, IBM Maximo), and engineering data
- Develop and orchestrate transformations using CDF Transformations (SQL / PySpark)
- Collaborate with SMEs and data scientists to develop use cases such as predictive maintenance, asset performance monitoring, and digital twins
- Implement access control, data lineage, and quality checks aligned with governance requirements
- Create dashboards, apps, or integrations using CDF’s APIs, Power BI, Grafana, or other front-end tools
- Work with Cognite’s capabilities such as Cognite Functions, Data Sets, CDF Charts, and Industrial Canvas

Must Have:
- 4+ years of experience in data engineering or industrial data platforms
- Proven experience working with Cognite Data Fusion – SDK, APIs, or Fusion Workbench
- Strong skills in Python, SQL, and cloud data tools (Azure preferred)
- Understanding of industrial asset structures, time series data, maintenance logs, and equipment metadata
- Experience with data integration tools and protocols (OPC UA, Modbus, REST, MQTT, PI AF)
- Familiarity with industry verticals like Oil & Gas, Chemicals, Power Generation, or Manufacturing
- Excellent problem-solving, communication, and client engagement skills

Good to Have:
- Experience with data visualization tools: Power BI, Grafana, Plotly, etc.
- Knowledge of cloud platforms (Azure, AWS, GCP) and infrastructure as code (Terraform, ARM)
- Prior experience with Cognite’s packaged solutions or apps (Asset Data Insight, Reliability, Integrity, etc.)
- Cognite Certifications (CDF Foundation / Developer / Architect) a strong plus

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

Cognite Data Fusion Engineer / Consultant
Industry: Oil & Gas / Energy / Manufacturing / Industrial Digital Transformation

Key Responsibilities:
- Design, implement, and optimize data pipelines in Cognite Data Fusion (CDF) using the Python SDK or CDF APIs
- Build and maintain data models (Asset Hierarchies, Time Series, Events, Files, Relationships) in CDF
- Ingest and contextualize data from OT systems (e.g., PI System, SCADA/DCS), IT systems (SAP PM, IBM Maximo), and engineering data
- Develop and orchestrate transformations using CDF Transformations (SQL / PySpark)
- Collaborate with SMEs and data scientists to develop use cases such as predictive maintenance, asset performance monitoring, and digital twins
- Implement access control, data lineage, and quality checks aligned with governance requirements
- Create dashboards, apps, or integrations using CDF’s APIs, Power BI, Grafana, or other front-end tools
- Work with Cognite’s capabilities such as Cognite Functions, Data Sets, CDF Charts, and Industrial Canvas

Must Have:
- 4+ years of experience in data engineering or industrial data platforms
- Proven experience working with Cognite Data Fusion – SDK, APIs, or Fusion Workbench
- Strong skills in Python, SQL, and cloud data tools (Azure preferred)
- Understanding of industrial asset structures, time series data, maintenance logs, and equipment metadata
- Experience with data integration tools and protocols (OPC UA, Modbus, REST, MQTT, PI AF)
- Familiarity with industry verticals like Oil & Gas, Chemicals, Power Generation, or Manufacturing
- Excellent problem-solving, communication, and client engagement skills

Good to Have:
- Experience with data visualization tools: Power BI, Grafana, Plotly, etc.
- Knowledge of cloud platforms (Azure, AWS, GCP) and infrastructure as code (Terraform, ARM)
- Prior experience with Cognite’s packaged solutions or apps (Asset Data Insight, Reliability, Integrity, etc.)
- Cognite Certifications (CDF Foundation / Developer / Architect) a strong plus

EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)
Location: Remote
Location Preference: We are specifically looking to hire talented individuals from Tier 2 and Tier 3 cities for this opportunity.
Job Type: Full-time

About the Role
We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master’s degree in a quantitative field, 4–6 years of hands-on experience, and demonstrated expertise in building models from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.

Responsibilities
- Analyze data, identify patterns, and perform detailed exploratory data analysis (EDA).
- Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks.
- Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets.
- Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval.
- Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems.
- Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders.
- Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.

Required Qualifications
- Master’s degree in Computer Science, Statistics, Applied Mathematics, or a related field.
- 4–6 years of industry experience in data science or machine learning roles.
- Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
- Hands-on knowledge of:
  - Programming languages: Python (primary), SQL, R (optional)
  - Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers
  - Distributed computing: PySpark, Dask
  - Big data and cloud platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML
  - Data engineering tools: Apache Spark, Delta Lake, Airflow
  - ML workflow & visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib
  - Version control and collaboration: Git, GitHub, Jupyter, VS Code

Preferred Qualifications
- Master’s or Ph.D. in a quantitative or technical field.
- Experience with deploying machine learning pipelines in production using CI/CD tools.
- Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads.
- Understanding of MLOps and model lifecycle management best practices.
- Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.

What We Offer
- Competitive salary and performance bonuses
- Flexible working hours and remote options
- Opportunities for continued learning and research
- Collaborative, high-impact team environment
- Access to cutting-edge technology and compute resources

To apply, send your resume to jobs@megovation.io to be part of a team pushing the boundaries of data-driven innovation.
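As an illustration of the transformer-based text tasks this posting mentions (classification and summarization), a minimal Hugging Face sketch; the input text is synthetic and the models fall back to library defaults, which production work would pin explicitly:

```python
# Minimal sketch of transformer-based classification and summarization.
# Uses the transformers pipeline defaults purely for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The attribution model cut reporting latency in half."))

summarizer = pipeline("summarization")
text = ("Large language models and classical predictive models can complement "
        "each other: regressions explain drivers, while transformers handle "
        "unstructured text at scale.")
print(summarizer(text, max_length=30, min_length=10, do_sample=False))
```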

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Python Developer, you will be responsible for leveraging your experience to build standard supervised (GLMs, ensemble techniques) and unsupervised (clustering) models using industry libraries such as pandas, scikit-learn, and Keras. Your expertise in big data technologies like Spark and Dask, as well as databases including SQL and NoSQL, will be crucial in this role.

Your role will also involve significant experience in Python, encompassing tasks such as writing unit tests, developing packages, and crafting reusable and maintainable code. An essential aspect of your responsibilities will be the ability to comprehend and articulate modeling techniques, and to visualize analytical results using tools like matplotlib, seaborn, Plotly, D3, and Tableau. Experience with continuous integration/delivery tools like Jenkins and with Spark ML pipelines will be advantageous. We are looking for a self-starter who not only excels individually but also collaborates effectively with colleagues, bringing innovative ideas to enhance our collective mindset.

For the ideal candidate, an advanced degree with a strong foundation in the mathematical principles underpinning machine learning, such as linear algebra and multivariate calculus, would be a significant advantage. Additionally, expertise in specialized areas like reinforcement learning, NLP, Bayesian techniques, or generative models would be highly valued.

Your ability to present ideas and analytical findings in a compelling manner that influences stakeholders will be a key aspect of this role. Demonstrated experience in developing analytical solutions within an industry context, and a genuine passion for using data science ethically to enhance customer-centricity in financial services, will set you apart.

If you are ready to contribute to a dynamic team by applying your Python development skills and data science expertise to drive impactful solutions in the financial services sector, we encourage you to explore this opportunity further.
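For a flavor of the unsupervised-modeling-plus-visualization combination this role names, a minimal sketch pairing scikit-learn clustering with a Plotly chart; the data is synthetic:

```python
# Minimal sketch: an unsupervised clustering model visualized with Plotly,
# matching the sklearn + plotly stack named in the posting. Data is synthetic.
import pandas as pd
import plotly.express as px
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

df = pd.DataFrame(X, columns=["x", "y"]).assign(cluster=labels.astype(str))
fig = px.scatter(df, x="x", y="y", color="cluster", title="KMeans clusters")
fig.show()
```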

Posted 2 weeks ago

Apply

6.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

Calfus is a Silicon Valley-headquartered software engineering and platforms company that seeks to inspire its team to rise faster, higher, stronger, and work together to build software at speed and scale. The company's core focus lies in creating engineered digital solutions that bring about a tangible and positive impact on business outcomes, while standing for #Equity and #Diversity in its ecosystem and society at large.

As a Data Engineer specializing in BI Analytics & DWH at Calfus, you will play a pivotal role in designing and implementing comprehensive business intelligence solutions that empower the organization to make data-driven decisions. Leveraging expertise in Power BI, Tableau, and ETL processes, you will create scalable architectures and interactive visualizations. This position requires a strategic thinker with strong technical skills and the ability to collaborate effectively with stakeholders at all levels.

Key Responsibilities:
- BI Architecture & DWH Solution Design: Develop and design scalable BI analytics and DWH solutions that meet business requirements, leveraging tools such as Power BI and Tableau.
- Data Integration: Oversee ETL processes using SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Create and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Utilize SQL to write complex queries and stored procedures, and manage data transformations using joins and cursors.
- Visualization Development: Lead the design of interactive dashboards and reports in Power BI and Tableau, adhering to best practices in data visualization.
- Collaboration: Work closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: Analyse and optimize BI solutions for performance, scalability, and reliability.
- Data Governance: Implement best practices for data quality and governance to ensure accurate reporting and compliance.
- Team Leadership: Mentor and guide junior BI developers and analysts, fostering a culture of continuous learning and improvement.
- Azure Databricks: Leverage Azure Databricks for data processing and analytics, ensuring seamless integration with existing BI solutions.

Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-12 years of experience in BI architecture and development, with a strong focus on Power BI and Tableau.
- Proven experience with ETL processes and tools, especially SSIS. Strong proficiency in SQL Server, including advanced query writing and database management.
- Exploratory data analysis with Python.
- Familiarity with the CRISP-DM model.
- Ability to work with different data models.
- Familiarity with databases like Snowflake, Postgres, Redshift & MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, Plotly, and/or Dash.
- Strong programming foundation in Python for data manipulation and analysis using Pandas, NumPy, and PySpark; data serialization and formats like JSON, CSV, Parquet & Pickle; database interaction; data pipeline and ETL tools; cloud services and tools; and code quality and management using version control.
- Ability to interact with REST APIs and perform web scraping tasks is a plus.

Calfus Inc. is an Equal Opportunity Employer.
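A minimal pandas sketch of the serialization formats the qualifications list (JSON, CSV, Parquet, Pickle); the file names and data are placeholders:

```python
# Minimal sketch of the serialization formats listed in the posting.
import pandas as pd

df = pd.DataFrame({"metric": ["revenue", "orders"], "value": [1250.0, 42]})

df.to_csv("metrics.csv", index=False)
df.to_json("metrics.json", orient="records")
df.to_parquet("metrics.parquet")  # requires pyarrow or fastparquet
df.to_pickle("metrics.pkl")

# Round-trip check on the columnar format
print(pd.read_parquet("metrics.parquet"))
```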

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Delhi, India

On-site

Role Expectations

Data Collection and Cleaning:
- Collect, organize, and clean large datasets from various sources (internal databases, external APIs, spreadsheets, etc.).
- Ensure data accuracy, completeness, and consistency by cleaning and transforming raw data into usable formats.

Data Analysis:
- Perform exploratory data analysis (EDA) to identify trends, patterns, and anomalies.
- Conduct statistical analysis to support decision-making and uncover insights.
- Use analytical methods to identify opportunities for process improvements, cost reductions, and efficiency enhancements.

Reporting and Visualization:
- Create and maintain clear, actionable, and accurate reports and dashboards for both technical and non-technical stakeholders.
- Design data visualizations (charts, graphs, and tables) that communicate findings effectively to decision-makers.
- Work with Power BI, Tableau, and Python libraries for data visualization such as matplotlib, seaborn, plotly, pyplot, and pandas.
- Generate descriptive, predictive, and prescriptive insights with Gen AI using MS Copilot in Power BI.
- Apply prompt engineering and RAG architectures.
- Prepare reports for upper management and other departments, presenting key findings and recommendations.

Collaboration:
- Work closely with cross-functional teams (marketing, finance, operations, etc.) to understand their data needs and provide actionable insights.
- Collaborate with IT and database administrators to ensure data is accessible and well-structured.
- Provide support and guidance to other teams regarding data-related questions or issues.

Data Integrity and Security:
- Ensure compliance with data privacy and security policies and practices.
- Maintain data integrity and assist with implementing best practices for data storage and access.

Continuous Improvement:
- Stay current with emerging data analysis techniques, tools, and industry trends.
- Recommend improvements to data collection, processing, and analysis procedures to enhance operational efficiency.

Qualifications

Education:
- Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field. A Master's degree or relevant certifications (e.g., in data analysis or business intelligence) is a plus.

Experience:
- Proven experience as a Data Analyst or in a similar analytical role (typically 7+ years).
- Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
- Strong knowledge of SQL and experience with relational databases.
- Familiarity with data manipulation and analysis tools (e.g., Python, R, Excel, SPSS).
- Experience with Power BI, Tableau, and Python visualization libraries such as matplotlib, seaborn, plotly, pyplot, and pandas.
- Experience with big data technologies (e.g., Hadoop, Spark) is a plus.

Technical Skills:
- Proficiency in SQL and data query languages.
- Knowledge of statistical analysis and methodologies.
- Experience with data visualization and reporting tools.
- Knowledge of data cleaning and transformation techniques.
- Familiarity with machine learning and AI concepts is an advantage (for more advanced roles).

Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent attention to detail and ability to identify trends in complex data sets.
- Good communication skills to present data insights clearly to both technical and non-technical audiences.
- Ability to work independently and as part of a team.
- Strong time management and organizational skills, with the ability to prioritize tasks effectively.

Posted 2 weeks ago

Apply

8.0 years

7 - 9 Lacs

Chennai

Remote

Title: Senior Data Scientist
Years of Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai/Kovilpatti location for the initial three-month project training and execution period. After the three months, the candidate will be offered remote opportunities.

The Senior Data Scientist will lead the development and implementation of advanced analytics and AI/ML models to solve complex business problems. This role requires deep statistical expertise, hands-on model building experience, and the ability to translate raw data into strategic insights. The candidate will collaborate with business stakeholders, data engineers, and AI engineers to deploy production-grade models that drive innovation and value.

Key Responsibilities
- Lead the end-to-end model lifecycle: data exploration, feature engineering, model training, validation, deployment, and monitoring
- Develop predictive models, recommendation systems, anomaly detection, NLP models, and generative AI applications
- Conduct statistical analysis and hypothesis testing for business experimentation
- Optimize model performance using hyperparameter tuning, ensemble methods, and explainable AI (XAI)
- Collaborate with data engineering teams to improve data pipelines and quality
- Document methodologies, build reusable ML components, and publish technical artifacts
- Mentor junior data scientists and contribute to CoE-wide model governance

Technical Skills
- ML frameworks: Scikit-learn, TensorFlow, PyTorch, XGBoost
- Statistical tools: Python (NumPy, Pandas, SciPy), R, SAS
- NLP & LLMs: Hugging Face Transformers, GPT APIs, BERT, LangChain
- Model deployment: MLflow, Docker, Azure ML, AWS SageMaker
- Data visualization: Power BI, Tableau, Plotly, Seaborn
- SQL and NoSQL (Cosmos DB, MongoDB)
- Git, CI/CD tools, and model monitoring platforms

Qualification
- Master’s in Data Science, Statistics, Mathematics, or Computer Science
- Microsoft Certified: Azure Data Scientist Associate or equivalent
- Proven success in delivering production-ready ML models with measurable business impact
- Publications or patents in AI/ML will be considered a strong advantage

Job Types: Full-time, Permanent
Work Location: Hybrid remote in Chennai, Tamil Nadu
Expected Start Date: 12/07/2025
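As a glimpse of the MLflow-based tracking listed under model deployment, a minimal sketch; the experiment name, parameters, and metric value are placeholders:

```python
# Minimal sketch of MLflow experiment tracking as named in the posting.
# Experiment name, params, and the metric value are placeholders.
import mlflow

mlflow.set_experiment("churn-model")

with mlflow.start_run():
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("max_depth", 8)
    mlflow.log_metric("auc", 0.91)  # would come from real validation
    # mlflow.sklearn.log_model(model, "model")  # once a fitted model exists
```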

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Delhi, India

On-site

Job Type: Full Time
Experience: 2 years
Type: Virtual Hiring
Last Date: 20-July-2025
Posted on: 01-July-2025
Education: BE/B.Tech

Digital India Corporation is currently inviting applications for the position of Developer - Data Analytics, purely on a contract/consolidated basis, for the Poshan Tracker project.

Location: Delhi, Noida, Others
No. of Positions: 2
Qualifications: B.E/B.Tech/MCA or any equivalent degree
Experience: 2+ years

Required Skillset
- Hands-on experience with data analytics tools such as Power BI or Tableau, and Python libraries like Plotly and Folium, to create dynamic data visualizations.
- A track record of strong collaboration skills when building data solutions and big data solutions.
- Proven ability to present progress and findings to stakeholders, and to support stakeholders, both central and state decision-makers, in taking appropriate decisions.
- Creating automated anomaly detection systems and constantly tracking their performance against program KPIs.
- Knowledge of Python, R, QGIS, advanced Excel, etc.
- Good written and oral communication skills.
- Good presentation and analytical ability.
- Experience of working for a government setup/project is desirable.
- Knowledge of WHO Child Growth Standards and anthropometric data quality checks, as Poshan Tracker uses these standards to determine the nutritional status of children.
- Experience of analyzing big data and preparing region-wise visualizations for effective decision-making.
- Strong understanding of public policy to align data analytics with evidence-based decision-making.
- Knowledge of AI/ML, and the ability to make data AI-ready for current usage and trend predictions.
- Design and implement optimized data pipelines to clean, standardize, and transform beneficiary-level data from Poshan Tracker, ensuring it is structured and ready for AI/ML applications.
- Collaborate closely with program teams, developers, and decision-makers to translate analytical needs into technical solutions, including the integration of important indicators into the Poshan Tracker dashboard.
- Lead the development of anomaly detection systems and dynamic reporting modules to flag irregularities in service delivery and identify high-risk regions for malnutrition.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Applying strong programming and problem-solving skills to develop scalable solutions.

The last date for submission of applications shall be 20th July 2025.

Note: This is an aggregated job, shared to bring relevant opportunities to job seekers. Hireclap is not responsible for or authorized in this recruitment process.
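As an illustration of the Folium-based geo-visualizations the skillset asks for, a minimal sketch; the coordinates and indicator value are placeholders, not Poshan Tracker data:

```python
# Minimal sketch of a Folium geo-visualization of the kind the posting asks for.
# Coordinates and the indicator value are placeholders.
import folium

m = folium.Map(location=[28.6139, 77.2090], zoom_start=5)  # centered on Delhi

folium.CircleMarker(
    location=[28.6139, 77.2090],
    radius=10,
    popup="District X: indicator 42%",
    fill=True,
).add_to(m)

m.save("region_map.html")  # open in a browser to view the interactive map
```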

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Associate Data Engineer – R&D Omics

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will design, build, and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable and high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates strong technical skills, has experience with big data technologies, and understands data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain documentation of processes, systems, and solutions

What We Expect of You
We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications:
- Bachelor’s degree with 2 to 6 years of Computer Science, IT, or related field experience

Preferred Qualifications:
- 1+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g., Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g., Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro, Lucidchart, or similar tools for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Strong learning agility; ability to pick up new technologies used to support early drug discovery data analysis needs
- Collaborative, with good communication skills
- High degree of initiative and self-motivation
- Ability to handle multiple priorities successfully
- Team-oriented, with a focus on achieving team goals

What You Can Expect of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
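As a small illustration of the pytest-based test automation listed under must-have skills, a minimal sketch; the transformation under test is a hypothetical stand-in chosen to fit the omics context:

```python
# Minimal sketch of pytest-style test automation for a data transformation.
# The normalize_gene_ids function is a hypothetical stand-in.
import pandas as pd

def normalize_gene_ids(df: pd.DataFrame) -> pd.DataFrame:
    """Strip whitespace and upper-case a gene_id column."""
    out = df.copy()
    out["gene_id"] = out["gene_id"].str.strip().str.upper()
    return out

def test_normalize_gene_ids():
    raw = pd.DataFrame({"gene_id": [" brca1", "tp53 "]})
    result = normalize_gene_ids(raw)
    assert result["gene_id"].tolist() == ["BRCA1", "TP53"]
```

Run with `pytest test_transform.py` to execute the test.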

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Summary
We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products.

Key Responsibilities
- Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment.
- Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs.
- Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders.
- Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes.
- Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models.
- Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors.

Required Qualifications & Skills:
- Master's or PhD in Computer Science, Statistics, Mathematics, or a related field.
- 5+ years of practical experience in data science, including deploying models to production.
- Expertise in Python and SQL; solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
- Competence in data visualization tools like Tableau, Power BI, and matplotlib.
- Comprehensive knowledge of statistics, machine learning principles, and experimental design.
- Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control.
- Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD).
- Familiarity with NLP, time series forecasting, or recommendation systems is a plus.
- Knowledge of big data technologies (Spark, Hive, Presto) is desirable.

Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)
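As an illustration of the A/B-testing methodology this role spearheads, a minimal two-proportion z-test sketch; the conversion counts are synthetic placeholders:

```python
# Minimal sketch of an A/B test comparison using a two-proportion z-test.
# Conversion counts are synthetic placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]        # successes in control, variant
observations = [10000, 10000]   # users exposed per arm

stat, p_value = proportions_ztest(conversions, observations)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# Reject the null of equal conversion rates at alpha = 0.05 if p_value < 0.05
```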

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Quadyster is an Information Technology Solutions provider focused on digital transformation. We specialize in web/mobile solutions, Artificial Intelligence, Machine Learning, Data Analytics, and Cloud Computing. We offer training sessions on various technology areas, including full-stack development, DevOps, cloud computing, and agile methodologies. Our primary customers are Fortune 100 commercial clients, including John Deere, the U.S. Department of Defense, and federal, state, and local governments. Quadyster has won over 275 high-profile government contracts.

Quadyster is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, age, veteran status, or disability status.

Please send your resume in Microsoft Word format to recruitment@quadyster.com

Position: Data Engineer
Location: Remote in India
Must have 5 years of software development experience
Expected Salary: ₹60,000–₹1,00,000 per month

Job Summary:
Working knowledge of data management and transformation processes
Strong experience in Azure SQL, MS SQL, SQL queries, procedures, and database modeling
Able to maintain, monitor, and support data processing engines like Apache Spark
Familiar with Databricks, Spark Structured Streaming, and Azure Functions (a streaming sketch follows this listing)
Able to build data visualizations using graphing libraries such as Plotly, ggplot, and others
Experience with Machine Learning and an understanding of statistical models
Experience with Azure Data Factory or related tools
Experience in Microsoft Fabric and Azure Data Lake is a must

Good to have:
Hands-on experience with Python
Experience with R
Experience with Power BI or related tools
Ability to build high-performing, enterprise-level CI/CD pipelines using Azure DevOps

Mandatory Requirements:
Experience building CI/CD pipelines with Azure DevOps
Familiarity with Agile project management and sprint planning
Ability to work independently as well as collaboratively on cross-functional teams
Strong interpersonal, verbal/written communication, presentation, analytical, and problem-solving skills
Experience working with large-scale data and comfort working in uncertain environments

Education: Engineering degree in Computer Science; M.Tech is a plus.
Experience:
3+ years of experience in Data Engineering
1+ years of experience in Microsoft Fabric and Azure Data Lake
1+ years of experience in Machine Learning
5+ years of experience working in an Agile environment
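The Spark Structured Streaming requirement above is the most code-shaped item in this listing. Below is a minimal PySpark sketch; it uses the built-in "rate" source so it runs standalone, whereas a real Databricks pipeline would read from Event Hubs, Kafka, or Delta tables (those sources are an assumption, not stated in the listing).

```python
# Minimal sketch: Spark Structured Streaming aggregation (PySpark).
# The "rate" source generates synthetic (timestamp, value) rows so the
# example is self-contained; swap in a real source for production use.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, count

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = (spark.readStream
          .format("rate")               # synthetic stream for the demo
          .option("rowsPerSecond", 10)
          .load())

# Count events per 30-second window, updated as micro-batches arrive.
counts = (events
          .groupBy(window(events.timestamp, "30 seconds"))
          .agg(count("*").alias("events")))

query = (counts.writeStream
         .outputMode("update")          # emit only windows changed per batch
         .format("console")
         .start())

query.awaitTermination()
```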

Posted 3 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Chepauk, Chennai, Tamil Nadu

On-site

The Visualization Expert – Impact-Based Forecasting creates intuitive dashboards and visual tools that make data accessible to policymakers and stakeholders, and builds and trains machine learning models for data-driven applications in planning, early warning, and operational systems. The role designs and implements ML-based solutions to support intelligent decision-making across sectors (e.g., climate services, disaster management). It involves drawing from and contributing to multi-disciplinary datasets and working closely with a multi-disciplinary team within RIMES to generate IBF DSS, develop contingency plans, automate monitoring systems, contribute to Post-Disaster Needs Assessments (PDNA), and apply ML techniques for risk reduction. The position requires a strong understanding of meteorological, hydrological, vulnerability, and exposure patterns, and the ability to translate data into actionable insights for disaster preparedness and resilience planning.

Minimum Qualifications

Education:
● Bachelor’s or Master’s degree in Data Science, Design, Information Systems, or a related field.

Knowledge, Skills, and Abilities:
● Design and implement clear, accurate, and engaging visualizations that support IBF platforms, dashboards, and reporting systems.
● Understanding of data visualization best practices: deep knowledge of effective visualization types, common pitfalls, and ethical considerations (e.g., avoiding misleading visuals, ensuring accuracy, and respecting privacy). Ability to craft compelling narratives with data visualizations.
● Collaborate with cross-functional teams (data scientists, domain experts, UI/UX designers, and developers) to understand visualization needs and user goals as they apply to dashboards, reports, and interactive visualizations.
● Information design: ability to structure and organize complex information so it is easily digestible and understandable, including an understanding of cognitive load and how people process visual information.
● Programming proficiency: JavaScript: D3.js, Chart.js, Highcharts, Leaflet/Mapbox (for geospatial visualizations).
● Python: Plotly, Dash, Bokeh, Seaborn, or Matplotlib (a Dash sketch follows this listing).
● Design tools: working knowledge of tools like Figma, Adobe XD, or Sketch is a plus.
● Structure and organize complex data and indicators into meaningful information hierarchies that support storytelling and quick interpretation.
● Integrate visualizations into web applications, dashboards, or interactive platforms using frameworks like React, Vue, or others as needed.
● Prototype and iterate visual elements based on user feedback and usability testing.
● Ensure visualizations respect accessibility, privacy, and data governance considerations.
● Familiarity with early warning systems, disaster risk frameworks, and sector-specific IBF requirements is a strong asset.
● Excellent communication skills, especially in multidisciplinary and multicultural team settings.
● Proficiency in technical documentation and user training.
● Experience in multi-stakeholder projects and facilitating capacity-building programs.

Experience:
● Minimum of 2 years of experience in data visualization.
● Minimum of 2 years of experience in data engineering, analytics, or IT systems for disaster management, meteorology, or climate services.
● Experience in multi-stakeholder projects and facilitating capacity-building programs.

Personal Qualities:
● Excellent interpersonal skills; team-oriented work style; pleasant personality.
● Strong desire to learn and undertake new challenges.
● Creative problem-solver; willing to work hard.
● Analytical thinker with problem-solving skills.
● Strong attention to detail and ability to work under pressure.
● Self-motivated, adaptable, and capable of working in multicultural and multidisciplinary environments.
● Strong communication skills and the ability to coordinate with stakeholders.

Major Duties and Responsibilities

Impact-based forecasting
● Collaborate with other members of the IT team, meteorologists, hydrologists, GIS specialists, and disaster risk management experts within RIMES to ensure the development of the IBF DSS.
● Design visuals using Tableau, Power BI, or D3.js.
● Convert complex data into interactive graphics.
● Develop sector-specific dashboards (climate risk management/disaster risk management).
● Ensure visual accessibility and storytelling.
● Assist the RIMES team in applying AI/ML models to forecast hazards and project likely impacts based on exposure and vulnerability indices.
● Work with forecasters and domain experts to automate the generation of impact-based products.
● Ensure data security, backup, and compliance with data governance and interoperability standards.
● Train national counterparts on the use and management of the AI/ML tools, including analytics dashboards.
● Collaborate with GIS experts, hydromet agencies, and emergency response teams for integrated service delivery.
● Produce technical documentation on data architecture, models, and systems.

Capacity Building and Stakeholder Engagement
● Facilitate training programs for team members and stakeholders, focusing on RIMES policies, regulations, and the use of forecasting tools.
● Develop and implement a self-training plan to enhance personal expertise, obtaining a trainer certificate as required.
● Prepare and implement training programs to enhance team capacity and submit training outcome reports.

Reporting
● Prepare technical reports, progress updates, and outreach materials for stakeholders.
● Maintain comprehensive project documentation, including strategies, milestones, and outcomes.
● Produce capacity-building workshop materials and training reports.

Other Responsibilities
● Utilize AI skills to assist in system implementation plans and decision support system (DSS) development.
● Assist in 24/7 operational readiness for client early warning systems such as SOCs, with backup support from RIMES Headquarters.
● Undertake additional tasks as assigned by the immediate supervisor or HR manager based on recommendations from RIMES technical team members and organizational needs.
● The above responsibilities are illustrative and not exhaustive; undertake any other relevant tasks that may be needed from time to time.

Contract Duration
The contract will initially be for one year and may be extended based on the satisfactory completion of a 180-day probationary period and subsequent annual performance reviews.

RIMES promotes diversity and inclusion in the workplace. Well-qualified applicants, particularly women, are encouraged to apply.

Job Type: Full-time
Pay: Up to ₹100,000.00 per month
Schedule: Monday to Friday
Ability to commute/relocate: Chepauk, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s):
Do you have any experience or interest in working with international or non-profit organizations? Please explain.
What are your salary expectations per month?

Education: Bachelor's (Preferred)
Experience:
JavaScript: 2 years (Preferred)
D3.js: 2 years (Preferred)
Data visualization: 2 years (Preferred)
Python: 2 years (Preferred)
Data science: 2 years (Preferred)
React: 2 years (Preferred)
Vue.js: 2 years (Preferred)
Location: Chepauk, Chennai, Tamil Nadu (Preferred)
Work Location: In person
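The Plotly/Dash dashboard skills this listing asks for can be shown in a few lines. Below is a minimal Dash sketch; the rainfall values, district labels, and page title are invented placeholders, not RIMES data or requirements.

```python
# Minimal sketch: an interactive dashboard with Plotly Dash.
# All figures below are invented placeholders, not real data.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html

df = pd.DataFrame({
    "district": ["A", "B", "C", "A", "B", "C"],
    "day": ["Mon", "Mon", "Mon", "Tue", "Tue", "Tue"],
    "rainfall_mm": [12, 30, 7, 18, 25, 11],   # placeholder values
})

fig = px.bar(df, x="day", y="rainfall_mm", color="district",
             barmode="group", title="Rainfall by district (illustrative)")

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Impact-Based Forecasting Dashboard (sketch)"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)
```

A production IBF dashboard would add callbacks for hazard-layer selection and accessibility checks, but the layout-plus-figure pattern above is the core of any Dash app.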

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
- Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
- Use data visualization to understand data distributions and patterns and communicate findings to the project lead.
- Apply analytical thinking to solve challenges such as data imbalance, overfitting, and accuracy improvement to strengthen model performance.
- Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
- Work on complex datasets, applying various statistical and data mining techniques for data exploration.
- Write near production-ready code that is efficient and scalable over large datasets.
- Ensure good code documentation practices to facilitate collaboration and future development.
- Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
- Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
- Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
- Strong mathematical foundations and knowledge of statistical and machine learning algorithms.
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
- Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
- Experience in explainable AI, particularly SHAP (a sketch follows this listing).
- Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
- Strong problem-solving skills with an emphasis on product development.
- Ability to manage multiple projects at a time.
- Strong communication skills with the ability to convey complex concepts clearly.
- Bachelor’s or Master’s degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
- Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
- Experience working in Google Cloud Platform.
- Experience with entity resolution and data matching techniques.
- Knowledge of ML model deployment in any of the cloud services is appreciated.
- Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445
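This listing pairs gradient boosting with SHAP explainability, which is easy to illustrate. Below is a minimal sketch; the synthetic dataset and model settings are illustrative assumptions, not part of the role.

```python
# Minimal sketch: gradient-boosted classifier explained with SHAP.
# Dataset and hyperparameters are illustrative assumptions.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# TreeExplainer computes per-feature contributions for each prediction,
# giving both local explanations and a global importance view.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```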

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
- Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
- Use data visualization to understand data distributions and patterns and communicate findings to the project lead.
- Apply analytical thinking to solve challenges such as data imbalance, overfitting, and accuracy improvement to strengthen model performance.
- Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
- Work on complex datasets, applying various statistical and data mining techniques for data exploration.
- Write near production-ready code that is efficient and scalable over large datasets.
- Ensure good code documentation practices to facilitate collaboration and future development.
- Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
- Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
- Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
- Strong mathematical foundations and knowledge of statistical and machine learning algorithms.
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
- Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
- Experience in explainable AI, particularly SHAP.
- Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
- Strong problem-solving skills with an emphasis on product development.
- Ability to manage multiple projects at a time.
- Strong communication skills with the ability to convey complex concepts clearly.
- Bachelor’s or Master’s degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
- Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
- Experience working in Google Cloud Platform.
- Experience with entity resolution and data matching techniques.
- Knowledge of ML model deployment in any of the cloud services is appreciated.
- Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Summary

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
- Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
- Use data visualization to understand data distributions and patterns and communicate findings to the project lead.
- Apply analytical thinking to solve challenges such as data imbalance, overfitting, and accuracy improvement to strengthen model performance.
- Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
- Work on complex datasets, applying various statistical and data mining techniques for data exploration.
- Write near production-ready code that is efficient and scalable over large datasets.
- Ensure good code documentation practices to facilitate collaboration and future development.
- Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
- Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
- Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
- Strong mathematical foundations and knowledge of statistical and machine learning algorithms.
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
- Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
- Experience in explainable AI, particularly SHAP.
- Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
- Strong problem-solving skills with an emphasis on product development.
- Ability to manage multiple projects at a time.
- Strong communication skills with the ability to convey complex concepts clearly.
- Bachelor’s or Master’s degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
- Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
- Experience working in Google Cloud Platform.
- Experience with entity resolution and data matching techniques.
- Knowledge of ML model deployment in any of the cloud services is appreciated.
- Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary

Position Summary

Job Description: Data Science Consultant

At Deloitte, Data Science & Machine Learning is one of the key drivers of our successful business growth. We are fully committed to delivering best-in-class products and solution strategies. One such solution is People Prism, a family of solutions designed to solve challenges related to unique population identification, community activation and outreach, resource allocation, and policy and program efficacy.

We are looking for a Data Science Consultant with 2+ years of machine-learning modeling experience. The ideal candidate will have direct, hands-on experience working with a team of data scientists to wrangle and visualize data, perform statistical analyses, and build predictive machine-learning models using supervised, unsupervised, and semi-supervised techniques. The candidate should be a self-starter capable of preparing scripts for model automation.

Responsibilities:
- Develop models using techniques such as gradient boosting, logistic regression, multivariate analysis, k-means and DBSCAN clustering, PCA, and topic modeling powered by LLMs.
- Use data visualization to understand data distributions and patterns and communicate findings to the project lead.
- Apply analytical thinking to solve challenges such as data imbalance, overfitting, and accuracy improvement to strengthen model performance.
- Conduct entity resolution and data matching on large datasets to ensure data integrity and accuracy.
- Work on complex datasets, applying various statistical and data mining techniques for data exploration.
- Write near production-ready code that is efficient and scalable over large datasets.
- Ensure good code documentation practices to facilitate collaboration and future development.
- Work closely with product managers, engineers, and other stakeholders to translate business needs into data-driven solutions.
- Take initiative in exploring new data science techniques and tools to continuously improve our modeling capabilities.
- Contribute to the growth and success of the team by being a proactive and collaborative team member.

Skills & Qualifications:

Required:
- Strong mathematical foundations and knowledge of statistical and machine learning algorithms.
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, Plotly, etc.) and SQL for data analysis, data wrangling, and database management.
- Strong understanding of gradient boosting, logistic regression, and other classification algorithms.
- Experience in explainable AI, particularly SHAP.
- Deep knowledge of advanced analytics, data wrangling, and machine learning algorithms.
- Strong problem-solving skills with an emphasis on product development.
- Ability to manage multiple projects at a time.
- Strong communication skills with the ability to convey complex concepts clearly.
- Bachelor’s or Master’s degree in Engineering, Computer Science, Statistics, Mathematics, or another quantitative field.
- Self-starter with the ability to take initiative and work independently in a small team environment.

Preferred:
- Experience working in Google Cloud Platform.
- Experience with entity resolution and data matching techniques.
- Knowledge of ML model deployment in any of the cloud services is appreciated.
- Hands-on experience with prompt engineering and other GenAI or LLM-based applications such as RAG.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 306445

Posted 3 weeks ago

Apply