3.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Sr. Analyst - Marketing Measurement & Optimization

Job Description:

Qualifications:
- Bachelor's degree in Statistics, Mathematics, Computer Science, Engineering, or a related field.
- 3-4 years of proven experience in a similar role.
- Strong analytical and problem-solving skills.
- Excellent communication and presentation skills.

Skills:
- Proficiency in R (tidyverse, lme4/lmerTest, plotly/ggplot2) or Python for data manipulation, modelling, and visualization, and SQL (joins, aggregation, analytic functions) for data handling.
- Ability to handle and analyse marketing data and perform statistical tests.
- Experience with data visualization tools such as Tableau, PowerPoint, and Excel.
- Strong storytelling skills and the ability to generate insights and recommendations.

Responsibilities:
- Understand business requirements and suggest appropriate marketing measurement solutions (Media Mix Modelling, Multi-Touch Attribution, etc.).
- Conduct panel data analysis using fixed effects, random effects, and mixed effects models.
- Perform econometric modelling, including model evaluation, model selection, and results interpretation.
- Understand, execute, and evaluate the data science modelling flow.
- Understand marketing, its objectives, and effectiveness measures such as ROI/ROAS.
- Familiarity with marketing channels, performance metrics, and the conversion funnel.
- Experience with media mix modelling, ad-stock effect, saturation effect, multi-touch attribution, rule-based attribution, and media mix optimization (a minimal sketch follows this listing).
- Knowledge of Bayes' theorem, Shapley value, Markov chains, response curves, marginal ROI, halo effect, and cannibalization.
- Experience handling marketing data and performing data QA and manipulation tasks such as joins/merge, aggregation and segregation, and append.

Location: DGS India - Pune - Baner M- Agile
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
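The ad-stock and saturation effects named in this listing are plain transformations of a media spend series. Below is a minimal Python sketch, assuming a geometric carry-over and a Hill-type saturation curve; the spend figures, decay rate, and half-saturation point are all hypothetical and would normally be estimated as part of the media mix model fit.

```python
import numpy as np

def geometric_adstock(spend, decay=0.5):
    """Carry a fraction of past spend forward into each period."""
    adstocked = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstocked[t] = carry
    return adstocked

def hill_saturation(x, half_sat=100.0, shape=1.0):
    """Diminishing returns: response approaches 1 as spend grows."""
    return x**shape / (x**shape + half_sat**shape)

spend = np.array([120, 80, 0, 0, 150, 60], dtype=float)  # hypothetical weekly spend
transformed = hill_saturation(geometric_adstock(spend, decay=0.6))
print(transformed.round(3))
```

The transformed series, not raw spend, is what enters the regression, which is how lagged effects and diminishing returns show up in the fitted response curves.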
Posted 2 weeks ago
40.0 years
3 - 9 Lacs
Hyderābād
On-site
India - Hyderabad
JOB ID: R-208863
LOCATION: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Feb. 28, 2025
CATEGORY: Information Systems

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description: In this role, you will design, build and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable and high-performance data engineering solutions for large scientific datasets and collaborate with Research stakeholders. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates strong technical skills, is proficient with big data technologies, and has a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global cross-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master’s degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Bachelor’s degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field

Preferred Qualifications and Experience:
- 3+ years of experience in implementing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing (a minimal sketch follows this listing)
- Excellent problem-solving skills and the ability to work with large, complex datasets
Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Strong experience using RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g. Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro, Lucidchart or similar tools for process mapping and brainstorming
- Experience writing and maintaining technical documentation in Confluence
- Understanding of data governance frameworks, tools, and best practices

Professional Certifications: Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

For a career that defies imagination: objects in your future are closer than they appear. Join us at careers.amgen.com.

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
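As one illustration of the PySpark skills this listing asks for, here is a minimal sketch of a batch ETL step: read raw scientific readings, apply basic QA, aggregate, and write a curated, partitioned dataset. The bucket paths, column names, and schema are all hypothetical.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("assay-etl").getOrCreate()

# Hypothetical input: raw assay readings landed as Parquet.
raw = spark.read.parquet("s3://example-bucket/raw/assay_readings/")

# Basic QA: drop records missing key fields, then aggregate per experiment.
clean = raw.dropna(subset=["experiment_id", "compound_id"])
summary = (
    clean.groupBy("experiment_id", "compound_id")
         .agg(F.avg("response").alias("mean_response"),
              F.count("*").alias("n_readings"))
)

# A partitioned write keeps downstream queries fast.
(summary.write.mode("overwrite")
        .partitionBy("experiment_id")
        .parquet("s3://example-bucket/curated/assay_summary/"))
```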
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India’s top 1% Data Scientists for a unique job opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Data Scientists with expertise in predictive modeling, statistical analysis, and A/B testing. If you have experience in this field, then this is your chance to collaborate with industry leaders.

What’s in it for you?
- Pay above market standards
- The role is contract-based, with project timelines from 2-12 months, or freelancing
- Be a part of an elite community of professionals who can solve complex AI challenges
- Work location could be:
  - Remote (highly likely)
  - Onsite at the client location
  - Deccan AI’s office: Hyderabad or Bangalore

Responsibilities:
- Lead the design, development, and deployment of scalable data science solutions, optimizing large-scale data pipelines in collaboration with engineering teams.
- Architect advanced machine learning models (deep learning, RL, ensemble).
- Apply statistical analysis, predictive modeling, and optimization techniques to derive actionable business insights.
- Own the full lifecycle of data science projects, from data acquisition, preprocessing, and exploratory data analysis (EDA) to model development, deployment, and monitoring.
- Implement MLOps workflows (model training, deployment, versioning, monitoring) and conduct A/B testing to validate models (a minimal sketch follows this listing).

Required Skills:
- Expert in Python, data science libraries (Pandas, NumPy, scikit-learn), and R, with extensive experience in machine learning (XGBoost, PyTorch, TensorFlow) and statistical modeling.
- Proficient in building scalable data pipelines (Apache Spark, Dask) and cloud platforms (AWS, GCP, Azure).
- Expertise in MLOps (Docker, Kubernetes, MLflow, CI/CD), along with strong data visualization skills (Tableau, Plotly Dash) and business acumen.

Nice to Have:
- Experience with NLP, computer vision, recommendation systems, or real-time data processing (Kafka, Flink).
- Knowledge of data privacy regulations (GDPR, CCPA) and ethical AI practices.
- Contributions to open-source projects or published research.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify in all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching and project allocation: be patient while we align your skills and preferences with the available projects.

Skip the noise. Focus on opportunities built for you!
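Since the listing calls out A/B testing to validate models, here is a minimal sketch of a two-proportion z-test on conversion counts using statsmodels; the counts and the 5% significance threshold are hypothetical.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: conversions and sample sizes for control vs. variant.
conversions = [412, 475]
samples = [10000, 10000]

stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: conversion rates differ between the two arms.")
```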
Posted 2 weeks ago
5.0 years
0 Lacs
India
On-site
Responsibilities
- Build and maintain responsive, high-performance frontends for our AI-driven applications
- Work closely with backend and AI/ML engineers to integrate APIs, multi-agent systems, and real-time data sources
- Design intuitive interfaces that make multi-agent systems accessible and user-friendly
- Develop reusable components and design systems to support rapid iteration and future scalability
- Optimize frontend performance, ensuring fast load times and smooth interactions
- Work collaboratively to refine product architecture, user flows, and research-based features
- Prototype, test, and refine user experiences with quick feedback cycles
- Implement best practices for accessibility, cross-browser compatibility, and security

Requirements
- 5+ years of experience in frontend development
- Expertise in JavaScript/TypeScript and React (or similar modern frameworks)
- Proven experience integrating complex APIs and data-driven UIs
- Strong understanding of UX/UI principles and building intuitive, engaging interfaces
- Familiarity with real-time data, dynamic research workflows, and interactive data visualizations
- Ability to work in a fast-paced, iterative environment with shifting priorities
- Self-starter mindset with a strong product sense and passion for delightful user experiences

Bonus Points
- Experience working at early-stage startups or as part of a technical founding team
- Exposure to AI/ML-driven applications, agent-based workflows, or knowledge graphs
- Familiarity with data visualization libraries (e.g., D3.js, Plotly, Visx)
- Experience building dashboards, admin tools, or research-focused interfaces (a small sketch follows this listing)
- Previous experience working with design systems or contributing to UX strategy
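The bonus points mention Plotly and dashboard building. To keep every sketch in this document in one language, here is a minimal Python-based Plotly Dash dashboard rather than a JavaScript one; the layout is hypothetical and the sample data is the Gapminder table bundled with Plotly.

```python
from dash import Dash, dcc, html
import plotly.express as px

# Bundled sample dataset standing in for real application data.
df = px.data.gapminder().query("year == 2007")
fig = px.scatter(df, x="gdpPercap", y="lifeExp", size="pop",
                 color="continent", log_x=True)

app = Dash(__name__)
app.layout = html.Div([
    html.H2("Hypothetical research dashboard"),
    dcc.Graph(figure=fig),
])

if __name__ == "__main__":
    app.run(debug=True)  # use app.run_server on older Dash versions
```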
Posted 2 weeks ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Who We Are
The next step of your career starts here, where you can bring your own unique mix of skills and perspectives to a fast-growing team. Metyis is a global and forward-thinking firm operating across a wide range of industries, developing and delivering AI & Data, Digital Commerce, Marketing & Design solutions and Advisory services. At Metyis, our long-term partnership model brings long-lasting impact and growth to our business partners and clients through extensive execution capabilities. With our team, you can experience a collaborative environment with highly skilled multidisciplinary experts, where everyone has room to build bigger and bolder ideas. Being part of Metyis means you can speak your mind and be creative with your knowledge. Imagine the things you can achieve with a team that encourages you to be the best version of yourself. We are Metyis. Partners for Impact.

What We Offer
- Interact with C-level at our clients on a regular basis to drive their business towards impactful change
- Lead your team in creating new business solutions
- Seize opportunities at the client and at Metyis in our entrepreneurial environment
- Become part of a fast-growing, international and diverse team

What You Will Do
- Lead and manage the delivery of complex data science projects, ensuring quality and timelines.
- Engage with clients and business stakeholders to understand business challenges and translate them into analytical solutions.
- Design solution architectures and guide the technical approach across projects.
- Align technical deliverables with business goals, ensuring data products create measurable business value.
- Communicate insights clearly through presentations, visualizations, and storytelling for both technical and non-technical audiences.
- Promote best practices in coding, model validation, documentation, and reproducibility across the data science lifecycle (a minimal sketch follows this listing).
- Collaborate with cross-functional teams to ensure smooth integration and deployment of solutions.
- Drive experimentation and innovation in AI/ML techniques, including newer fields such as Generative AI.

What You’ll Bring
- 6+ years of experience in delivering full-lifecycle data science projects.
- Proven ability to lead cross-functional teams and manage client interactions independently.
- Strong business understanding with the ability to connect data science outputs to strategic business outcomes.
- Experience with stakeholder management, translating business questions into data science solutions.
- Track record of mentoring junior team members and creating a collaborative learning environment.
- Familiarity with data productization and ML systems in production, including pipelines, monitoring, and scalability.
- Experience managing project roadmaps, resourcing, and client communication.

Tools & Technologies:
- Strong hands-on experience in Python/R and SQL.
- Good understanding of and experience with cloud platforms such as Azure, AWS, or GCP.
- Experience with data visualization tools in Python, such as Seaborn and Plotly.
- Good understanding of Git concepts.
- Good experience with data manipulation tools in Python, such as Pandas and NumPy.
- Must have worked with scikit-learn, NLTK, spaCy, and transformers.
- Experience with dashboarding tools such as Power BI and Tableau to create interactive and insightful visualizations.
- Proficient in using deployment and containerization tools like Docker and Kubernetes for building and managing scalable applications.

Core Competencies:
- Strong foundation in machine learning algorithms, predictive modeling, and statistical analysis.
- Good understanding of deep learning concepts, especially in NLP and Computer Vision applications.
- Proficiency in time-series forecasting and business analytics for functions like marketing, sales, operations, and CRM.
- Exposure to tools like MLflow, model deployment, API integration, and CI/CD pipelines.
- Hands-on experience with MLOps and model governance best practices in production environments.
- Experience in developing optimization and recommendation system solutions to enhance decision-making, user personalization, and operational efficiency across business functions.

Good to have:
- Generative AI experience with text and image data.
- Familiarity with LLM frameworks such as LangChain and hubs like Hugging Face.
- Exposure to vector databases (e.g., FAISS, Pinecone, Weaviate) for semantic search or retrieval-augmented generation (RAG).

In a changing world, diversity and inclusion are core values for team well-being and performance. At Metyis, we want to welcome and retain all talents, regardless of gender, age, origin or sexual orientation, and irrespective of whether or not they are living with a disability, as each of them has their own experience and identity.
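One concrete way to honor the model-validation and reproducibility practices this role promotes is to wrap preprocessing and the estimator in a single scikit-learn Pipeline and score it with cross-validation, so no preprocessing statistics leak across folds. A minimal sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Bundling preprocessing and the model prevents train/test leakage
# and makes the whole fit reproducible from one object.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier(random_state=42)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```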
Posted 2 weeks ago
0 years
2 - 9 Lacs
Chennai
On-site
Requirements:
- Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with the use of GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with the use of DL theory/practices in NLP applications
- Comfort level coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and Pandas
- Comfort level using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment (a minimal sketch follows this listing)
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
- Good working knowledge of other open-source packages to benchmark and derive summaries
- Experience in using GPU/CPU of cloud and on-prem infrastructures
- Skillset to leverage cloud platforms for Data Engineering, Big Data and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience in CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs that comply with ethical, secure and safe AI requirements
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Education:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Maths or Science
- Completion of modern NLP/LLM courses or participation in open competitions is also welcomed

Responsibilities:
- Design NLP/LLM/GenAI applications/products following robust coding practices
- Explore state-of-the-art (SoTA) models and techniques so that they can be applied to automotive industry use cases
- Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodalities
- Develop agentic workflows using AutoGen, Agent Builder, LangGraph
- Build modular AI/ML products that can be consumed at scale

Data Engineering:
- Skillsets to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inferencing through Spark, Dask, RAPIDS or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django for APIs)
- Experience in Elasticsearch, Apache Solr and vector databases is a plus
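For the transformer path this listing describes (fine-tuning through inference), the Hugging Face pipeline API is the shortest route from a fine-tuned checkpoint to predictions. A minimal sketch; the checkpoint shown is a public sentiment model standing in for whatever fine-tuned sequence classifier a project would actually produce, and the example sentences are invented.

```python
from transformers import pipeline

# Any fine-tuned sequence-classification checkpoint on the Hugging Face Hub
# (or a local directory) can be dropped in here.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

texts = [
    "The brake response feels sluggish after the latest update.",
    "Cabin noise levels are impressively low at highway speeds.",
]
for result in classifier(texts):
    print(result["label"], round(result["score"], 3))
```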
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Comfort level in following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with the use of GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with the use of DL theory/practices in NLP applications
- Comfort level coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and Pandas
- Comfort level using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI
- Good working knowledge of other open-source packages to benchmark and derive summaries
- Experience in using GPU/CPU of cloud and on-prem infrastructures
- Skillset to leverage cloud platforms for Data Engineering, Big Data and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience in CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs that comply with ethical, secure and safe AI requirements
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- Deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Responsibilities
- Design NLP/LLM/GenAI applications/products following robust coding practices
- Explore state-of-the-art (SoTA) models and techniques so that they can be applied to automotive industry use cases
- Conduct ML experiments to train/infer models; if need be, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodalities
- Develop agentic workflows using AutoGen, Agent Builder, LangGraph
- Build modular AI/ML products that can be consumed at scale

Data Engineering:
- Skillsets to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inferencing through Spark, Dask, RAPIDS or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django for APIs; a minimal sketch follows this listing)
- Experience in Elasticsearch, Apache Solr and vector databases is a plus

Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
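The Data Engineering section above asks for Python-based APIs via FastAPI/Flask/Django. Here is a minimal FastAPI sketch of a prediction endpoint; the route, schemas, and placeholder scoring rule are hypothetical, and a real service would load and call the trained model instead.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="nlp-inference")  # hypothetical service name

class TextIn(BaseModel):
    text: str

class Prediction(BaseModel):
    label: str
    score: float

@app.post("/predict", response_model=Prediction)
def predict(req: TextIn) -> Prediction:
    # Placeholder scoring logic; a real service would call the loaded model here.
    label = "POSITIVE" if "good" in req.text.lower() else "NEGATIVE"
    return Prediction(label=label, score=0.5)

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```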
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI’s Data Visualization Engineers like YOU for a unique job opportunity to work with industry leaders.

What’s in it for you?
- Pay above market standards
- The role is contract-based, with project timelines from 2-6 months, or freelancing
- Be a part of an elite community of professionals who can solve complex AI challenges
- Work location could be:
  - Remote
  - Onsite at the client location: US, UAE, UK, India, etc.
  - Deccan AI’s office: Hyderabad or Bangalore

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, along with data democratization by enabling self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency.
- Implement real-time analytics dashboards and interactive reports (a minimal sketch follows this listing).
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community?
We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets. Proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, then this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted. As soon as you qualify in all the screening rounds (assessments, interviews), you will be added to our Expert Community!
4. Profile matching: be patient while we align your skills and preferences with the available projects.
5. Project allocation: you’ll be deployed on your preferred project!

Skip the noise. Focus on opportunities built for you!
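For the interactive-report responsibility flagged above, a small illustration: take the output of an aggregate SQL query into pandas and publish it as a self-contained interactive HTML chart with Plotly. The region/revenue figures below are hypothetical.

```python
import pandas as pd
import plotly.express as px

# Hypothetical aggregates, e.g. the output of a SQL rollup query.
df = pd.DataFrame({
    "region": ["North", "South", "East", "West"],
    "revenue": [1.2, 0.9, 1.5, 1.1],  # in millions, invented figures
})

fig = px.bar(df, x="region", y="revenue",
             title="Revenue by region (hypothetical data)")
fig.write_html("revenue_by_region.html")  # shareable interactive report
```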
Posted 2 weeks ago
8.0 - 13.0 years
40 - 60 Lacs
Chennai
Hybrid
Role & responsibilities
- Strong knowledge of Probability Theory and Statistics, and a deep understanding of the mathematics behind Machine Learning
- Proficiency with CRISP-ML(Q) or TDSP methodologies for addressing commercial problems through data science solutions
- Proficiency in Python for developing machine learning models and conducting statistical analyses
- Strong understanding of data visualization tools and techniques (e.g., Python libraries such as Matplotlib, Seaborn, Plotly) and the ability to present data effectively

Specific technical requirements:
- Proficiency in SQL for data processing, data manipulation, sampling, and reporting
- Experience working with imbalanced datasets and applying appropriate techniques
- Experience with time series data, including preprocessing, feature engineering, and forecasting
- Experience with outlier detection and anomaly detection
- Experience working with various data types: text, image, and video data
- Familiarity with AI/ML cloud implementations (AWS, Azure, GCP) and cloud-based AI/ML services (e.g., Amazon SageMaker, Azure ML)

Domain experience:
- Experience with analyzing medical signals and images
- Expertise in building predictive models for patient outcomes, disease progression, readmissions, and population health risks
- Experience in extracting insights from clinical notes, medical literature, and patient-reported data using NLP and text mining techniques
- Familiarity with survival or time-to-event analysis (a minimal sketch follows this listing)
- Expertise in designing and analyzing data from clinical trials or research studies
- Experience in identifying causal relationships between treatments and outcomes, using methods such as propensity score matching or instrumental variable techniques
- Understanding of healthcare regulations and standards like HIPAA, GDPR (for healthcare data), and FDA regulations for medical devices and AI in healthcare
- Expertise in handling sensitive healthcare data in a secure, compliant way, understanding the complexities of patient consent, de-identification, and data sharing
- Familiarity with decentralized data models such as federated learning to build models without transferring patient data across institutions
- Knowledge of interoperability standards such as HL7, SNOMED, FHIR, or DICOM
- Ability to work with clinicians, researchers, health administrators, and policy makers to understand problems and translate data into actionable healthcare insights

Preferred candidate profile
- Experience with MLOps, including integration of machine learning pipelines into production environments, Docker, and containerization/orchestration (e.g., Kubernetes)
- Experience in deep learning development using TensorFlow or PyTorch libraries
- Experience with Large Language Models (LLMs) and Generative AI applications
- Advanced SQL proficiency, with experience in MS SQL Server or PostgreSQL
- Familiarity with platforms like Databricks and Snowflake for data engineering and analytics
- Experience working with Big Data technologies (e.g., Hadoop, Apache Spark)
- Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j)
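For the survival (time-to-event) analysis mentioned under domain experience, here is a minimal sketch using the lifelines library: fit a Kaplan-Meier estimator on follow-up durations with a censoring flag. The cohort below is hypothetical toy data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical cohort: follow-up time in days and an event flag
# (1 = readmission observed, 0 = censored at end of follow-up).
df = pd.DataFrame({
    "days":  [30, 90, 45, 120, 60, 200, 15, 180],
    "event": [1,  0,  1,  0,   1,  0,   1,  0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["days"], event_observed=df["event"])
print(kmf.survival_function_.head())        # estimated S(t)
print("Median time-to-event:", kmf.median_survival_time_)
```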
Posted 2 weeks ago
4.0 - 6.0 years
10 - 15 Lacs
Chennai
Work from Office
What you'll be doing
- Lead development of advanced machine learning and statistical models
- Design scalable data pipelines using PySpark
- Perform data transformation and exploratory analysis using Pandas, NumPy and SQL
- Build, train and fine-tune machine learning and deep learning models using TensorFlow and PyTorch
- Mentor junior engineers and lead code reviews, best practices and documentation
- Design and implement big data and streaming AI/ML training and prediction pipelines
- Translate complex business problems into data-driven solutions
- Promote best practices in data science and model governance
- Stay ahead of evolving technologies and guide strategic data initiatives

What we're looking for
You'll need to have:
- Bachelor's degree or four or more years of work experience
- Experience in Python, PySpark and SQL
- Strong proficiency in Pandas, NumPy, Excel, Plotly, Matplotlib, Seaborn, ETL, AWS and SageMaker
- Experience with supervised learning models (regression, classification) and unsupervised learning models (anomaly detection, clustering; a minimal sketch follows this listing)
- Extensive experience with AWS analytics services, including Redshift, Glue, Athena, Lambda, and Kinesis
- Knowledge of deep learning: autoencoders, CNN, RNN, LSTM, and hybrid models
- Experience in model evaluation, cross-validation, and hyperparameter tuning
- Familiarity with data visualization tools and techniques

Even better if you have one or more of the following:
- Experience with machine learning and statistical analysis
- Experience in hypothesis testing
- Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders
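For the unsupervised anomaly-detection requirement, here is a minimal scikit-learn sketch using an Isolation Forest on synthetic data; the contamination rate is an assumption you would normally set from domain knowledge.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # bulk of the data
outliers = rng.uniform(low=-6, high=6, size=(10, 3))      # injected anomalies
X = np.vstack([normal, outliers])

# contamination is the assumed share of anomalies in the data.
model = IsolationForest(contamination=0.02, random_state=42).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal
print("Flagged anomalies:", int((labels == -1).sum()))
```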
Posted 2 weeks ago
1.0 years
0 Lacs
Bengaluru, Karnataka
On-site
About us
At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society’s evolving needs. Learn more about our What and our Why and how we can work together.

ExxonMobil’s affiliates in India
ExxonMobil’s affiliates have offices in India in Bengaluru, Mumbai and the National Capital Region. ExxonMobil’s affiliates in India supporting the Product Solutions business engage in the marketing, sales and distribution of performance as well as specialty products across chemicals and lubricants businesses. The India planning teams are also embedded with global business units for business planning and analytics. ExxonMobil’s LNG affiliate in India supporting the upstream business provides consultant services for other ExxonMobil upstream affiliates and conducts LNG market-development activities. The Global Business Center - Technology Center provides a range of technical and business support services for ExxonMobil’s operations around the globe. ExxonMobil strives to make a positive contribution to the communities where we operate and its affiliates support a range of education, health and community-building programs in India. Read more about our Corporate Responsibility Framework. To know more about ExxonMobil in India, visit ExxonMobil India and the Energy Factor India.

What role you will play in our team
We are seeking candidates to tackle challenging problems across ExxonMobil’s global supply chain operations, including sales and operations planning (S&OP), logistics and materials management. The ideal candidate will understand both the commercial and technical aspects of supply chain challenges in the Oil & Gas or Energy products (including Chemicals, Fuels) industry, advancing the development of solutions leveraging advanced modeling, ML or AI. The job location is Bangalore, Karnataka.

What you will do
Support all stages of development for advanced analytics (AI, ML based decision support) solutions, including:

Exploratory data analysis
- Develop optimized queries using SQL, Alteryx or Python to pull large, complex datasets from ERP (e.g., SAP, Oracle) or other supply chain business applications (Kinaxis, Blue Yonder) for analysis
- Leverage analytical tools (Power BI, Tableau, R, Python or Dataiku) to identify patterns and generate data-driven insights or product ideas that can lead to accelerated business decisions supporting the global supply chain capability team

Development and deployment
- Collaborate with Digital Product Managers and technology development teams (including data engineers and data scientists) to develop, test and build scalable products leveraging AI and advanced analytical models
- Work closely with business users (e.g. demand planners, materials management) to gather feedback and ensure the analytical models and solutions are tailored to their operational needs

Change management & adoption
- Develop and execute rollout plans for digital applications, including training and onboarding for key end users
- Develop and track KPIs to monitor the performance of predictive (e.g. forecasting) or prescriptive (simulation, AI-based decision tools) models, and collaborate with data science teams to ensure models are regularly calibrated and validated

About You

Skills and Qualifications
- Bachelor’s or Master’s degree in operations research or any engineering/science/supply chain discipline from a recognized university with GPA 7.0
- Minimum 3 years of experience in a business/data analyst or related role primarily supporting global supply chain operations, with familiarity with supply chain concepts like demand planning, inventory optimization, S&OP, logistics and network design
- At least 1 year of experience with analytical tools/advanced modeling such as Python or R, where you have transformed complex sets of data to generate business insights through data analysis, pattern identification or statistical modeling
- Familiarity with software product development frameworks like Agile
- At least 1 year of hands-on experience in developing visualizations or analyses using analytics tools like Power BI, QuickSight or Tableau
- At least 1 year of experience working in a supply chain function: planning, logistics, materials management
- Proven track record of developing and implementing data-driven analyses to improve operational efficiency
- Strong understanding and hands-on experience with analytics and data visualization tools (e.g., Tableau, Power BI, Plotly)
- Proficiency in programming languages such as SQL, Python or R for data wrangling and modeling

Preferred knowledge
- Exposure to forecasting models, optimization or any supply chain design tools (e.g. Llamasoft); a minimal forecasting sketch follows this listing
- Exposure to machine learning and AI (NLP, LLMs) solutions used in the supply chain industry for predictive and prescriptive analytics
- Experience contributing to the development of commercial-grade software applications for supply chain use cases
- Familiarity with software testing and development practices (Agile)
- Ability to demonstrate initiative, teamwork, accuracy, effectiveness, and self-confidence
- Strong problem-solving skills and the ability to work independently and as part of a team
- Excellent communication skills, both written and verbal

Your benefits
An ExxonMobil career is one designed to last. Our commitment to you runs deep: our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you:
- Competitive compensation
- Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits
- Retirement benefits
- Global networking & cross-functional opportunities
- Annual vacations & holidays
- Day care assistance program
- Training and development program
- Tuition assistance program
- Workplace flexibility policy
- Relocation program
- Transportation facility

Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company’s eligibility guidelines.

Stay connected with us
Learn more about ExxonMobil in India, visit ExxonMobil India and Energy Factor India. Follow us on LinkedIn and Instagram. Like us on Facebook. Subscribe to our channel on YouTube.

EEO Statement
ExxonMobil is an Equal Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status. Business solicitation and recruiting scams ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil. Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.
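For the forecasting exposure listed under preferred knowledge, a minimal sketch of Holt-Winters exponential smoothing with statsmodels on a hypothetical monthly demand series; the additive trend/seasonality settings and the 12-month season are assumptions a real project would test.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly demand for one SKU.
demand = pd.Series(
    [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118,
     115, 126, 141, 135, 125, 149, 170, 170, 158, 133, 114, 140],
    index=pd.date_range("2023-01-01", periods=24, freq="MS"),
)

# Additive trend and seasonality with a 12-month cycle.
model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()
print(model.forecast(6).round(1))  # next six months
```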
Posted 2 weeks ago
1.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #165135 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our dependable household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the CBS Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape.

What role will you play as a member of Colgate's Analytics team?
The CBS Analytics vertical in Colgate-Palmolive is passionate about working on business cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead CBS Analytics projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Efficiency, Forecasting, etc.
The Data Scientist needs to manage relationships independently with the business and drive projects such as Price Promotion, Marketing Mix and Forecasting (a minimal price-elasticity sketch follows this listing).

Who Are You…

You are a function expert -
- Leads Analytics projects within the Analytics Continuum
- Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases
- Applies ML and AI to analytics algorithms to build inferential and predictive models, allowing for scalable solutions to be deployed across the business
- Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built

You connect the dots -
- Drive insights from internal and external data for the business
- Assemble large, sophisticated data sets that meet functional/non-functional business requirements
- Build data and visualization tools for business analytics to assist in decision making

You are a collaborator -
- Work closely with Division Analytics team leads
- Work with data and analytics specialists across functions to drive data solutions

You are an innovator -
- Identify, design, and implement new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability

Qualifications

What you’ll need
- Graduation/Masters in Statistics, Applied Mathematics or Computer Science
- 1+ years of experience in building data models and driving insights
- Hands-on experience developing statistical models such as regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, etc.
- Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
- Knowledge of using GitHub and Airflow for coding and model executions
- Experience leading, refining and developing statistical models for RGM/Pricing and/or Marketing Efficiency and communicating insights decks to the business
- Proven understanding of tools like Tableau, Domo, Power BI and web-app frameworks using plotly, pydash, sql
- Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment

What You’ll Need…(Preferred)
- Experience with third-party data, i.e., syndicated market data, point of sales, etc.
- Proven understanding of the consumer packaged goods industry
- Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Experience visualizing/presenting data for partners using Tableau, DOMO, pydash, plotly, d3.js, ggplot2, R Shiny, etc.
- Willingness and ability to experiment with new tools and techniques
- Good facilitation and project management skills
- Ability to maintain personal composure and thoughtfully handle difficult situations
- Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way.
Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
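Price/promo work like the RGM projects named above often starts from a log-log regression, where the coefficient on log price is the own-price elasticity. A minimal statsmodels sketch on simulated weekly data (all figures hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated weekly scanner data: units sold respond to shelf price
# with a true elasticity of about -1.8.
rng = np.random.default_rng(7)
price = rng.uniform(2.0, 4.0, size=104)
units = np.exp(8.0 - 1.8 * np.log(price) + rng.normal(0, 0.1, size=104))
df = pd.DataFrame({"units": units, "price": price})

# In a log-log model the price coefficient is the elasticity estimate.
model = smf.ols("np.log(units) ~ np.log(price)", data=df).fit()
print(model.params)  # elasticity should come out near -1.8 here
```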
Posted 2 weeks ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
Lead AI Engineer

Skills - Mandatory: Python, Data Science (AI/ML), SQL
Skills - Primary: Python, Data Science (AI/ML), SQL
Skills - Good to have: Python, Data Science (AI/ML), SQL
Qualification: Bachelor’s Degree
Total Experience: 5+ years
Relevant Experience: 5+ years
Work Location: Cochin/TVM/Remote

Candidates from Kerala and Tamil Nadu are preferred. Total and relevant experience of 5+ years is a must.

Job Purpose
Responsible for consulting with the client to understand their AI/ML and analytics needs, and for delivering AI/ML applications to the client.

Job Description / Duties & Responsibilities
▪ Design intelligent data science solutions that deliver incremental value to the end stakeholders
▪ Work closely with the data engineering team in identifying relevant data and pre-processing the data to suit the models
▪ Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible

Job Specification / Skills and Competencies
▪ Masters/Bachelor’s in Computer Science, Statistics or Economics
▪ At least 4 years of experience working in the Data Science field, with a passion for numbers and quantitative problems
▪ Deep understanding of Machine Learning models and algorithms
▪ Experience in analysing complex business problems, translating them into data science problems and modelling data science solutions for the same
▪ Understanding of and experience in one or more of the following Machine Learning algorithms: Regression, Time Series, Logistic Regression, Naive Bayes, kNN, SVM, Decision Trees, Random Forest, k-Means Clustering, etc.
▪ NLP, Text Mining, LLMs (GPTs)
▪ Deep Learning and Reinforcement Learning algorithms
▪ Understanding of and experience in one or more machine learning frameworks: TensorFlow, Caffe, Torch, etc.
▪ Understanding of and experience in building machine learning models using various packages in Python
▪ Knowledge of and experience with SQL, relational databases, NoSQL databases and data warehouse concepts
▪ Understanding of AWS/Azure cloud architecture
▪ Understanding of the deployment architectures of AI/ML models (Flask, Azure Functions, AWS Lambda)
▪ Knowledge of any BI and visualization tools is an add-on (Tableau/Power BI/Qlik/Plotly, etc.)
▪ To adhere to the Information Security Management policies and procedures

Soft Skills Required
▪ Must be a good team player with good communication skills
▪ Must have good presentation skills
▪ Must be a proactive problem solver and a self-driven leader
▪ Manage and nurture a team of data scientists
▪ Desire for numbers and patterns

Interested candidates, please drop your resume to: gigin.raj@greenbayit.com
MOB NO - 8943011666
Posted 2 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #166100 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our dependable household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the GLOBAL ANALYTICS team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this ever-changing landscape.

What role will you play as a member of Colgate's Analytics team?
The GLOBAL DATA SCIENCE & ADVANCED ANALYTICS vertical in Colgate-Palmolive is focused on working on business cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead GLOBAL DATA SCIENCE & ADVANCED ANALYTICS projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Effectiveness, Forecasting, etc.
The Data Scientist needs to manage relationships independently with the business and drive projects such as Price Promotion, Marketing Mix and Forecasting.

Who Are You…

You are a function expert -
- Leads GLOBAL DATA SCIENCE & ADVANCED ANALYTICS projects within the Analytics Continuum
- Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases
- Applies ML and AI to analytics algorithms to build inferential and predictive models, allowing for scalable solutions to be deployed across the business
- Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built
- Deploys models using Airflow and Docker on Google Cloud Platform (a minimal Airflow DAG sketch follows this listing)

You connect the dots -
- Merge multiple data sources and build statistical models / machine learning models in Price and Promo Elasticity Modeling and Marketing Mix Modeling to derive actionable business insights and recommendations
- Assemble large, sophisticated data sets that meet functional/non-functional business requirements
- Build data and visualization tools for business analytics to assist in decision making

You are a collaborator -
- Work closely with Division Analytics team leads
- Work with data and analytics specialists across functions to drive data solutions

You are an innovator -
- Identify, design, and implement new algorithms and process improvements while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability

Qualifications

What you’ll need
- BE/BTech (Computer Science or Information Technology preferred), MBA or PGDM in Business Analytics/Data Science, additional DS certifications or courses, or MSc/MStat in Economics or Statistics
- 3+ years of experience in building data models and driving insights
- Hands-on experience developing statistical models such as linear regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, Bayesian regression, etc.
- Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
- Strong understanding of cloud frameworks (Google Cloud, Snowflake) and services like Kubernetes, Cloud Build, Cloud Run
- Knowledge of using GitHub and Airflow for coding, model executions and model deployment on cloud platforms
- Working knowledge of tools like Looker, Domo, Power BI and web-app frameworks using plotly, pydash, sql
- Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment

What You’ll Need…(Preferred)
- Experience managing, transforming, and developing statistical models for RGM/Pricing and/or Marketing Effectiveness
- Experience with third-party data, i.e., syndicated market data, point of sales, etc.
- Working knowledge of the consumer packaged goods industry
- Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Experience visualizing/presenting data for partners using Looker, DOMO, pydash, plotly, d3.js, ggplot2, streamlit, etc.
- Willingness and ability to experiment with new tools and techniques
- Ability to maintain personal composure and thoughtfully handle difficult situations
- Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way.

Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
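The deployment bullet above names Airflow and Docker on Google Cloud Platform. As one small illustration of the Airflow side, here is a minimal DAG that schedules a weekly retraining task; the DAG id, schedule, and callable are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def retrain_model():
    # Placeholder: load features, refit the pricing model, save artifacts.
    print("retraining marketing-mix / pricing model...")

with DAG(
    dag_id="mmm_weekly_retrain",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",               # schedule_interval on older Airflow
    catchup=False,
) as dag:
    retrain = PythonOperator(task_id="retrain", python_callable=retrain_model)
```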
Posted 3 weeks ago
1.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #165136 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our dependable household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by its values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the CBS Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape.

What role will you play as a member of Colgate's Analytics team?
The CBS Analytics vertical in Colgate-Palmolive is passionate about working on business cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead CBS Analytics projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Efficiency, Forecasting, etc.
The Data Scientist needs to manage relationships with the business independently and drive projects such as Price Promotion, Marketing Mix, and Forecasting (a hedged modelling sketch follows this posting).

Who Are You…
You are a function expert -
Leads Analytics projects within the Analytics Continuum
Conceptualizes and builds predictive modeling, simulation, and optimization solutions to address business questions or use cases
Applies ML and AI algorithms to build inferential and predictive models, allowing scalable solutions to be deployed across the business
Conducts model validation and continuous improvement of the algorithms, capabilities, or solutions built

You connect the dots -
Drive insights from internal and external data for the business
Assemble large, sophisticated data sets that meet functional and non-functional business requirements
Build data and visualization tools for business analytics to assist decision making

You are a collaborator -
Work closely with Division Analytics team leads
Work with data and analytics specialists across functions to drive data solutions

You are an innovator -
Identify, design, and implement new algorithms and process improvements, while continuously automating processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Qualifications
What you'll need
Graduation/Masters in Statistics, Applied Mathematics, or Computer Science
1+ years of experience in building data models and driving insights
Hands-on experience developing statistical models such as regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, etc.
Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
Knowledge of GitHub and Airflow for code management and model execution
Experience handling, refining, and developing statistical models for RGM/Pricing and/or Marketing Efficiency, and communicating insight decks to the business
Proven understanding of tools like Tableau, Domo, and Power BI, and of web-app frameworks using Plotly Dash and SQL
Experience in client-facing roles, supporting and working with multi-functional teams in a dynamic environment

What You'll Need… (Preferred)
Experience with third-party data, i.e., syndicated market data, Point of Sales, etc.
Proven understanding of the consumer packaged goods industry
Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
Experience visualizing/communicating data for partners using Tableau, DOMO, Plotly Dash, d3.js, ggplot2, R Shiny, etc.
Willingness and ability to experiment with new tools and techniques
Good facilitation and project management skills
Ability to maintain personal composure and thoughtfully handle difficult situations
Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way.
Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
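For context on the regularized-regression skills this posting names (ridge, lasso), here is a minimal sketch of a cross-validated comparison in scikit-learn. The data is synthetic and the feature meanings are hypothetical; this is an illustration of the technique, not Colgate's actual modelling pipeline.

```python
# Hedged sketch: comparing ridge and lasso baselines for a price-promotion
# response model. Synthetic data stands in for real sales records.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))  # e.g., price index, promo depth, display share (all invented)
y = 2.0 - 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)

for name, model in [("ridge", Ridge(alpha=1.0)), ("lasso", Lasso(alpha=0.1))]:
    pipe = make_pipeline(StandardScaler(), model)  # scale features before penalized fit
    scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```

In real price-promotion work the alphas would be tuned (e.g., RidgeCV/LassoCV) rather than fixed as here.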
Posted 3 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Mission Statement
This role is responsible for leveraging data to drive business insights and decisions across various domains. As a Data Scientist, you will analyze large datasets to extract actionable insights, build predictive models, and support data-driven decision-making processes. You will work closely with cross-functional teams including engineering, product management, and business stakeholders to understand their data needs and deliver solutions that drive business value. You must possess strong analytical and problem-solving skills and be proficient in working both independently and in teams.

Your Responsibilities
Analysing large datasets to identify trends, patterns, and insights that can inform business decisions
Building and validating predictive models using statistical and machine learning techniques (a hedged sketch follows this posting)
Creating clear and compelling visualizations to communicate findings to stakeholders
Working closely with engineering, product management, and business teams to understand their data needs and deliver actionable insights
Ensuring data quality and integrity by implementing best practices in data collection, processing, and storage
Developing and maintaining dashboards and reports to track key metrics and performance indicators
Staying up-to-date with the latest industry trends and technologies to continuously improve data science practices and methodologies
Living Hitachi Energy's core values of safety and integrity, which means taking responsibility for your own actions while caring for your colleagues and the business

Your Background
Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field
10+ years of experience in data science or a related field
Relevant certifications in data science or machine learning are an added advantage
Proficiency in programming languages such as Python or R
Experience with data analysis and visualization tools such as SQL, Tableau, or Power BI
Strong analytical and problem-solving skills, with the ability to interpret complex data and provide actionable insights
Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and present findings to stakeholders
Ability to manage multiple projects simultaneously and deliver results within deadlines
Experience with machine learning frameworks such as TensorFlow, Keras, or Scikit-learn
Familiarity with big data technologies such as Hadoop, Spark, or AWS
Proficiency in data manipulation libraries such as Pandas and NumPy
Experience with data visualization libraries such as Matplotlib, Seaborn, or Plotly
Knowledge of natural language processing (NLP) libraries such as NLTK or SpaCy
Understanding of deep learning frameworks such as PyTorch or MXNet
Experience with cloud platforms such as AWS, Google Cloud, or Azure
Knowledge of version control systems such as Git
Familiarity with containerization technologies such as Docker and Kubernetes
Strong attention to detail and organizational skills
Ability to articulate and present ideas to senior management
Problem-solving mindset with the ability to work independently and as part of a team
Eagerness to learn and enhance knowledge unassisted
Strong networking skills and global orientation
Ability to coach and mentor team members
Effective collaboration with internal and external stakeholders
Adaptability to manage and lead transformational projects
Proficiency in both spoken and written English is required

Location: Bengaluru, Karnataka, India
Job type: Full time
Experience: Experienced
Job function: Engineering & Science
Contract: Regular
Publication date: 2025-05-26
Reference number: R0069604
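As a hedged illustration of the "build and validate predictive models" responsibility above, a minimal scikit-learn train/evaluate loop. The dataset is synthetic; nothing here reflects Hitachi Energy's systems or data.

```python
# Hedged sketch: train a classifier and validate it on a held-out test set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # precision/recall/F1 per class
```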
Posted 3 weeks ago
7.0 - 12.0 years
22 - 27 Lacs
Hyderabad
Work from Office
Key Responsibilities
Data Pipeline Development: Design, develop, and optimize robust data pipelines to efficiently collect, process, and store large-scale datasets for AI/ML applications.
ETL Processes: Develop and maintain Extract, Transform, and Load (ETL) processes to ensure accurate and timely data delivery for machine learning models (a short PySpark ETL sketch follows this posting).
Data Integration: Integrate diverse data sources (structured, unstructured, and semi-structured) into a unified and scalable data architecture.
Data Warehousing & Management: Design and manage data warehouses to store processed and raw data in a highly structured, accessible format for analytics and AI/ML models.
AI/ML Model Development: Collaborate with Data Scientists to build, fine-tune, and deploy machine learning models into production environments, with a focus on model optimization, scalability, and operationalization.
Automation: Implement automation techniques to support model retraining, monitoring, and reporting.
Cloud & Distributed Systems: Work with cloud platforms (AWS, Azure, GCP) and distributed systems to store and process data efficiently, ensuring that AI/ML models are scalable and maintainable in the cloud environment.
Data Quality & Governance: Implement data quality checks, monitoring, and governance frameworks to ensure the integrity and security of the data used for AI/ML models.
Collaboration: Work cross-functionally with Data Science, Business Intelligence, and other engineering teams to meet organizational data needs and ensure seamless integration with analytics platforms.

Required Skills and Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
Strong proficiency in Python for AI/ML and data engineering tasks.
Experience with AI/ML frameworks such as TensorFlow, PyTorch, Scikit-learn, and Keras.
Proficient in SQL and working with relational databases (e.g., MySQL, PostgreSQL, SQL Server).
Strong experience with ETL pipelines and data wrangling on large datasets.
Familiarity with cloud-based data engineering tools and services (e.g., AWS S3, Lambda, Redshift; Azure; GCP).
Solid understanding of big data technologies like Hadoop, Spark, and Kafka for data processing at scale.
Experience managing and processing both structured and unstructured data.
Knowledge of version control systems (e.g., Git) and agile development methodologies.
Experience with containers and orchestration tools such as Docker and Kubernetes.
Strong communication skills to collaborate effectively with cross-functional teams.

Preferred Skills
Experience with data warehouses (e.g., Amazon Redshift, Google BigQuery, Snowflake).
Familiarity with CI/CD pipelines for ML model deployment and automation.
Familiarity with machine learning model monitoring and performance optimization.
Experience with data visualization tools like Tableau, Power BI, or Plotly.
Knowledge of deep learning models and frameworks.
DevOps or MLOps experience for automating model deployment.
Advanced statistics or math background for improving model performance and accuracy.
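The extract-transform-load pattern this posting centers on can be sketched in a few lines of PySpark. The S3 paths and column names below are placeholders assumed for illustration, not a production pipeline.

```python
# Hedged sketch: a tiny extract-transform-load job in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV from a hypothetical bucket.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/events.csv")

# Transform: drop incomplete rows, derive a date, and aggregate.
clean = (
    raw.dropna(subset=["user_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("events"))  # daily event counts per user
)

# Load: write the curated table back as Parquet.
clean.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_events/")
```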
Posted 3 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Requirements
Work with a team to develop advanced analytic techniques to interrogate, visualize, interpret, and contextualize data, and develop novel solutions to healthcare-specific problems
Implement a variety of analytics, from data processing and QA to exploratory analytics and complex predictive models
Understand client/product needs and translate them into tactical initiatives with defined goals and timelines
Implement models using high-level software packages (sklearn, TensorFlow, PySpark, Databricks)
Collaborate on software projects, providing analytical guidance and contributing to the codebase
Devise modeling and measuring techniques, and utilize mathematics, statistical methods, engineering methods, operational mathematics techniques (linear programming, game theory, probability theory, symbolic language, etc.) and other principles and laws of scientific and economic disciplines to investigate complex issues, identify and solve problems, and aid better decision making (a hedged statistics sketch follows this posting)
Plan, design, coordinate, and control the progress of project work to meet client objectives; prepare and present reports to clients
Solve highly specialized technical objectives or problems without a pre-defined approach, where creative, imaginative solutions are required
Synthesize raw data into digestible and actionable information
Identify specific research areas that merit investigation, develop new hypotheses and approaches for studies, and evaluate the feasibility of such endeavors
Initiate, formulate, plan, execute, and control studies designed to identify, analyze, and report on healthcare-related issues
Advise management on the selection of appropriate study designs, analysis, and interpretation of study results

Work Experience
BS/MS in mathematics, physics, statistics, engineering, or a similar discipline; Ph.D. preferred
Minimum of 5 years of analytics/data science experience
Solid experience writing SQL queries
Strong programming abilities in Python (pandas, sklearn, numpy/scipy, pyspark)
Knowledge of statistical methods: regression, ANOVA, EDA, PCA, etc.
Basic visualization skills (matplotlib/seaborn/plotly, etc.); Power BI experience highly preferred
Experience manipulating data sets through commercial and open-source software (e.g., Redshift, Snowflake, Spark, Python, R, Databricks)
Working knowledge of medical claims data (ICD-10 codes, HCPCS, CPT, etc.)
Experience utilizing a range of analytics involving standard data in the pharmaceutical industry, e.g., claims data from Symphony, IQVIA, Truven, Allscripts, etc.
Must be comfortable conversing with end-users
Excellent analytical, verbal, and written communication skills
Ability to thrive in a fast-paced, innovative environment
Advanced Excel skills, including VLOOKUP, pivot tables, charts, graphing, and macros
Excellent documentation skills
Excellent planning, organizational, and time management skills
Ability to lead meetings and give presentations
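As a concrete instance of the statistical methods listed (regression, ANOVA, EDA, PCA), a minimal PCA sketch on standardized synthetic features; no real claims data is used and the feature meanings are assumptions for illustration.

```python
# Hedged sketch: exploratory PCA on standardized synthetic features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 12))  # stand-in for per-patient utilization features

X_std = StandardScaler().fit_transform(X)        # PCA assumes comparable scales
pca = PCA(n_components=5).fit(X_std)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```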
Posted 3 weeks ago
4.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities:
• Develop an in-depth understanding of user journeys and generate data-driven insights and recommendations to help product and customer success teams in meticulous decision-making.
• Define and analyze key product data sets to understand customer and product behavior.
• Work with stakeholders throughout the organization to identify opportunities for leveraging data to find areas of growth, and build strong data-backed business cases around them.
• Perform statistical analysis/modelling on data and uncover hidden data patterns and correlations (a hedged statistical-test sketch follows this posting).
• Perform feature engineering and develop and deploy predictive models/algorithms.
• Coordinate with different teams to implement and deploy AI/ML-driven models.
• Conduct ad-hoc analysis around product areas for growth hacking and produce consumable reports for multiple business stakeholders.
• Develop processes and tools to monitor and analyze model performance and data accuracy.

Technical Skills:
• At least 4-6 years of experience working with real-world data and building statistical models.
• Hands-on experience programming with Python, including working knowledge of packages like Pandas, NumPy, SciPy, Scikit-Learn, Seaborn, Plotly, etc.
• Hands-on knowledge of SQL and Excel.
• Deep understanding of key supervised and unsupervised ML algorithms – should be able to explain what is happening under the hood and their real-world advantages/drawbacks.
• Strong foundation in statistics and probability theory.
• Knowledge of advanced statistical techniques and concepts (properties of distributions, statistical tests, simulations, Markov chains, etc.) and experience with their applications.

Other Skills:
• Preferred domain experience: Gaming, E-Commerce, or any B2C experience.
• Ability to break a problem into smaller chunks and design solutions accordingly.
• Ability to dive deeper into data, ask the right questions, analyze with statistical methods, and generate insights.
• Ability to write modular, clean, and well-documented code along with crisp design documents.
• Strong communication and presentation skills.
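To ground the "statistical tests" requirement, a minimal two-sample Welch t-test on synthetic A/B data; the groups and the effect size are invented for the example.

```python
# Hedged sketch: two-sample Welch t-test on synthetic A/B metric data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=400)  # baseline metric
variant = rng.normal(loc=10.4, scale=2.0, size=400)  # hypothetical +0.4 lift

# equal_var=False gives Welch's test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```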
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
CryptoChakra is an industry-leading cryptocurrency analytics and education platform committed to simplifying digital asset markets for traders, investors, and institutions. By integrating advanced machine learning frameworks, real-time blockchain intelligence, and immersive learning ecosystems, we empower users to decode market volatility with precision. Our platform delivers AI-driven price forecasts, sentiment analysis tools, and smart contract audits, complemented by curated tutorials and risk management frameworks. Focused on predictive modeling, DeFi analytics, and educational excellence, we champion transparency, integrity, and cutting-edge technology to democratize crypto literacy for a global audience. As a remote-first innovator, we bridge the gap between complex blockchain data and actionable financial strategies.

Position: Data Analyst Intern (Digital Assets)
Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability

Role Summary
Join CryptoChakra's analytics team to transform raw blockchain data into strategic insights that power predictive models and educational resources. This role offers hands-on experience in statistical analysis, machine learning, and crypto market research, with mentorship from industry experts.

Key Responsibilities
Data Analysis & Modeling: Process and analyze datasets from exchanges (CoinGecko, Binance) and blockchain explorers (Etherscan) using Python/R and SQL. Conduct statistical evaluations (regression, clustering) to identify trends in trading volumes, wallet activity, and NFT markets (a hedged clustering sketch follows this posting).
Predictive Analytics Support: Assist in refining AI-driven models for price forecasting and DeFi risk assessment. Validate model accuracy against real-time market movements and on-chain metrics.
Insight Communication: Create dashboards (Tableau, Power BI) and reports to translate findings into actionable strategies for traders and educators.
Blockchain Metrics Decoding: Investigate smart contract interactions, gas fees, and liquidity pool dynamics to support educational content development.

Qualifications
Technical Skills
Proficiency in Python/R for data manipulation (Pandas, NumPy) and basic machine learning (Scikit-learn).
Strong understanding of statistics (hypothesis testing, probability distributions) and SQL/NoSQL databases.
Familiarity with data visualization tools (Tableau, Plotly) and blockchain datasets (Etherscan, Dune Analytics) is a plus.
Professional Competencies
Analytical rigor to derive insights from unstructured data.
Ability to articulate technical results to cross-functional teams in a remote setting.
Self-driven, with adaptability to Agile workflows and collaboration tools (Slack, Jira).
Preferred (Not Required)
Academic projects involving crypto market analysis, time-series forecasting, or NLP.
Exposure to DeFi protocols (Uniswap, Aave) or cloud platforms (AWS, GCP).
Pursuing or holding a degree in Data Science, Statistics, Computer Science, or related fields.

What We Offer
Skill Development: Master tools like TensorFlow, SQL, and blockchain analytics platforms.
Portfolio Impact: Contribute to models and tutorials used by CryptoChakra users globally.
Flexibility: Remote work with mentorship tailored to your learning goals.
Certification & Recognition: LinkedIn endorsement and completion certificate for standout performers.
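A minimal sketch of the clustering evaluation named above, applied to synthetic wallet-activity features; the columns are assumptions for illustration and no exchange or explorer API is called.

```python
# Hedged sketch: K-means clustering of synthetic wallet-activity features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical columns: daily tx count, average tx value, days active.
X = np.column_stack([
    rng.poisson(5, 500),
    rng.lognormal(0.0, 1.0, 500),
    rng.integers(1, 365, 500),
]).astype(float)

X_std = StandardScaler().fit_transform(X)  # put features on comparable scales
labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X_std)
print("cluster sizes:", np.bincount(labels))
```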
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
CryptoChakra is a leading cryptocurrency analytics and education platform committed to decoding the complexities of digital asset markets for traders, investors, and institutions. By merging advanced machine learning frameworks, real-time blockchain intelligence, and immersive educational resources, we empower users to navigate market volatility with precision. Our platform leverages Python, TensorFlow, and AWS-powered infrastructure to deliver AI-driven price forecasts, risk assessment tools, and interactive tutorials that transform raw data into actionable strategies. As a remote-first innovator, we prioritize transparency, scalability, and inclusivity to redefine accessibility in decentralized finance.

Position: Data Science Intern
Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability

Role Summary
Join CryptoChakra's data science team to refine predictive models, analyze blockchain trends, and contribute to tools used by thousands globally. This role offers hands-on experience in machine learning, sentiment analysis, and DeFi analytics, with mentorship from industry experts.

Key Responsibilities
Predictive Modeling: Develop and optimize ML algorithms (LSTM, Random Forest) for cryptocurrency price forecasting using historical and real-time blockchain data (a hedged forecasting sketch follows this posting).
Sentiment Analysis: Scrape and analyze social media (Twitter, Reddit) and news data to gauge market sentiment with NLP techniques.
Blockchain Analytics: Decode on-chain metrics (wallet activity, gas fees) from explorers like Etherscan to identify market trends.
Data Pipelines: Clean, preprocess, and structure datasets from exchanges (Binance, CoinGecko) for model training.
Collaboration: Partner with engineers to deploy models into production and with educators to create data-backed tutorials.

Qualifications
Technical Skills
Proficiency in Python/R for data manipulation (Pandas, NumPy) and machine learning (Scikit-learn, TensorFlow).
Strong grasp of statistics (hypothesis testing, regression) and SQL/NoSQL databases.
Familiarity with data visualization tools (Tableau, Plotly) and cloud platforms (AWS, GCP).
Professional Competencies
Analytical rigor to derive insights from unstructured datasets.
Ability to communicate technical findings to non-technical stakeholders.
Self-driven, with adaptability to remote collaboration tools (Slack, Zoom).
Preferred (Not Required)
Academic projects involving time-series forecasting, clustering, or NLP.
Exposure to blockchain fundamentals, DeFi protocols, or crypto APIs.
Pursuing or holding a degree in Data Science, Computer Science, or related fields.

What We Offer
Skill Development: Master tools like PyTorch, Spark, and blockchain analytics platforms.
Portfolio Impact: Contribute to models powering CryptoChakra's predictions, used by 1M+ users.
Flexibility: Remote-first culture with mentorship tailored to your learning pace.
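To illustrate the forecasting responsibility, a hedged sketch that predicts next-day returns from lagged returns with a random forest. The price series is a synthetic random walk; real work would use exchange data and proper walk-forward validation.

```python
# Hedged sketch: forecasting next-day returns from lagged returns.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.02, 600))))  # synthetic walk
returns = prices.pct_change()

# Feature engineering: five lagged returns predict the next day's return.
df = pd.DataFrame({f"lag_{k}": returns.shift(k) for k in range(1, 6)})
df["target"] = returns.shift(-1)
df = df.dropna()

split = int(len(df) * 0.8)  # time-ordered split, no shuffling
model = RandomForestRegressor(n_estimators=200, random_state=3)
model.fit(df.iloc[:split].drop(columns="target"), df.iloc[:split]["target"])
preds = model.predict(df.iloc[split:].drop(columns="target"))
print("predicted next-day returns (first 5):", np.round(preds[:5], 4))
```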
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing
Lead development of advanced machine learning and statistical models
Design scalable data pipelines using PySpark
Perform data transformation and exploratory analysis using Pandas, NumPy, and SQL
Build, train, and fine-tune machine learning and deep learning models using TensorFlow and PyTorch
Mentor junior engineers and lead code reviews, best practices, and documentation
Design and implement big data, streaming AI/ML training and prediction pipelines
Translate complex business problems into data-driven solutions
Promote best practices in data science and model governance
Stay ahead of evolving technologies and guide strategic data initiatives

What We're Looking For
You'll Need To Have:
Bachelor's degree or four or more years of work experience
Experience in Python, PySpark, and SQL
Strong proficiency in Pandas, NumPy, Excel, Plotly, Matplotlib, Seaborn, ETL, AWS, and SageMaker
Experience with supervised learning models (regression, classification) and unsupervised learning models (anomaly detection, clustering)
Extensive experience with AWS analytics services, including Redshift, Glue, Athena, Lambda, and Kinesis
Knowledge of deep learning: autoencoders, CNNs, RNNs, LSTMs, hybrid models
Experience in model evaluation, cross-validation, and hyperparameter tuning (a hedged tuning sketch follows this posting)
Familiarity with data visualization tools and techniques

Even better if you have one or more of the following:
Experience with machine learning and statistical analysis
Experience in hypothesis testing
Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders

If our company and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.
#TPDRNONCDIO

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.

Locations: Chennai, India; Bangalore, India
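As a worked instance of the model evaluation, cross-validation, and hyperparameter-tuning skills listed above, a small cross-validated grid search; the data is synthetic and the parameter grid is illustrative only.

```python
# Hedged sketch: hyperparameter tuning with cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=800, n_features=15, random_state=5)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # regularization strengths to try
    cv=5,                                      # 5-fold cross-validation per candidate
    scoring="roc_auc",
)
grid.fit(X, y)
print("best C:", grid.best_params_["C"], "| CV AUC:", round(grid.best_score_, 3))
```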
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
About YipitData:
YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our teams use state-of-the-art technologies to identify, license, clean, and analyze data that many of the world's largest investment funds and corporations depend on.

For three years and counting, we have been recognized as one of Inc's Best Workplaces. We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle, Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency.

Why You Should Apply NOW:
As a Senior Data Platform Engineer, you will join at a pivotal time at the company to deploy mission-critical data pipelines and internal data tools that power our fastest-growing products.
You are excited to augment the pipelines you develop by building interactive tools that analyze data outputs and provide step-level improvements to data refinement and accuracy.
You will scale data operations using a cutting-edge stack that leverages Gen AI, Spark, Databricks, and other leading data technologies.
You are eager to optimize and automate the maintenance of our sophisticated data platform and provide technical guidance to platform users across the company.

About The Role:
We are hiring a Senior Data Engineer (internal, official title: Senior Data Platform Engineer) to report to the Chief Architect at YipitData. This newly established function is responsible for enabling data analysts and data entry teams to leverage innovative datasets and applications that power our fastest-growing analytical products.

This is a unique, hybrid role that combines data engineering with some full-stack development, encompassing data modeling, ETL, and application development using lightweight Python frameworks such as Plotly Dash (a hedged Dash sketch follows this posting). As the pipelines and tools you will oversee are mission-critical, ongoing support, uptime maintenance, and onboarding new data team members are essential to making a strong impact in this role.

You will work with sophisticated data pipelines and modern web applications that harness our petabyte-scale data lake. In addition, you will play a key role in our newest product, Signals, which delivers insights on software companies. Signals is growing rapidly, and this high-visibility position will contribute meaningfully by enabling the data operations behind the product.

You will also be a core member of the Platform Engineering team, responsible for administering our Data & ML platform, supporting global teams during business hours, and researching and implementing process improvements and automation to reduce the team's support burden by leveraging the latest in data and GenAI technologies.

As Our Senior Data Platform Engineer, You Will:
Own and maintain core data pipelines that power strategic internal and external analytics products.
Build lightweight data applications and tools on top of these pipelines using Python to streamline data refinement, transformation, and processing workflows.
Drive reliability, efficiency, and performance improvements across the data platform.
Diagnose and resolve technical issues in data applications and platform services, including web application performance, optimizing SQL, Pandas, and PySpark queries, and interacting with REST APIs.
Partner with analysts, product teams, and engineering stakeholders to understand data requirements and translate them into scalable solutions.
Identify and implement process improvements to streamline support workflows, reduce repetitive tasks, and improve application and data platform efficiency.
Work with a modern tech stack that includes Python, Databricks (Spark), Pandas, Plotly Dash, AWS, GitHub, and Docker.

This is a fully-remote opportunity based in India. During onboarding, we expect working hours to be 2pm-11pm IST to allow for overlap with other teammates. Afterwards, standard work hours are from 11am to 8pm IST.

You Are Likely To Succeed If You:
Have 5+ years of proven experience in data engineering, particularly in systems with high uptime requirements.
Thrive in a fast-paced, evolving environment and can efficiently navigate multiple technologies and production services to troubleshoot and resolve issues.
Are eager to learn basic application development using Python frameworks and Databricks to automate analytical and data entry workflows.
Possess strong communication skills, responsiveness, attention to detail, a team-oriented mindset, and the ability to collaborate effectively with both technical and non-technical stakeholders.
Show a track record of excellent problem-solving and debugging abilities, maintaining reliable codebases, and architecting efficient data processes.
Are proficient in Python, Spark, Docker, AWS, and database technologies. (Experience with Pandas, Plotly Dash, Databricks, or REST APIs is a plus but not required.)

What We Offer:
Our compensation package includes comprehensive benefits, perks, and a competitive salary.
We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more!
Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust.

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer.

Job Applicant Privacy Notice
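Since the role builds lightweight internal tools with Plotly Dash, here is a minimal, hypothetical Dash app with one callback; it is a generic sketch of the framework, not YipitData's internal tooling.

```python
# Hedged sketch: a tiny Plotly Dash app serving one interactive chart.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

# Invented data standing in for a pipeline's output table.
df = pd.DataFrame({"day": range(10), "metric": [3, 4, 4, 5, 7, 6, 8, 9, 9, 11]})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(id="kind", options=["line", "bar"], value="line"),
    dcc.Graph(id="chart"),
])

@app.callback(Output("chart", "figure"), Input("kind", "value"))
def update(kind):
    # Re-render the figure whenever the dropdown selection changes.
    if kind == "line":
        return px.line(df, x="day", y="metric")
    return px.bar(df, x="day", y="metric")

if __name__ == "__main__":
    app.run(debug=True)
```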
Posted 3 weeks ago