6.0 years
5 - 15 Lacs
Jodhpur Char Rasta, Ahmedabad, Gujarat
On-site
Role: Lead Python/AI Developer
Experience: 6+ Years
Location: Ahmedabad (Gujarat)

Roles and Responsibilities:
- Helping the Python/AI team build Python/AI solution architectures leveraging open-source technologies.
- Driving technical discussions with clients together with Project Managers.
- Creating effort-estimation matrices for solutions/deliverables for the delivery team.
- Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling.
- Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance.
- Mentoring and coaching junior Python/AI/ML engineers.
- Sharing knowledge through technical presentations.
- Implementing new Python/AI features to high coding standards.

Must-Have:
- B.Tech/B.E. in Computer Science, IT, Data Science, ML, or a related field.
- Strong proficiency in the Python programming language.
- Strong verbal and written communication skills, with analytical and problem-solving ability.
- Proficiency in debugging and exception handling.
- Professional experience developing and operating AI systems in production.
- Hands-on, strong programming skills in Python, in particular modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, spaCy, Facebook AI XLM/mBERT, etc.).
- Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker.
- Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (incl. automated testing), CI/CD.
- Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, Pandas.
- Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB).
- Experience working in Agile methodology.

Good-to-Have:
- A Master's degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field.
- Python frameworks (Django/Flask/FastAPI) & API integration.
- AI/ML/DL/MLOps certification from AWS.
- Experience with the OpenAI API.
- Proficiency in the Japanese language.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Benefits: Provident Fund
Work Location: In person
Expected Start Date: 14/08/2025
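The pre-processing, feature-engineering, and model-deployment duties listed above can be illustrated with a minimal scikit-learn pipeline. This is a generic sketch on toy data, assuming the usual scikit-learn API, not code from the employer:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Toy dataset: 200 samples, 4 features; the target depends on the first two
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Chain pre-processing and the model so both are deployed as one artifact
pipe = Pipeline([
    ("scale", StandardScaler()),    # feature scaling (pre-processing step)
    ("clf", LogisticRegression()),  # the model itself
])
pipe.fit(X, y)
print(pipe.score(X, y))  # training accuracy
```

Packaging the scaler and model together is what makes the pipeline safe to ship: inference applies exactly the transformations seen at training time.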
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Hello, hope you are doing well. We are hiring for the role of Sr. AI/ML/Data Scientist. Please reply with your updated resume at astha.jaiswal@techsciresearch.com if you are interested. Details below:

Job Location: Noida
Job Role: Sr. AI/ML/Data Scientist

Overview:
We are seeking a highly skilled and experienced AI/ML expert to spearhead the design, development, and deployment of advanced artificial intelligence and machine learning models. This role requires a strategic thinker with a deep understanding of ML algorithms, model optimization, and production-level AI systems. You will guide cross-functional teams, mentor junior data scientists, and help shape the AI roadmap to drive innovation and business impact. The role involves statistical analysis, data modeling, and interpreting large data sets. We are looking for an AI/ML expert with experience creating AI models for data-intelligence companies, specializing in developing and deploying artificial intelligence and machine learning solutions tailored for data-driven businesses. This expert should possess a strong background in data analysis, statistics, programming, and machine learning algorithms, enabling them to design innovative AI models that extract valuable insights and patterns from vast amounts of data.

About the Role:
You will lead the design and development of a cutting-edge application powered by large language models (LLMs). The tool will provide market analysis and generate high-quality, data-driven periodic insights. You will play a critical role in building a scalable, intelligent system that integrates structured data, NLP capabilities, and domain-specific knowledge to produce analyst-grade content.

Key Responsibilities:
- Design and develop LLM-based systems for automated market analysis.
- Build data pipelines to ingest, clean, and structure data from multiple sources (e.g., market feeds, news articles, technical reports, internal datasets).
- Fine-tune or prompt-engineer LLMs (e.g., GPT-4.5, Llama, Mistral) to generate concise, insightful reports.
- Collaborate closely with domain experts to integrate industry-specific context and validation into model outputs.
- Implement robust evaluation metrics and monitoring systems to ensure the quality, relevance, and accuracy of generated insights.
- Develop and maintain APIs and/or user interfaces so that analysts or clients can interact with the LLM system.
- Stay up to date with advancements in the GenAI ecosystem and recommend relevant improvements or integrations.
- Participate in code reviews, experimentation pipelines, and collaborative research discussions.

Qualifications

Required:
- Strong fundamentals in machine learning, deep learning, and natural language processing (NLP).
- Proficiency in Python, with hands-on experience using libraries such as NumPy, Pandas, and Matplotlib/Seaborn for data analysis and visualization.
- Experience developing applications using LLMs (both closed- and open-source models).
- Familiarity with frameworks like Hugging Face Transformers, LangChain, LlamaIndex, etc.
- Experience building ML models (e.g., Random Forest, XGBoost, LightGBM, SVMs), along with familiarity with training and validating models.
- Practical understanding of deep learning frameworks: TensorFlow or PyTorch.
- Knowledge of prompt engineering, Retrieval-Augmented Generation (RAG), and LLM evaluation strategies.
- Experience working with REST APIs, data ingestion pipelines, and automation workflows.
- Strong analytical thinking, problem-solving skills, and the ability to convert complex technical work into business-relevant insights.

Preferred:
- Familiarity with the chemical or energy industry, or prior experience in market research/analyst workflows.
- Exposure to frameworks such as OpenAI Agentic SDK, CrewAI, AutoGen, SmolAgent, etc.
- Experience deploying ML/LLM solutions to production environments (Docker, CI/CD).
- Hands-on experience with vector databases such as FAISS, Weaviate, Pinecone, or ChromaDB.
- Experience with dashboarding tools and visualization libraries (e.g., Streamlit, Plotly, Dash, or Tableau).
- Exposure to cloud platforms (AWS, GCP, or Azure), including use of GPU instances and model-hosting services.

About ChemAnalyst:
ChemAnalyst is a digital platform that keeps a real-time eye on fluctuations in the chemicals and petrochemicals market, enabling its customers to make wise business decisions. With over 450 chemical products traded globally, we bring detailed market information and pricing data to your fingertips. Our real-time pricing and commentary updates keep users acquainted with new commercial opportunities. Each day, we flash the major happenings around the globe in our news section. Our market analysis section takes it a step further, offering an in-depth evaluation of over 15 parameters including capacity, production, supply, demand gap, and company share. Our team of experts analyses the factors influencing the market and forecasts market data for up to the next 10 years. We are a trusted source of information for our international clients, ensuring user-friendly and customized deliveries on time.
Website: https://www.chemanalyst.com/
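The RAG workflow named in the qualifications boils down to retrieving the most relevant context before prompting the LLM. A toy sketch of that retrieval step follows; the two-dimensional "embeddings" and snippet texts are invented stand-ins for a real embedding model and a vector store such as FAISS or ChromaDB:

```python
import numpy as np

# Hypothetical corpus snippets and their (made-up) embedding vectors
docs = ["ethylene price rose", "polymer demand fell", "crude supply steady"]
emb = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
query = np.array([0.8, 0.2])  # embedding of the analyst's question

# Cosine similarity between the query and every document vector
sims = emb @ query / (np.linalg.norm(emb, axis=1) * np.linalg.norm(query))
top = int(np.argmax(sims))
print(docs[top])  # most relevant snippet to place in the LLM prompt
```

A production system would replace the toy vectors with model-generated embeddings and an approximate-nearest-neighbour index, but the ranking logic is the same.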
Posted 1 week ago
9.0 years
0 Lacs
Kochi, Kerala, India
Remote
Position: AI Architect (PERMANENT only)
Experience: 9+ years (8 years relevant is a must)
Budget: Up to ₹40-45 LPA
Notice Period: Immediate to 45 days
Key Skills: Python, Data Science (AI/ML), SQL
Location: TVM/Kochi/Remote

Job Purpose:
Responsible for consulting with the client to understand their AI/ML and analytics needs, and for delivering AI/ML applications to the client.

Job Description / Duties & Responsibilities:
▪ Work closely with internal BUs and business partners (clients) to understand their business problems and translate them into data science problems.
▪ Design intelligent data science solutions that deliver incremental value to the end stakeholders.
▪ Work closely with the data engineering team to identify relevant data and pre-process it into a form suitable for modelling.
▪ Develop the designed solutions into statistical machine learning and AI models using suitable tools and frameworks.
▪ Work closely with the business intelligence team to build BI systems and visualizations that deliver the insights of the underlying data science model in the most intuitive ways possible.
▪ Work closely with the application team to deliver AI/ML solutions as microservices.

Job Specification / Skills and Competencies:
▪ Master's/Bachelor's in Computer Science, Statistics, or Economics.
▪ At least 6 years of experience working in the data science field, with a passion for numbers and quantitative problems.
▪ Deep understanding of machine learning models and algorithms.
▪ Experience in analysing complex business problems, translating them into data science problems, and modelling data science solutions for them.
▪ Understanding of and experience in one or more of the following machine learning algorithms: Regression, Time Series,
▪ Logistic Regression, Naive Bayes, kNN, SVM, Decision Trees, Random Forest, k-Means Clustering, etc.
▪ NLP, Text Mining, LLMs (GPTs).
▪ Deep learning and reinforcement learning algorithms.
▪ Understanding of and experience in one or more of the machine learning frameworks: TensorFlow, Caffe, Torch, etc.
▪ Understanding of and experience building machine learning models using various packages in one or more of the programming languages: Python / R.
▪ Knowledge of and experience with SQL, relational databases, NoSQL databases, and data-warehouse concepts.
▪ Understanding of AWS/Azure cloud architecture.
▪ Understanding of the deployment architectures of AI/ML models (Flask, Azure Functions, AWS Lambda).
▪ Knowledge of any BI and visualization tool is an add-on (Tableau/Power BI/Qlik/Plotly, etc.).
▪ Adherence to the Information Security Management policies and procedures.

Soft Skills Required:
▪ Must be a good team player with good communication skills.
▪ Must have good presentation skills.
▪ Must be a proactive problem solver and a self-driven leader.
▪ Able to manage and nurture a team of data scientists.
▪ An appetite for numbers and patterns.
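Of the algorithms the listing names, k-nearest neighbours is compact enough to sketch from scratch. The toy version below (illustrative only, not the client's code) classifies a point by majority vote among its k closest training points:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify point x by majority vote among its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest
    return np.bincount(nearest).argmax()          # majority vote

# Two well-separated toy clusters with labels 0 and 1
X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([5.5, 5.5])))  # → 1
```

In practice one of the listed frameworks (scikit-learn's `KNeighborsClassifier`, for instance) replaces this loop, but the vote-among-neighbours idea is identical.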
Posted 1 week ago
1.0 - 9.0 years
0 - 2 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
- Develop, train, and evaluate machine learning models using Python, scikit-learn, and related libraries.
- Design and build robust data pipelines and workflows leveraging Pandas, SQL, and Kedro.
- Create clear, reproducible analyses and reports in Jupyter Notebooks.
- Integrate machine learning models and data pipelines into production environments on AWS.
- Work with LangChain to build applications leveraging large language models and natural-language-processing workflows.
- Collaborate closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions.
- Optimize and monitor model performance in production and drive continuous improvement.
- Follow best practices for code quality, version control, and documentation.

Required Skills and Experience:
- 7+ years of professional experience in Data Science, Machine Learning, or a related field.
- Strong proficiency in Python and machine learning frameworks, especially scikit-learn.
- Deep experience with data manipulation and analysis tools such as Pandas and SQL.
- Hands-on experience creating and sharing analyses in Jupyter Notebooks.
- Solid understanding of cloud services, particularly AWS (S3, EC2, Lambda, SageMaker, etc.).
- Experience with Kedro for pipeline development and reproducibility.
- Familiarity with LangChain and building applications leveraging LLMs is a strong plus.
- Ability to communicate complex technical concepts clearly to non-technical audiences.
- Strong problem-solving skills and a collaborative mindset.

Nice to Have:
- Experience with MLOps tools and practices (model monitoring, CI/CD pipelines for ML).
- Exposure to other cloud platforms (GCP, Azure).
- Knowledge of data visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
- Familiarity with modern LLM ecosystems and prompt engineering.

At DXC Technology, we believe strong connections and community are key to our success.
Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.
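The Pandas/Kedro pipeline work described above amounts to chaining small, testable transformation functions. A plain-Python sketch in that spirit follows (Kedro itself wires such nodes through its own Pipeline API; the column names and data here are invented):

```python
import pandas as pd

# Two pipeline "nodes": each takes a DataFrame and returns a new one
def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with a missing amount."""
    return df.dropna(subset=["amount"])

def aggregate(df: pd.DataFrame) -> pd.DataFrame:
    """Total amount per region."""
    return df.groupby("region", as_index=False)["amount"].sum()

raw = pd.DataFrame({
    "region": ["north", "north", "south", "south"],
    "amount": [10.0, None, 5.0, 7.0],
})
report = aggregate(clean(raw))  # compose the nodes into a pipeline
print(report)
```

Keeping each step a pure function of DataFrames is what makes the pipeline reproducible and unit-testable, which is the point of adopting a framework like Kedro.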
Posted 1 week ago
2.0 - 8.0 years
0 - 2 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
- Develop, train, and evaluate machine learning models using Python, scikit-learn, and related libraries.
- Design and build robust data pipelines and workflows leveraging Pandas, SQL, and Kedro.
- Create clear, reproducible analyses and reports in Jupyter Notebooks.
- Integrate machine learning models and data pipelines into production environments on AWS.
- Work with LangChain to build applications leveraging large language models and natural-language-processing workflows.
- Collaborate closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions.
- Optimize and monitor model performance in production and drive continuous improvement.
- Follow best practices for code quality, version control, and documentation.

Required Skills and Experience:
- 7+ years of professional experience in Data Science, Machine Learning, or a related field.
- Strong proficiency in Python and machine learning frameworks, especially scikit-learn.
- Deep experience with data manipulation and analysis tools such as Pandas and SQL.
- Hands-on experience creating and sharing analyses in Jupyter Notebooks.
- Solid understanding of cloud services, particularly AWS (S3, EC2, Lambda, SageMaker, etc.).
- Experience with Kedro for pipeline development and reproducibility.
- Familiarity with LangChain and building applications leveraging LLMs is a strong plus.
- Ability to communicate complex technical concepts clearly to non-technical audiences.
- Strong problem-solving skills and a collaborative mindset.

Nice to Have:
- Experience with MLOps tools and practices (model monitoring, CI/CD pipelines for ML).
- Exposure to other cloud platforms (GCP, Azure).
- Knowledge of data visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
- Familiarity with modern LLM ecosystems and prompt engineering.

At DXC Technology, we believe strong connections and community are key to our success.
Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.
Posted 1 week ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT

Key Responsibilities:
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web-app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills:
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience with CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPCs, IAM, and TLS/SSL for secure communication.
15. Knowledge of API gateways, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
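Item 5 names the Observer pattern among the expected design patterns; a minimal Python sketch of it follows (the class and event names are hypothetical, chosen only for illustration):

```python
class Publisher:
    """Subject that notifies registered observers of each event."""
    def __init__(self):
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    def publish(self, event):
        for fn in self._subs:  # notify every registered observer
            fn(event)

seen = []
pub = Publisher()
pub.subscribe(seen.append)     # an observer can be any callable
pub.publish("model_deployed")
print(seen)  # → ['model_deployed']
```

Decoupling publishers from subscribers this way is what gives the pattern its reusability: new observers can be added without touching the publishing code.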
Experience Required: Technical Architect with 7-12 years of experience
Salary: 13-17 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Experience: total work: 1 year (Preferred)
Work Location: In person
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models.
- Experience with Hugging Face, Chainlit, and React.
- Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
- Ability to adapt quickly with open-source products & tools to integrate with ML platforms.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem & cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.
- Experience with LLM models like PaLM, GPT-4, and Mistral (open-source models).
- Work through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring.
- Developing and maintaining AI pipelines with multiple modalities such as text, image, and audio.
- Have implemented real-world chatbots or conversational agents at scale, handling different data sources.
- Experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion and InstructPix2Pix.
- Expertise in handling large-scale structured and unstructured data.
- Have efficiently handled large-scale generative AI datasets and outputs.
- Familiarity with Docker tools and pipenv/conda/poetry environments.
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
- High familiarity with the use of DL theory/practices in NLP applications.
- Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas.
- Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others.
- Knowledge of fundamental text data processing (e.g., use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
- Have implemented real-world BERT or other fine-tuned transformer models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment.
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI.
- Good working knowledge of other open-source packages to benchmark and derive summaries.
- Experience using GPUs/CPUs on cloud and on-prem infrastructure.
- Skill set to leverage cloud platforms for data engineering, big data, and ML needs.
- Use of Docker (experience with experimental Docker features, docker-compose, etc.).
- Familiarity with orchestration tools such as Airflow and Kubeflow.
- Experience with CI/CD and infrastructure-as-code tools like Terraform.
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tools.
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- A deeper understanding of JavaScript, CSS, Angular, HTML, etc., is a plus.
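The "fundamental text data processing" bullet (regex, token/word analysis) can be illustrated with the standard library alone; the sentence below is an invented example:

```python
import re
from collections import Counter

# Toy corpus line; a real pipeline would stream documents instead
text = "Prices rose 4% in Q2; prices may rise again in Q3."

# Lowercase word tokens: letters, digits, and the percent sign
tokens = re.findall(r"[a-z0-9%]+", text.lower())
freq = Counter(tokens)          # word-frequency analysis
print(freq.most_common(2))      # the two most frequent tokens
```

Simple frequency counts like this are often the first diagnostic step before heavier NLP tooling (spaCy tokenization, transformer models) is brought in.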
Responsibilities
- Design NLP/LLM/GenAI applications/products following robust coding practices.
- Explore state-of-the-art models/techniques and apply them to automotive-industry use cases.
- Conduct ML experiments to train/infer models; where needed, build models that abide by memory & latency restrictions.
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
- Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.).
- Converge multiple bots into super apps using LLMs with multiple modalities.
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph.
- Build modular AI/ML products that can be consumed at scale.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths, or Science.
Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
Posted 1 week ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary:
We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent, AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints such as machine availability, workforce planning, shift cycles, material flow, and due dates.

Experience: 1 Year

Responsibilities:
1. AI-Based Scheduling Algorithm Development
- Develop and refine scheduling models using: Constraint Programming, Mixed Integer Programming (MIP), metaheuristic algorithms (e.g., Genetic Algorithm, Ant Colony, Simulated Annealing), and Reinforcement Learning or Deep Q-Learning.
- Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models.
- Create simulation environments to test scheduling models under different scenarios.
2. Data Exploration & Feature Engineering
- Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources.
- Build pipelines for data preprocessing, normalization, and handling missing values.
- Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.
3. Model Validation & Deployment
- Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes.
- Deploy solutions using APIs, dashboards (Streamlit, Dash), or integration with existing production systems.
- Support ongoing maintenance, updates, and performance tuning of deployed models.
4. Collaboration & Stakeholder Engagement
- Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results.
- Document solution approaches and model assumptions, and provide technical training to stakeholders.

Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent.
- Minimum 1 year of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, and optimization techniques or industrial scheduling.
- Proficiency in Python, especially with pandas, numpy, and scikit-learn; ortools, pulp, cvxpy, or other optimization libraries; and matplotlib and plotly for visualization.
- Solid understanding of production planning & control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering).
- Familiarity with version control (Git), Jupyter/VS Code environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills:
- Experience with time-series analysis, sensor data, or anomaly detection.
- Experience with manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data.
- Experience with simulation tools (SimPy, Arena, FlexSim) or digital-twin technologies.
- Exposure to containerization (Docker) and model deployment (FastAPI, Flask).
- Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma.

Soft Skills:
- Strong problem-solving mindset with the ability to balance technical depth and business context.
- Excellent communication and storytelling skills to convey insights to both technical and non-technical stakeholders.
- Eagerness to learn new tools, technologies, and domain knowledge.
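Dispatching rules such as shortest processing time (SPT), part of the production planning & control fundamentals above, can be sketched in a few lines. The job data below is made up, and a real scheduler would use a CP/MIP solver such as OR-Tools rather than a greedy rule:

```python
# Hypothetical single-machine instance: job name -> processing time (hours)
jobs = {"A": 4, "B": 2, "C": 6}

order = sorted(jobs, key=jobs.get)  # SPT rule: run shortest jobs first
t, completion = 0, {}
for j in order:
    t += jobs[j]
    completion[j] = t               # completion time of each job

makespan = t                                   # total time to finish all jobs
avg_flow = sum(completion.values()) / len(jobs)  # mean completion time (KPI)
print(order, makespan, avg_flow)
```

SPT minimizes mean completion time on a single machine, which is why it is a common baseline before constraint-programming models add machine, shift, and due-date constraints.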
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
The Data Visualization Engineer position at Zoetis India Capability Center (ZICC) in Hyderabad offers a unique opportunity to be part of a team that drives transformative advancements in animal healthcare. As a key member of the pharmaceutical R&D team, you will play a crucial role in creating insightful and interactive visualizations to support decision-making in drug discovery, development, and clinical research. Your responsibilities will include designing and developing a variety of visualizations, from interactive dashboards to static visual representations, to summarize key insights from high-throughput screening and clinical trial data. Collaborating closely with cross-functional teams, you will translate complex scientific data into clear visual narratives tailored to technical and non-technical audiences. In this role, you will also be responsible for maintaining and optimizing visualization tools, ensuring alignment with pharmaceutical R&D standards and compliance requirements. Staying updated on the latest trends in visualization technology, you will apply advanced techniques like 3D molecular visualization and predictive modeling visuals to enhance data representation. Working with various stakeholders such as data scientists, bioinformaticians, and clinical researchers, you will integrate, clean, and structure datasets for visualization purposes. Your role will also involve collaborating with Zoetis Tech & Digital teams to ensure seamless integration of IT solutions and alignment with organizational objectives. To excel in this position, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field. Experience in the pharmaceutical or biotech sectors will be a strong advantage. Proficiency in visualization tools such as Tableau, Power BI, and programming languages like Python, R, or JavaScript is essential. 
Additionally, familiarity with data-handling tools, omics and network visualization platforms, and dashboarding tools will be beneficial. Soft skills such as strong storytelling ability, effective communication, collaboration with interdisciplinary teams, and analytical thinking are crucial for success in this role. Travel requirements for this full-time position are minimal, ranging from 0-10%. Join us at Zoetis and be part of our journey to pioneer innovation and drive the future of animal healthcare through impactful data visualization.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
As a Python Developer, you will be responsible for building supervised (GLM, ensemble techniques) and unsupervised (clustering) models using standard industry libraries such as pandas, scikit-learn, and keras. Your expertise in big data technologies like Spark and Dask, as well as databases including SQL and NoSQL, will be essential in this role. You should have significant experience in Python, with a focus on writing unit tests, creating packages, and developing reusable and maintainable code. Your ability to comprehend and articulate modeling techniques, along with visualizing analytical results using tools like matplotlib, seaborn, plotly, D3, and Tableau, will be crucial. Experience with continuous integration and deployment tools like Jenkins, as well as Spark ML pipelines, will be advantageous. We are looking for a self-motivated individual who can collaborate effectively with colleagues and contribute innovative ideas to enhance our projects.

Preferred qualifications include an advanced degree with a strong foundation in the mathematics behind machine learning, including linear algebra and multivariate calculus. Experience in specialist areas such as reinforcement learning, NLP, Bayesian techniques, or generative models will be a plus. You should excel at presenting ideas and analytical findings in a compelling manner that influences decision-making. Demonstrated evidence of implementing analytical solutions in an industry context, and a genuine enthusiasm for leveraging data science to ethically enhance customer-centricity in financial services, are highly desirable qualities for this role.
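The unsupervised (clustering) modelling mentioned above is typified by k-means; a bare-bones NumPy version on two synthetic blobs follows (scikit-learn's `KMeans` would be used in practice, as the posting's library list suggests):

```python
import numpy as np

# Two well-separated synthetic blobs of 20 points each
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

centers = X[[0, 20]].copy()  # seed one centroid from each blob
for _ in range(10):
    # Assign each point to its nearest centroid...
    labels = np.argmin(np.linalg.norm(X[:, None] - centers, axis=2), axis=1)
    # ...then move each centroid to the mean of its assigned points
    centers = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print(centers.round(1))  # recovered cluster centres
```

The assign-then-update loop is the whole algorithm; library implementations add smarter initialization (k-means++) and convergence checks.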
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
About Calfus:
Calfus is a Silicon Valley-headquartered software engineering and platforms company with a vision deeply rooted in the Olympic motto "Citius, Altius, Fortius Communiter". At Calfus, we aim to inspire our team to rise faster, higher, and stronger while fostering a collaborative environment to build software at speed and scale. Our primary focus is on creating engineered digital solutions that drive a positive impact on business outcomes. Upholding principles of #Equity and #Diversity, we strive to create a diverse ecosystem that extends to the broader society. Join us at #Calfus and embark on an extraordinary journey with us!

Position Overview:
As a Data Engineer specializing in BI Analytics & DWH, you will be instrumental in crafting and implementing robust business intelligence solutions that empower our organization to make informed, data-driven decisions. Leveraging your expertise in Power BI, Tableau, and ETL processes, you will be responsible for developing scalable architectures and interactive visualizations. This role necessitates a strategic mindset, strong technical acumen, and effective collaboration with stakeholders at all levels.

Key Responsibilities:
- BI Architecture & DWH Solution Design: Develop and design scalable BI analytical & DWH solutions aligned with business requirements, utilizing tools like Power BI and Tableau.
- Data Integration: Supervise ETL processes through SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Establish and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Employ SQL for crafting intricate queries and stored procedures, and for managing data transformations via joins and cursors.
- Visualization Development: Spearhead the design of interactive dashboards and reports in Power BI and Tableau while adhering to best practices in data visualization.
- Collaboration: Engage closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs. - Performance Optimization: Analyze and optimize BI solutions for enhanced performance, scalability, and reliability. - Data Governance: Implement data quality and governance best practices to ensure accurate reporting and compliance. - Team Leadership: Mentor and guide junior BI developers and analysts to cultivate a culture of continuous learning and improvement. - Azure Databricks: Utilize Azure Databricks for data processing and analytics to seamlessly integrate with existing BI solutions. Qualifications: - Bachelor's degree in computer science, Information Systems, Data Science, or a related field. - 6-12 years of experience in BI architecture and development, with a strong emphasis on Power BI and Tableau. - Proficiency in ETL processes and tools, particularly SSIS. Strong command over SQL Server, encompassing advanced query writing and database management. - Proficient in exploratory data analysis using Python. - Familiarity with the CRISP-DM model. - Ability to work with various data models and databases like Snowflake, Postgres, Redshift, and MongoDB. - Experience with visualization tools such as Power BI, QuickSight, Plotly, and Dash. - Strong programming foundation in Python for data manipulation, analysis, serialization, database interaction, data pipeline and ETL tools, cloud services, and more. - Familiarity with Azure SDK is a plus. - Experience with code quality management, version control, collaboration in data engineering projects, and interaction with REST APIs and web scraping tasks is advantageous. Calfus Inc. is an Equal Opportunity Employer.,
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for an Expert Data Scientist to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role: As an Expert Data Scientist, become a part of a cross-functional development team engineering experiences of tomorrow.

Responsibilities:
- Develop prototype solutions, mathematical models, algorithms, machine learning techniques, and robust analytics to support analytic insights and visualization of complex data sets
- Work on exploratory data analysis so you can navigate a dataset and come out with broad conclusions based on initial appraisals
- Provide optimization recommendations that drive KPIs established by product, marketing, operations, PR teams, and others
- Interact with engineering teams and ensure that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability
- Work directly with business analysts and data engineers to understand and support their use cases
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions
- Drive innovation by exploring new experimentation methods and statistical techniques that could sharpen or speed up our product decision-making processes
- Cross-train other team members on technologies being developed, while also continuously learning new technologies from other team members
- Contribute to Unit activities and community building, participate in conferences, and champion best practices
- Support marketing & sales activities, customer meetings and digital services through direct support for sales opportunities and by providing thought leadership and content creation for the service

Requirements: We know that sometimes, you can't tick every box. We would still love to hear from you if you think you're a good fit!

General technical requirements:
- BSc, MSc, or PhD in Mathematics, Statistics, Computer Science, Engineering, Operations Research, Econometrics, or related fields
- Strong knowledge of Probability Theory, Statistics, and a deep understanding of the Mathematics behind Machine Learning
- Proficiency with CRISP-ML(Q) or TDSP methodologies for addressing commercial problems through data science solutions
- Hands-on experience with various machine learning techniques, including but not limited to: regression, classification, clustering, and dimensionality reduction
- Proficiency in Python for developing machine learning models and conducting statistical analyses
- Strong understanding of data visualization tools and techniques (e.g., Python libraries such as Matplotlib, Seaborn, Plotly, etc.) and the ability to present data effectively

Specific technical requirements:
- Proficiency in SQL for data processing, data manipulation, sampling, and reporting
- Experience working with imbalanced datasets and applying appropriate techniques
- Experience with time series data, including preprocessing, feature engineering, and forecasting
- Experience with outlier detection and anomaly detection
- Experience working with various data types: text, image, and video data
- Familiarity with AI/ML cloud implementations (AWS, Azure, GCP) and cloud-based AI/ML services (e.g., Amazon SageMaker, Azure ML)

Domain experience:
- Experience with analyzing medical signals and images
- Expertise in building predictive models for patient outcomes, disease progression, readmissions, and population health risks
- Experience in extracting insights from clinical notes, medical literature, and patient-reported data using NLP and text mining techniques
- Familiarity with survival or time-to-event analysis
- Expertise in designing and analyzing data from clinical trials or research studies
- Experience in identifying causal relationships between treatments and outcomes, using techniques such as propensity score matching or instrumental variables
- Understanding of healthcare regulations and standards like HIPAA, GDPR (for healthcare data), and FDA regulations for medical devices and AI in healthcare
- Expertise in handling sensitive healthcare data in a secure, compliant way, understanding the complexities of patient consent, de-identification, and data sharing
- Familiarity with decentralized data models such as federated learning to build models without transferring patient data across institutions
- Knowledge of interoperability standards such as HL7, SNOMED, FHIR, or DICOM
- Ability to work with clinicians, researchers, health administrators, and policy makers to understand problems and translate data into actionable healthcare insights

Good to have skills:
- Experience with MLOps, including integration of machine learning pipelines into production environments, Docker, and containerization/orchestration (e.g., Kubernetes)
- Experience in deep learning development using TensorFlow or PyTorch libraries
- Experience with Large Language Models (LLMs) and Generative AI applications
- Advanced SQL proficiency, with experience in MS SQL Server or PostgreSQL
- Familiarity with platforms like Databricks and Snowflake for data engineering and analytics
- Experience working with Big Data technologies (e.g., Hadoop, Apache Spark)
- Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j)

Business-related requirements:
- Proven experience in developing data science solutions that drive measurable business impact, with a strong track record of end-to-end project execution
- Ability to effectively translate business problems into data science problems and create solutions from scratch using machine learning and statistical methods
- Excellent project management and time management skills, with the ability to manage complex, detailed work and effectively communicate progress and results to stakeholders at all levels

Desirable:
- Research experience with peer-reviewed publications
- Recognized achievements in data science competitions, such as Kaggle
- Certifications in cloud-based machine learning services (AWS, Azure, GCP)

What's in it for you?
- Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
- Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
- Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
- Flexibility: hybrid work mode at Chennai or Pune
- Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
- Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
- Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events

About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you!
Submit your application. We can’t wait to see you at Ciklum.
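The outlier and anomaly detection requirement above has a classic plain-Python baseline: flag points whose z-score exceeds a threshold. This is a minimal illustrative sketch, not Ciklum's method; the data and threshold are made up.

```python
import math

def zscore_outliers(values, threshold=3.0):
    """Flag points whose z-score magnitude exceeds `threshold`."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    if std == 0:
        return []  # constant series has no outliers
    return [v for v in values if abs((v - mean) / std) > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 200]   # 200 is an obvious anomaly
print(zscore_outliers(data, threshold=2.0))  # → [200]
```

Note that a single extreme point inflates the standard deviation (masking), which is why robust variants based on the median and MAD are often preferred in practice.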
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Comfort with Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with the use of DL theory/practices in NLP applications
- Comfort coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy and Pandas
- Comfort using two or more open-source NLP modules like SpaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (e.g., use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Real-world experience with BERT or other transformer fine-tuned models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI; good working knowledge of other open-source packages to benchmark and derive summaries
- Experience in using GPU/CPU on cloud and on-prem infrastructure
- Skill set to leverage cloud platforms for Data Engineering, Big Data and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience in CI/CD and infrastructure-as-code tools like Terraform
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Responsibilities
- Design NLP/LLM/GenAI applications/products by following robust coding practices
- Explore state-of-the-art (SoTA) models and techniques so that they can be applied to automotive industry use cases
- Conduct ML experiments to train and infer models; if needed, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimal UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodality
- Develop agentic workflows using AutoGen, Agent Builder, LangGraph
- Build modular AI/ML products that can be consumed at scale
- Data Engineering: skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling and inferencing through Spark, Dask, or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., FastAPI, Flask, or Django)
- Experience with Elasticsearch and Apache Solr is a plus, as are vector databases

Qualifications
Education: Bachelor's or Master's Degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.
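The fundamental text data processing mentioned above (regex, token/word analysis) can be sketched with the standard library alone. A toy example, not this team's pipeline; the sample document is invented.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase, strip punctuation/symbol noise, and split into word tokens."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # replace non-alphanumerics with spaces
    return re.findall(r"[a-z0-9]+", text)       # pull out the remaining tokens

doc = "NLP, NLP & more NLP: tokenise noisy text!!"
tokens = tokenize(doc)
print(Counter(tokens).most_common(1))  # → [('nlp', 3)]
```

In a real system this step would feed a SpaCy or Hugging Face tokenizer; the regex pass is only the noise-reduction baseline the posting alludes to.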
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Min Experience: 6 years
Location: Bengaluru
JobType: full-time

Requirements
We are seeking a highly skilled and experienced Data Scientist - Market Mix Modeling (MMM) to join our analytics team. This role is ideal for someone who thrives on solving complex business problems through data, modeling, and statistical insights. You will be responsible for designing, developing, and deploying robust MMM solutions to help our clients or internal stakeholders optimize marketing spend, improve ROI, and enhance performance forecasting. The ideal candidate will have strong command over statistical modeling, a deep understanding of the marketing ecosystem, and the technical acumen to build both short-term and long-term models using Python and SQL.

Key Responsibilities:
- MMM Model Development: Build and maintain robust short-term and long-term Market Mix Models to analyze the impact of marketing activities across various media channels and drive strategic marketing decisions.
- Data Processing & Exploration: Collect, clean, and analyze large datasets using Python and SQL to ensure the accuracy and quality of the data pipeline and inputs to the models.
- Statistical Modeling & Analysis: Apply advanced statistical techniques (e.g., regression analysis, time series analysis, hierarchical models) to identify key drivers of performance and quantify the effectiveness of marketing investments.
- Budget & Media Optimization: Utilize optimization techniques to simulate various marketing scenarios and recommend optimal media spend allocation strategies across channels, based on business KPIs.
- Model Diagnostics & Reporting: Perform thorough model diagnostics, validate assumptions, and interpret model outputs to generate actionable and meaningful insights. Present findings clearly to both technical and non-technical stakeholders.
- Cross-Functional Collaboration: Work closely with marketing, finance, media planning, and leadership teams to align on goals and KPIs, and ensure model insights are implemented in strategic planning processes.
- Continuous Improvement: Stay current on emerging MMM methodologies, marketing analytics trends, and statistical best practices to continuously evolve the modeling framework and tools.

Key Skills & Requirements:
- 6-8 years of hands-on experience in Market Mix Modeling (MMM).
- Strong expertise in Python for data manipulation, modeling, and visualization.
- Proficiency in SQL for querying and data integration.
- Solid understanding of statistical modeling techniques, including regression, time series, and multicollinearity diagnostics.
- Experience building both short-term and long-term MMM models.
- Practical experience in media mix optimization and marketing budget planning.
- Familiarity with various media channels (TV, digital, print, etc.), campaign planning processes, and how marketing tactics translate to business outcomes.
- Strong communication skills with the ability to translate data findings into strategic recommendations.
- Detail-oriented mindset with excellent problem-solving and analytical skills.

Nice to Have:
- Experience working with marketing teams or in advertising/media agencies.
- Knowledge of machine learning techniques and data visualization libraries like Matplotlib, Seaborn, Plotly, etc.
- Familiarity with BI tools like Tableau or Power BI
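A core MMM building block referenced above is quantifying the carry-over effect of media spend before it enters the regression, commonly done with a geometric adstock transform. A minimal sketch; the decay rate and spend series are hypothetical, and production MMM stacks estimate the decay alongside the regression coefficients.

```python
def adstock(spend, decay=0.5):
    """Geometric adstock: each period's effect is this period's spend
    plus a decayed fraction of the previous period's effect.
        effect[t] = spend[t] + decay * effect[t-1]
    """
    effect, carry = [], 0.0
    for s in spend:
        carry = s + decay * carry
        effect.append(carry)
    return effect

weekly_spend = [100, 0, 0, 50]            # hypothetical media spend
print(adstock(weekly_spend, decay=0.5))   # → [100.0, 50.0, 25.0, 62.5]
```

The adstocked series, rather than raw spend, then becomes the regressor whose coefficient measures channel effectiveness.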
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities

Key Responsibilities
- Design, develop, and optimize Machine Learning & Deep Learning models using Python and libraries such as TensorFlow, PyTorch, and Scikit-learn
- Work with Large Language Models (e.g., GPT, BERT, T5) to solve NLP tasks such as semantic search, summarization, chatbots, conversational agents, and document intelligence
- Lead the development of scalable AI solutions, including data preprocessing, embedding generation, vector search, and prompt orchestration
- Build and manage vector databases and metadata stores to support high-performance semantic retrieval and contextual memory
- Implement caching, queuing, and background processing systems to ensure performance and reliability at scale
- Conduct independent R&D to implement cutting-edge AI methodologies, evaluate open-source innovations, and prototype experimental solutions
- Apply predictive analytics and statistical techniques to mine actionable insights from structured and unstructured data
- Build and maintain robust data pipelines and infrastructure for end-to-end ML model training, testing, and deployment
- Collaborate with cross-functional teams to integrate AI solutions into business processes
- Contribute to the MLOps lifecycle, including model versioning, CI/CD, performance monitoring, retraining strategies, and deployment automation
- Stay updated with the latest developments in AI/ML by reading academic papers and experimenting with novel tools or frameworks

Required Skills & Qualifications
- Proficient in Python, with hands-on experience in key ML libraries: TensorFlow, PyTorch, Scikit-learn, and HuggingFace Transformers
- Strong understanding of machine learning fundamentals, deep learning architectures (CNNs, RNNs, transformers), and statistical modeling
- Practical experience working with and fine-tuning LLMs and foundation models
- Deep understanding of vector search, embeddings, and semantic retrieval techniques
- Expertise in predictive modeling, including regression, classification, time series, clustering, and anomaly detection
- Comfortable working with large-scale datasets using Pandas, NumPy, SciPy, etc.
- Experience with cloud platforms (AWS, GCP, or Azure) for training and deployment is a plus

Preferred Qualifications
- Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related technical discipline
- Experience with MLOps tools and workflows (e.g., Docker, Kubernetes, MLflow, SageMaker, Vertex AI)
- Ability to build and expose APIs for models using FastAPI, Flask, or similar frameworks
- Familiarity with data visualization (Matplotlib, Seaborn) and dashboarding (Plotly) tools or equivalents
- Working knowledge of version control, experiment tracking, and team collaboration

Experience: 8-11 Years

Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): TensorFlow, NLP, PyTorch, Large Language Models (LLM)

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
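The semantic-retrieval requirement above reduces, at its simplest, to nearest-neighbor search over embeddings by cosine similarity. A toy sketch in plain Python: the three-dimensional "embeddings" here are fabricated; real ones come from an encoder model, and a vector database performs this search at scale with approximate indexes.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# toy document "embeddings" (hypothetical values, not model output)
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]  # embedding of the user's question
best = max(index, key=lambda doc: cosine(query, index[doc]))
print(best)  # → refund policy
```

The same scoring underlies retrieval-augmented generation: the top-scoring documents are fed to the LLM as context.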
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
lululemon is an innovative performance apparel company that specializes in providing high-quality products for yoga, running, training, and other athletic activities. Our focus lies in developing technical fabrics and functional designs to create products and experiences that support individuals in their journey of movement, growth, connection, and overall well-being. At lululemon, we attribute our success to our dedication to innovation, our commitment to our people, and the meaningful relationships we establish within the communities we serve. We are dedicated to driving positive change and fostering a healthier and more prosperous future. A key aspect of our mission involves cultivating an environment that is equitable, inclusive, and growth-oriented for all our team members.

Our India Tech Hub is instrumental in enhancing our technological capabilities across various domains such as Product Systems, Merchandising and Planning, Digital Presence, Distribution and Logistics, and Corporate Systems. The team in India collaborates closely with our global team on projects of strategic significance.

Joining the Content team at lululemon offers an exciting opportunity to contribute to a fast-paced environment that is constantly exploring new initiatives to support our rapidly expanding business. We are a team that embraces cutting-edge technology and is driven by a continuous pursuit of improvement. Innovation is at the core of our ethos, and we encourage each other to step out of our comfort zones and embrace new challenges. Professional and personal growth is paramount to us, and we believe in learning from failures to pave the way for a brighter future. We foster an environment where team members can freely share feedback and ideas, promoting ongoing organizational growth. Operating within an agile methodology, we collaborate with product teams across various functions and a core commerce platform team.
At lululemon, we prioritize creating a culture of enjoyment and lightheartedness in our daily work routine. We recognize the strength in unity and celebrate the fact that we are more powerful as a team than as individuals.

Responsibilities:
- Develop Statistical/Machine Learning models/analysis for Merchandising and Planning business problems
- Play a key role in all stages of the Data Science project life cycle
- Collaborate with Product Management and Business teams to gain business understanding and collect requirements
- Identify necessary data sources and automate the collection process
- Conduct pre-processing and exploratory data analysis
- Evaluate and interpret results, presenting findings to stakeholders and business leaders
- Collaborate with engineering and product development teams to deploy models into production systems when applicable

Requirements and Skills:
- Demonstrated experience in delivering technical solutions using Time Series/Machine Learning techniques
- Proficient applied statistical skills, including familiarity with statistical tests, distributions, etc.
- Strong expertise in applied Machine Learning skills such as Time Series, Regression Analysis, Supervised and Unsupervised Learning
- Proficient programming skills in Python and database query languages like SQL, with additional familiarity with Snowflake & Databricks being advantageous
- Experience with time series forecasting techniques like ARIMA, Prophet, DeepAR
- Familiarity with data visualization libraries such as Plotly, and business intelligence tools like Power BI and Tableau
- Excellent communication and presentation abilities
- Previous experience in the Retail industry

Additional Responsibilities:
- Identify valuable data sources and automate collection processes
- Preprocess structured and unstructured data
- Analyze large datasets to identify trends and patterns
- Develop predictive models and machine-learning algorithms
- Employ ensemble modeling techniques
- Utilize data visualization methods to present information effectively
- Propose actionable solutions and strategies to address business challenges
- Collaborate with engineering and product development teams
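For the forecasting techniques listed above (ARIMA, Prophet, DeepAR), the seasonal-naive forecast is the usual baseline such models are benchmarked against: repeat the value from one season earlier. A minimal sketch with made-up quarterly data, not a substitute for the models themselves.

```python
def seasonal_naive(history, season_length, horizon):
    """Forecast each future step with the observation one season earlier.
    A standard benchmark that ARIMA/Prophet-style models must beat."""
    forecast = []
    for h in range(horizon):
        # index into the last full season of observed history
        forecast.append(history[-season_length + (h % season_length)])
    return forecast

quarterly_sales = [120, 90, 150, 200] * 3   # three years of hypothetical data
print(seasonal_naive(quarterly_sales, season_length=4, horizon=4))
# → [120, 90, 150, 200]
```

If a fitted model's holdout error is not better than this baseline's, the extra complexity is not paying for itself.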
Posted 2 weeks ago
0 years
3 - 7 Lacs
Hyderābād
On-site
Our Company
We're Hitachi Vantara, the data foundation trusted by the world's innovators. Our resilient, high-performance data infrastructure means that customers – from banks to theme parks – can focus on achieving the incredible with data. If you've seen the Las Vegas Sphere, you've seen just one example of how we empower businesses to automate, optimize, innovate – and wow their customers. Right now, we're laying the foundation for our next wave of growth. We're looking for people who love being part of a diverse, global team – and who get excited about making a real-world impact with data.

The Role
We're looking for an expert in advanced NetApp L3-level support, CVO ONTAP, and ITIL processes (Change Management, Problem Management), with experience implementing automation. The role requires advanced troubleshooting skills and the ability to optimize NAS systems for high-performance operations.

Office Location: Hyderabad (Work from Office)

Key Responsibilities:
- Lead SQL Development: Write and maintain complex queries, stored procedures, triggers, and functions, and manage views for high-performance data operations.
- Database Optimization: Design and implement efficient indexing and partitioning strategies to improve query speed and system performance.
- Advanced Python Programming: Use Python to build ETL pipelines, automate data workflows, and conduct data wrangling and analysis.
- Data Modeling & Architecture: Develop scalable data models and work closely with engineering to ensure alignment with database design best practices.
- Business Insights: Translate large and complex datasets into clear, actionable insights to support strategic business decisions.
- Cross-functional Collaboration: Partner with product, tech, and business stakeholders to gather requirements and deliver analytics solutions.
- Mentorship & Review: Guide junior analysts and ensure adherence to coding standards, data quality, and best practices.
- Visualization & Reporting: Create dashboards and reports using tools like Power BI, Tableau, or Python-based libraries (e.g., Plotly, Matplotlib).
- Agile & Version Control: Operate in Agile environments with proficiency in Git, JIRA, and continuous integration for data workflows.

About us
We're a global team of innovators. Together, we harness engineering excellence and passion for insight to co-create meaningful solutions to complex challenges. We turn organizations into data-driven leaders that can make a positive impact on their industries and society. If you believe that innovation can inspire the future, this is the place to fulfil your purpose and achieve your potential.

Championing diversity, equity, and inclusion
Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team.

How we look after you
We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic.
Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
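The SQL responsibilities above (complex queries, indexing for performance) can be illustrated end to end with the standard library's sqlite3 module. A toy schema with invented table and column names, not the production database; the index here matters only at scale, but the pattern is the same.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100), ("north", 250), ("south", 75)],
)
# An index on the grouping/filtering column speeds up large aggregations
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")

rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('north', 350.0), ('south', 75.0)]
```

On a server engine such as SQL Server the same logic would live in a stored procedure, with the query planner's use of the index confirmed via the execution plan.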
Posted 2 weeks ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Job Title: Python Backend Developer - FastAPI, PostgreSQL, Pattern Recognition
Location: Remote / Hybrid
Type: Full-time
Experience: 3 to 7 Years
Compensation: USDT only | Based on skill and performance

About Us: We are a cutting-edge fintech startup building an AI-powered trading intelligence platform that integrates technical analysis, machine learning, and real-time data processing. Our systems analyze financial markets using custom algorithms to detect patterns, backtest strategies, and deploy automated insights. We're seeking a skilled Python Backend Developer experienced in FastAPI, PostgreSQL, pattern recognition, and financial data workflows.

Key Responsibilities
- Implement detection systems for chart patterns, candlestick patterns, and technical indicators (e.g., RSI, MACD, EMA)
- Build and scale high-performance REST APIs using FastAPI for real-time analytics and model communication
- Develop semi-automated pipelines to label financial datasets for supervised/unsupervised ML models
- Implement and maintain backtesting engines for trading strategies using Python and custom simulation logic
- Design and maintain efficient PostgreSQL schemas for storing candle data, trades, indicators, and model metadata
- (Optional) Contribute to frontend integration using Next.js/React for analytics dashboards and visualizations

Key Requirements
- Python (3-7 years): Strong programming fundamentals, algorithmic thinking, and deep Python ecosystem knowledge
- FastAPI: Proven experience building scalable REST APIs
- PostgreSQL: Schema design, indexing, complex queries, and performance optimization
- Pattern Recognition: Experience in chart/candlestick detection, TA-Lib, rule-based or ML-based identification systems
- Technical Indicators: Familiarity with RSI, Bollinger Bands, Moving Averages, and other common indicators
- Backtesting Frameworks: Hands-on experience with custom backtesting engines or libraries like Backtrader, PyAlgoTrade
- Data Handling: Proficiency in NumPy, Pandas, and dataset preprocessing/labeling techniques
- Version Control: Git/GitHub - comfortable with collaborative workflows

Bonus Skills
- Experience in building dashboards with Next.js / React
- Familiarity with Docker, Celery, Redis, Plotly, or the TradingView Charting Library
- Previous work with financial datasets or real-world trading systems
- Exposure to AI/ML model training, SHAP/LIME explainability tools, or reinforcement learning strategies

Ideal Candidate
- Passionate about financial markets and algorithmic trading systems
- Thrives in fast-paced, iterative development environments
- Strong debugging, data validation, and model accuracy improvement skills
- Collaborative mindset - able to work closely with quants, frontend developers, and ML engineers

What You'll Get
- Opportunity to work on next-gen fintech systems with real trading applications
- Exposure to advanced AI/ML models and live market environments
- Competitive salary + performance-based bonuses
- Flexible working hours in a remote-first team
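The RSI indicator named in the requirements above has a compact definition: RSI = 100 - 100 / (1 + RS), where RS is the average gain divided by the average loss over a lookback window. A simple-average sketch (Wilder's original RSI uses exponential smoothing instead, and the price series here is hypothetical):

```python
def rsi(closes, period=14):
    """Relative Strength Index using simple averages over the last
    `period` price changes (Wilder's version smooths exponentially)."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0          # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

closes = [10, 11, 12, 11, 13]     # hypothetical closing prices
print(rsi(closes, period=4))      # → 80.0
```

In the role described above this computation would typically run vectorized over Pandas columns or via TA-Lib, but the rule-based logic is the same.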
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
We are looking for a Data Scientist to join our dynamic team dedicated to developing cutting-edge AI-powered search and research tools that are revolutionizing how teams access information and make informed decisions. As a Data Scientist, you will play a crucial role in transforming complex datasets into valuable insights, making an impact at the forefront of productivity and intelligence tool development.

Your responsibilities will include owning and managing the data transformation layer using dbt and SQL, designing scalable data models, maintaining business logic, creating intuitive dashboards and visualizations using modern BI tools, collaborating with various teams to uncover key insights, working with diverse structured and unstructured data sources such as Snowflake and MongoDB, and translating business questions into data-driven recommendations. Additionally, you will support experimentation and A/B testing strategies across teams.

The ideal candidate will have a minimum of 4-8 years of experience in analytics, data engineering, or BI roles, with strong proficiency in SQL, dbt, and Python (pandas, Plotly, etc.). Experience with BI tools, dashboard creation, and working with multiple data sources is essential. Excellent communication skills are a must, as you will collaborate across global teams. Familiarity with Snowflake, MongoDB, Airflow, startup experience, or a background in experimentation is considered a bonus.

Joining our team means being part of a global effort to redefine enterprise search and research with a clear vision and strong leadership. If you are passionate about solving complex data challenges, enjoy working independently, and thrive in a collaborative environment with brilliant minds, this role offers an exciting opportunity for professional growth and innovation.

Location: Abu Dhabi
Experience: 4-8 years
Role Type: Individual Contributor | Reports to Team Lead (Abu Dhabi)
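The A/B testing support mentioned above typically starts with a two-proportion z-test on conversion rates. A minimal stdlib sketch, with invented counts; in practice a library such as statsmodels would also supply the p-value and confidence interval.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic comparing two conversion rates, using the pooled
    proportion for the standard error under the null hypothesis."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# hypothetical experiment: 20% vs 26% conversion on 1,000 users each
z = two_proportion_z(conv_a=200, n_a=1000, conv_b=260, n_b=1000)
print(round(z, 2))  # |z| > 1.96 rejects equality at the 5% level
```

Here the statistic exceeds the 1.96 cutoff, so the observed lift would be judged significant at the 5% level under the usual normal approximation.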
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As a Data Scientist in the Automotive industry, you will be responsible for leveraging your expertise in Computer Science, Data Science, Statistics, or related fields to drive impactful insights and solutions. With a minimum of 6 years of hands-on experience in data science and analytics, you will bring a deep understanding of programming tools such as Python and libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. Your role will involve working with big data tools like Spark, AWS/GCP data pipelines, or similar platforms to extract valuable information from complex datasets. An in-depth understanding of time-series data, signal processing, and machine learning for spatiotemporal datasets will be essential for this position. Experience with connected vehicle data or telemetry from trucks/autonomous systems will be highly advantageous. Additionally, familiarity with vehicle dynamics, CAN data decoding, or driver behavior modeling will be considered a plus. Your proficiency in SQL and data visualization tools such as Tableau, PowerBI, and Plotly will enable you to communicate your findings effectively to stakeholders. This position offers a unique opportunity to apply your skills in a remote setting for a contract duration of 12 months. If you are passionate about data science, have a keen eye for detail, and enjoy tackling complex challenges in the Automotive industry, we invite you to apply for this exciting opportunity.
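To make the time-series and signal-processing requirement concrete, here is a toy causal moving-average filter of the kind used to smooth noisy vehicle telemetry. The speed values are invented, and a production pipeline would use Pandas or Spark rather than plain Python:

```python
from collections import deque

def moving_average(signal, window):
    """Causal moving average: each output uses only current and past samples."""
    buf = deque(maxlen=window)  # sliding window of the most recent samples
    out = []
    for sample in signal:
        buf.append(sample)
        out.append(sum(buf) / len(buf))
    return out

# Hypothetical CAN-derived speed readings; the 90 is a telemetry spike
speed_kmh = [62, 65, 90, 64, 63, 66, 61]
smoothed = moving_average(speed_kmh, window=3)
```

The filter damps the spike (the smoothed peak stays well below the raw 90), which is the basic idea behind pre-processing telemetry before feeding it to driver-behavior or spatiotemporal models.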
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As a Senior Data Scientist in the Global Data Science & Advanced Analytics team at Colgate-Palmolive, your role will involve leading projects within the Analytics Continuum. You will be responsible for conceptualizing and developing machine learning, predictive modeling, simulations, and optimization solutions to address business questions with clear dollar objectives. Your work will have a significant impact on revenue growth management, price elasticity, promotion analytics, and marketing mix modeling. Your responsibilities will include: - Conceptualizing and building predictive modeling solutions to address business use cases - Applying machine learning and AI algorithms to develop scalable solutions for business deployment - Developing end-to-end business solutions from data extraction to statistical modeling - Conducting model validations and continuous improvement of algorithms - Deploying models using Airflow and Docker on Google Cloud Platforms - Leading pricing, promotion, and marketing mix initiatives from scoping to delivery - Studying large datasets to discover trends and patterns - Presenting insights in a clear and interpretable manner to business teams - Developing visualizations using frameworks like Looker, PyDash, Flask, PlotLy, and streamlit - Collaborating closely with business partners across different geographies To qualify for this position, you should have: - A degree in Computer Science, Information Technology, Business Analytics, Data Science, Economics, or Statistics - 5+ years of experience in building statistical models and deriving insights - Proficiency in Python and SQL for coding and statistical modeling - Hands-on experience with statistical models such as linear regression, random forest, SVM, logistic regression, clustering, and Bayesian regression - Knowledge of GitHub, Airflow, and visualization frameworks - Understanding of Google Cloud and related services like Kubernetes and Cloud Build Preferred qualifications include 
experience with revenue growth management, pricing, marketing mix models, and third-party data. Knowledge of machine learning techniques and Google Cloud products will be advantageous for this role. Colgate-Palmolive is committed to fostering an inclusive environment where diversity is valued, and every individual is treated with respect. As an Equal Opportunity Employer, we encourage applications from candidates with diverse backgrounds and perspectives. If you require accommodation during the application process due to a disability, please complete the request form provided. Join us in building a brighter, healthier future for all.
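The price-elasticity work mentioned in this posting is commonly approximated by regressing log volume on log price, where the slope is the own-price elasticity. A minimal sketch with hypothetical weekly observations and a hand-rolled OLS slope (real work would use statsmodels or scikit-learn, and control for promotions, seasonality, etc.):

```python
import math

def price_elasticity(prices, units):
    """Own-price elasticity = OLS slope of log(units) on log(price)."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(u) for u in units]
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    # Classic least-squares slope: S_xy / S_xx
    s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    s_xx = sum((x - x_bar) ** 2 for x in xs)
    return s_xy / s_xx

# Hypothetical weekly price/volume observations for one SKU
prices = [10.0, 10.5, 11.0, 11.5, 12.0]
units = [1000, 930, 880, 820, 770]
elasticity = price_elasticity(prices, units)
```

For this made-up data the slope comes out around -1.4, which would read as elastic demand: a 1% price increase reduces volume by roughly 1.4%.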
Posted 2 weeks ago
7.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
We are seeking a talented individual to join our Data Science team at Marsh. This role will be based in Mumbai. This is a hybrid role that requires working at least three days a week in the office. Senior Manager - Data Science (AI - CoE). This role at Knowledge Services aims to give colleagues exposure to the key risks faced by businesses and to the use of artificial intelligence to manage, transfer, or mitigate them. The incumbent is expected to be an expert in Generative AI (OpenAI APIs), prompt engineering, Data Science algorithms, business analytics, and data manipulation, and will also have the opportunity to contribute to a cutting-edge analytics platform and products. Expertise in Python is a must. Experience with the insurance or banking industry is a plus. This position is for an individual contributor in AI and Data Science, who will develop and implement leading-edge techniques in artificial intelligence, machine learning, predictive modeling, and natural language processing as applied in commercial insurance and risk management. This position consults with senior colleagues on complex financial and statistical analyses, and develops approaches for new, market-leading AI-based tools.
We will count on you to:
- Understand and refine the math and statistics behind complex AI models
- Build AI and machine learning algorithms based on the business ask
- Keep pace with new trends in the AI world
- Mine and analyze data to drive optimization and implement business strategies
- Deploy APIs on cloud platforms by understanding the technical ask
- Develop custom statistical data models and algorithms from scratch to enhance the current value proposition and support new product development

What you need to have:
- Develops solutions based on Large Language Models, advanced transformers, and advanced deep learning/neural network models
- Expertise in advanced image-based models: object detection, video/image processing, multimodal RAGs, and other image and video models
- Develops and deploys API-based business solutions on AWS platforms
- Coordinates with technology colleagues within and outside the team to support, manage, enhance, and retrain deployed models
- Keeps pace with the fast-evolving world of AI, large language models, and other AI solutions in the market, and applies research-level knowledge to benefit Marsh advisory businesses
- Designs, develops, and contributes to data architecture for complex enterprise-level applications
- Deploys solutions on AWS cloud platforms
- Understands business problems and creates an approach that starts with determining structured and unstructured data needs and availability, builds AI/ML models, and finishes with results that unlock insight for clients and colleagues, deploying these solutions on web frameworks and enterprise-wide platforms
- Demonstrates skill in advanced statistical analysis, data mining, and/or research techniques, combined with broader awareness of the business and ongoing research, while functioning in a collaborative role with the Data Science team and across the wider organization
- Stays current with ongoing research in the field and brings new approaches to the team
- Coaches team members on the delivery of AI-based tools and analyses and the presentation of findings
- Master’s Degree in Engineering, Computer Science, Data Science, or a related field
- 7+ years of experience in Data Science, AI research, or similar fields
- Expert Data Scientist, including expertise in AI and machine learning model building
- Ability to influence others within and outside of the job function regarding approach and procedures
- Superior detail orientation, excellent communication and interpersonal skills
- Expertise in modern programming languages such as Python, including NumPy, Keras, TensorFlow, SQL, and PyTorch
- Familiarity with Hugging Face and its library of models

What makes you stand out?
- Building, training, and deploying advanced models from scratch to production
- Business impact with AI models
- Understanding of insurance and risk management
- Knowledge and demonstrated experience of advanced AI concepts such as training language, voice, image, or computer vision models
- Experience building data visualizations in Plotly/Dash/D3.js
- Hands-on experience with stochastic or econometric modelling

Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being. Marsh, a business of Marsh McLennan (NYSE: MMC), is the world’s top insurance broker and risk advisor. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marsh.com, or follow on LinkedIn and X.
Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person. R_313481
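The RAG expertise this posting asks for rests on a retrieval step: score stored documents against a query and keep the best match. A deliberately tiny sketch with a bag-of-words stand-in for real embeddings (the documents and query are invented; production systems use learned embedding models and vector stores):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use learned dense vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical policy snippets standing in for a document store
docs = [
    "marine cargo policy covers loss of goods in transit",
    "cyber liability policy covers data breach costs",
]
query = embed("what does a cyber policy cover")
best = max(docs, key=lambda d: cosine(query, embed(d)))
```

The retrieved `best` document would then be passed to a language model as context; swapping the toy `embed` for a real embedding model is the essential upgrade.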
Posted 2 weeks ago
2.0 years
0 Lacs
Delhi, India
On-site
About Us: Bain & Company is a global consultancy that helps the world’s most ambitious change makers define the future. Across 65 offices in 40 countries, we work alongside our clients as one team with a shared ambition to achieve extraordinary results, outperform the competition and redefine industries. Since our founding in 1973, we have measured our success by the success of our clients, and we proudly maintain the highest level of client advocacy in the industry. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center (BCC) in New Delhi. The BCC is now known as the BCN (Bain Capability Network), with nodes across various geographies. The BCN is an integral part of, and the largest unit within, Expert Client Delivery (ECD). ECD plays a critical role in adding value to Bain's case teams globally by supporting them with analytics and research solutioning across all industries and specific domains, for corporate cases, client development, private equity diligence, and Bain intellectual property. The BCN comprises Consulting Services, Knowledge Services and Shared Services. Who You Will Work With: The Consumer Products Center of Expertise collaborates with Bain’s global Consumer Products Practice leadership, client-facing Bain leadership and teams, and end clients on the development and delivery of Bain’s proprietary CP products and solutions. These solutions answer strategic questions of Bain’s CP clients relating to brand strategy (consumer needs, assortment, pricing, distribution), revenue growth management (pricing strategy, promotions, profit pools, trade terms), negotiation strategy with key retailers, optimization of COGS, etc. You will work as part of the team in the CP CoE, comprising a mix of Directors, Managers, Project Leads, Associates and Analysts working to implement cloud-based, end-to-end advanced analytics solutions.
Delivery models on projects vary from working as part of the CP Center of Expertise, to a broader global Bain case team within the CP ringfence, to other industry CoEs such as FS / Retail / TMT / Energy / CME with BCN on a need basis. The AS is expected to have a knack for seeking out challenging problems and coming up with their own ideas, which they will be encouraged to brainstorm with their peers and managers. They should be willing to learn new techniques and be open to solving problems with an interdisciplinary approach. They must have excellent coding skills and should demonstrate a willingness to write modular, reusable, and functional code. What You’ll Do: Collaborate with data scientists working with Python, LLMs, NLP, and Generative AI to design, fine-tune, and deploy intelligent agents and chains-based applications. Develop and maintain front-end interfaces for AI and data science applications using React.js / Angular / Next.js and/or Streamlit / Dash, enhancing user interaction with complex machine learning and NLP-driven systems. Build and integrate Python-based machine learning models with backend systems via RESTful APIs using frameworks like FastAPI, Flask, or Django. Translate complex business problems into scalable technical solutions, integrating AI capabilities with robust backend and frontend systems. Assist in the design and implementation of scalable data pipelines and ETL workflows using dbt, PySpark, and SQL, supporting both analytics and generative AI solutions. Leverage containerization tools like Docker and utilize Git for version control, ensuring code modularity, maintainability, and collaborative development. Deploy ML-powered and data-driven applications on cloud platforms such as AWS or Azure, optimizing for performance, scalability, and cost-efficiency. Contribute to internal AI/ML Ops platforms and tools, streamlining model deployment, monitoring, and lifecycle management.
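The ETL workflow described above can be sketched end to end with only the standard library. The CSV payload and the amount filter are invented for illustration; a real pipeline would use dbt, PySpark, or an orchestrator, and a warehouse like Snowflake instead of SQLite:

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV (a file or API response in practice; a string here)
raw = "order_id,region,amount\n1,APAC,120\n2,EMEA,80\n3,APAC,200\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and filter out small orders (hypothetical rule)
cleaned = [(int(r["order_id"]), r["region"], float(r["amount"]))
           for r in rows if float(r["amount"]) >= 100]

# Load: write into a relational store and aggregate with SQL
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INT, region TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)
totals = dict(con.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))
```

The three stages map directly onto the extract/transform/load split: in a dbt setup the transform step would live in versioned SQL models rather than Python.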
Create dashboards, visualizations, and presentations using tools like Tableau / PowerBI, Plotly, and Seaborn to drive business insights. Proficiency with Excel and PowerPoint, demonstrating strong business communication in stakeholder interactions. About You: A Master’s degree or higher in Computer Science, Data Science, Engineering, or related fields; Bachelor's candidates with relevant industry experience will also be considered. Proven experience (2 years for Master’s; 3+ years for Bachelor’s) in AI/ML, software development, and data engineering. Solid understanding of LLMs, NLP, Generative AI, chains, agents, and model fine-tuning methodologies. Proficiency in Python, with experience using libraries such as Pandas, NumPy, Plotly, and Seaborn for data manipulation and visualization. Experience working with modern Python frameworks such as FastAPI for backend API development. Frontend development skills using HTML, CSS, JavaScript/TypeScript, and modern frameworks like React.js; Streamlit knowledge is a plus. Strong grasp of data engineering concepts, including ETL pipelines, batch processing using dbt and PySpark, and working with relational databases like PostgreSQL, Snowflake, etc. Good working knowledge of cloud infrastructure (AWS and/or Azure) and deployment best practices. Familiarity with MLOps/AIOps tools and workflows, including CI/CD pipelines, monitoring, and container orchestration (with Docker and Kubernetes). Good-to-have: experience in BI tools such as Tableau or PowerBI. Good-to-have: prior exposure to consulting projects or the CP (Consumer Products) business domain. What Makes Us a Great Place To Work: We are proud to be consistently recognized as one of the world's best places to work, a champion of diversity and a model of social responsibility. We are currently ranked the #1 consulting firm on Glassdoor’s Best Places to Work list, and we have maintained a spot in the top four on Glassdoor's list for the last 12 years.
We believe that diversity, inclusion and collaboration is key to building extraordinary teams. We hire people with exceptional talents, abilities and potential, then create an environment where you can become the best version of yourself and thrive both professionally and personally. We are publicly recognized by external parties such as Fortune, Vault, Mogul, Working Mother, Glassdoor and the Human Rights Campaign for being a great place to work for diversity and inclusion, women, LGBTQ and parents.
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hiring for - Senior AI Engineer
Location - Pune
Joining time - 30 days
Education: Degree or Master's degree in Computer Science / Electronics / Artificial Intelligence
Experience:
- Minimum 5 years of experience in AI
- Experience performing quantitative research with computer vision, machine learning, deep learning, and the associated implementation of algorithms
- Strong programming skills in Python and familiarity with other languages like R
- Experience with data analysis techniques
- Working experience with Pandas and other AI-related libraries
- Strong understanding of supervised, unsupervised, and deep learning algorithms
- Strong understanding of Azure DevOps and CI/CD pipeline creation for AI projects
- Experience in creating dashboards or webpages using Plotly Dash or similar tools
- Understanding of MLOps practices for deploying and maintaining ML models in production
- Experience in deploying Generative AI models

Roles & Responsibilities: As R&D Performance Senior AI Engineer, you will study and develop artificial intelligence programs and deploy them in the system, using your logical and communication skills not only to suggest ML approaches, techniques, software, or applications, but to develop and deploy them worldwide within our organization. This role needs innovative thinking to develop AI-based automations with the expected accuracy, and calls for out-of-the-box thinking, with the objective of securing company know-how and capitalizing on it to bring value to our business. Problem solving and logical thinking will be key to success in this position.
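As a minimal illustration of the supervised-learning fundamentals listed above, here is a nearest-centroid classifier, one of the simplest supervised models, in plain Python. The data and class labels are invented; real work would use scikit-learn or a deep learning framework:

```python
import math

def fit_centroids(samples, labels):
    """Nearest-centroid training: compute one mean vector per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda y: math.dist(centroids[y], x))

# Hypothetical 2-D feature vectors with two classes
X = [[1.0, 1.2], [0.9, 1.1], [4.0, 4.2], [4.1, 3.9]]
y = ["low", "low", "high", "high"]
centroids = fit_centroids(X, y)
print(predict(centroids, [4.0, 4.0]))
```

Training reduces to averaging, and prediction to a distance comparison, which makes this a useful mental baseline before reaching for heavier supervised models.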
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Company: We’re Hitachi Vantara, the data foundation trusted by the world’s innovators. Our resilient, high-performance data infrastructure means that customers – from banks to theme parks – can focus on achieving the incredible with data. If you’ve seen the Las Vegas Sphere, you’ve seen just one example of how we empower businesses to automate, optimize, innovate – and wow their customers. Right now, we’re laying the foundation for our next wave of growth. We’re looking for people who love being part of a diverse, global team – and who get excited about making a real-world impact with data.

The Role: We’re looking for an expert in advanced NetApp L3-level support, CVO ONTAP, and ITIL processes (Change Management, Problem Management), with experience in implementing automation. The role requires advanced troubleshooting skills and the ability to optimize NAS systems for high-performance operations. Office Location: Hyderabad (Work from Office)

Key Responsibilities:
- Lead SQL development: Write and maintain complex queries, stored procedures, triggers, functions, and manage views for high-performance data operations.
- Database Optimization: Design and implement efficient indexing and partitioning strategies to improve query speed and system performance.
- Advanced Python Programming: Use Python to build ETL pipelines, automate data workflows, and conduct data wrangling and analysis.
- Data Modeling & Architecture: Develop scalable data models and work closely with engineering to ensure alignment with database design best practices.
- Business Insights: Translate large and complex datasets into clear, actionable insights to support strategic business decisions.
- Cross-functional Collaboration: Partner with product, tech, and business stakeholders to gather requirements and deliver analytics solutions.
- Mentorship & Review: Guide junior analysts and ensure adherence to coding standards, data quality, and best practices.
- Visualization & Reporting: Create dashboards and reports using tools like Power BI, Tableau, or Python-based libraries (e.g., Plotly, Matplotlib).
- Agile & Version Control: Operate in Agile environments with proficiency in Git, JIRA, and continuous integration for data workflows.

About Us: We’re a global team of innovators. Together, we harness engineering excellence and passion for insight to co-create meaningful solutions to complex challenges. We turn organizations into data-driven leaders that can make a positive impact on their industries and society. If you believe that innovation can inspire the future, this is the place to fulfil your purpose and achieve your potential. Championing diversity, equity, and inclusion: Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team. How We Look After You: We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We’re also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We’re always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you’ll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We’re proud to say we’re an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran, age, disability status or any other protected characteristic.
Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
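The indexing-strategy responsibility in the posting above is easy to demonstrate with SQLite's EXPLAIN QUERY PLAN: the same query flips from a full table scan to an index search once an index exists. The table and index names here are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INT, ts TEXT, payload TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(i % 50, f"2024-01-{i % 28 + 1:02d}", "x")
                 for i in range(1000)])

# Without an index, this predicate forces a full table scan
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7").fetchall()
print(plan[0][3])  # e.g. a 'SCAN' over the events table

# Adding an index lets SQLite search instead of scan
con.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7").fetchall()
print(plan[0][3])  # now reports a search using idx_events_user
```

The same diagnostic habit (read the plan before and after the index) carries over to PostgreSQL's EXPLAIN and other engines, even though the output format differs.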