
4767 NumPy Jobs - Page 40

JobPe aggregates listings for easy access; you apply directly on the original job portal.

8.0 - 13.0 years

17 - 22 Lacs

Noida

Work from Office

Choosing Capgemini means choosing a company where you will be empowered to shape your career the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

About The Role
Design, develop, and implement conversational AI solutions using the Kore.ai platform, including chatbots, virtual assistants, and automated workflows tailored to business requirements. Customize, configure, and optimize the Kore.ai platform to address specific business needs, focusing on user interface (UI) design, dialogue flows, and task automation. Collaborate with cross-functional teams to understand business objectives and translate them into scalable AI solutions. Continuously monitor and enhance bot performance by staying updated on the latest Kore.ai features, AI advancements, and machine learning trends.

Primary Skills
7+ years of experience in software development, with at least 4+ years of hands-on experience on the Kore.ai platform. Proven expertise in developing chatbots and virtual assistants using Kore.ai tools. Proficiency in programming languages such as JavaScript or other scripting languages. Experience with API integrations, including RESTful APIs and third-party services. Strong understanding of dialogue flow design, intent recognition, and context management.

Secondary Skills
Kore.ai platform certification is a strong advantage. Knowledge of automation tools and RPA technologies is beneficial.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong heritage of over 55 years, Capgemini is trusted by its clients to unlock the value of technology across the entire breadth of their business needs. It delivers end-to-end services and solutions, from strategy and design to engineering, fuelled by its market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

4 - 8 Lacs

Bengaluru

Work from Office

About The Role
Analyze and transform data science prototypes. Develop ML applications per requirements. Run machine learning tests and experiments. Perform analysis and fine-tuning of models using test results. Train and retrain ML models when necessary.

Primary Skills
Experience working on cloud platforms, preferably AWS. Proven experience as a Machine Learning Engineer. Experience with AWS SageMaker. Understanding of data structures, data modeling, and software architecture. Ability to write robust code in Python. Familiarity with machine learning frameworks (such as Keras or PyTorch) and libraries (such as scikit-learn). Excellent communication skills.
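For candidates sizing up this role: a minimal sketch of the train/test/fine-tune loop the listing describes, using scikit-learn on synthetic data. The dataset and parameters below are illustrative, not taken from the posting.

```python
# Minimal train/evaluate loop with scikit-learn (synthetic data; illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split; in production the model would be retrained
# periodically as new data arrives, as the listing notes.
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```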

Posted 3 weeks ago

Apply

5.0 years

17 - 18 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Software Engineer Practitioner – Full Stack Data Engineer
Location: Chennai (Hybrid) – 34305
Type: Full-time
Compensation: Up to ₹18 LPA

About The Role
We are seeking a seasoned Full Stack Data Engineer to join our Enterprise Data Platform team. This role is crucial in designing, building, and optimizing scalable data pipelines on the Google Cloud Platform (GCP), using native tools such as BigQuery, Dataform, Dataflow, and Pub/Sub. You will ensure best practices in data governance, security, and performance while collaborating closely with cross-functional teams. This is a high-impact opportunity to influence Ford's data engineering architecture and contribute to the company's digital transformation.

Key Responsibilities
Design, develop, and maintain robust, scalable data pipelines on GCP using BigQuery, Dataform, Dataflow, and Pub/Sub. Collaborate with data engineering, architecture, and product teams to build data models, solutions, and automation. Ensure data governance, security, auditability, and high performance across all pipelines. Build custom cloud-native solutions leveraging tools like Data Fusion, Airflow, and Terraform. Optimize data transformation workflows using Python (NumPy, Pandas, PySpark, etc.) and SQL. Engage with stakeholders to understand data requirements and translate them into scalable engineering solutions. Participate in Agile/Scrum processes, including writing user stories and contributing to sprint planning. Drive the adoption of best practices in data warehousing, data lake design, and DevOps.

Must-Have Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience in full stack data engineering and database performance optimization. Strong command of GCP native tools: BigQuery, Dataform, Dataflow, Pub/Sub, Data Fusion. Proficient in Python, Java, SQL, and data orchestration tools like Airflow. Experience with Terraform, Tekton, and version control (e.g., Git). Solid understanding of data architecture, ETL/ELT pipelines, and data governance. Deep familiarity with Agile methodology, DevOps, and collaborative product development. Excellent communication skills and stakeholder engagement experience.

Preferred Skills (Nice to Have)
Experience with PostgreSQL, Dataproc, Cloud SQL, and containerization tools. Knowledge of industrial or enterprise data products and real-time data streaming. Experience working in a regulated or large enterprise environment.

Additional Information
Interact with internal data and analytics product lines to identify technical opportunities. Influence design standards and ensure reusability and scalability of developed components. Support process improvements across data delivery, curation, and analytics operations.

Skills: Dataflow, Python, Java, GCP, ETL
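As a rough illustration of the BigQuery-to-pandas step such pipelines typically include: a minimal sketch using the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical placeholders, not from the posting.

```python
# Pull a BigQuery query result into pandas for transformation.
# Assumes default GCP credentials and the `db-dtypes` extra for to_dataframe().
from google.cloud import bigquery
import pandas as pd

client = bigquery.Client(project="my-project")  # hypothetical project id
sql = """
    SELECT vehicle_id, event_ts, payload
    FROM `my-project.telemetry.events`
    WHERE DATE(event_ts) = CURRENT_DATE()
"""
df = client.query(sql).to_dataframe()

# Typical cleanup before loading downstream.
df["event_ts"] = pd.to_datetime(df["event_ts"], utc=True)
df = df.dropna(subset=["payload"]).drop_duplicates(subset=["vehicle_id", "event_ts"])
```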

Posted 3 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Designation: AI/ML Developer
Location: Ahmedabad
Department: Technical

Job Summary
We are looking for an enthusiastic AI/ML Developer with 2 to 3 years of relevant experience in machine learning and artificial intelligence. The candidate should be well-versed in designing and developing intelligent systems and have a solid grasp of data handling and model deployment.

Key Responsibilities
Develop and implement machine learning models tailored to business needs. Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models, VAEs) using platforms like Hugging Face, LangChain, or OpenAI. Conduct data collection, cleaning, and pre-processing for model readiness. Train, test, and optimize models to improve accuracy and performance. Work closely with cross-functional teams to deploy AI models in production environments. Perform data exploration, visualization, and feature selection. Stay up to date with the latest trends in AI/ML and experiment with new approaches. Design and implement Multi-Agent Systems (MAS) for distributed intelligence, autonomous collaboration, or decision-making. Integrate and orchestrate agentic workflows using tools like Agno, CrewAI, or LangGraph. Ensure scalability and efficiency of deployed solutions. Monitor model performance and perform necessary updates or retraining.

Requirements
Strong programming skills in Python and experience with libraries like TensorFlow, PyTorch, Scikit-learn, and Keras. Experience working with vector databases (Pinecone, Weaviate, Chroma) for RAG systems. Good understanding of machine learning concepts, including classification, regression, clustering, and deep learning. Knowledge of knowledge graphs, semantic search, or symbolic reasoning. Proficiency in working with tools such as Pandas, NumPy, and data visualization libraries. Hands-on experience deploying models using REST APIs with frameworks like Flask or FastAPI. Familiarity with cloud platforms (AWS, Google Cloud, or Azure) for ML deployment. Knowledge of version control systems like Git. Experience with Natural Language Processing (NLP), computer vision, or predictive analytics. Exposure to MLOps tools and workflows (e.g., MLflow, Kubeflow, Airflow). Basic familiarity with big data frameworks like Apache Spark or Hadoop. Understanding of data pipelines and ETL processes.

What We Offer
Opportunity to work on live projects and client interactions. A vibrant and learning-driven work culture. 5-day work week and flexible work timings.
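The "deploying models using REST APIs" requirement above maps to a pattern like the following: a minimal FastAPI sketch serving a pickled model. The artifact path, schema, and endpoint are placeholders, not from the listing.

```python
# Serve a trained model behind a REST endpoint with FastAPI (illustrative names).
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical artifact path

class Features(BaseModel):
    values: list[float]  # requires Python 3.9+

@app.post("/predict")
def predict(features: Features):
    # scikit-learn style predict: 2-D input, take the first (only) prediction
    pred = model.predict([features.values])[0]
    return {"prediction": float(pred)}

# Run with: uvicorn main:app --reload
```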

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc. Experience training, building, and deploying ML and DL models. Experience with Hugging Face, Chainlit, and React. Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end. Ability to adapt quickly to open-source products and tools and integrate them with ML platforms. Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.). Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI. Experience with LLMs like PaLM, GPT-4, and Mistral (open-source models). Work through the complete lifecycle of GenAI model development, from training and testing to deployment and performance monitoring. Developing and maintaining AI pipelines with multiple modalities such as text, image, and audio. Have implemented real-world chatbots or conversational agents at scale handling different data sources. Experience developing image generation/translation tools using latent diffusion models such as Stable Diffusion or InstructPix2Pix. Expertise in handling large-scale structured and unstructured data. Efficiently handled large-scale generative AI datasets and outputs. Familiarity with Docker tools and pipenv/conda/poetry environments. Comfortable following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.). Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.). High familiarity with DL theory and practice in NLP applications. Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and Pandas. Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, or farm-haystack. Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.). Have implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER, or QA) from data preparation and model creation through inference and deployment. Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI. Good working knowledge of other open-source packages to benchmark and derive summaries. Experience using GPU/CPU on cloud and on-prem infrastructure. Skill set to leverage cloud platforms for Data Engineering, Big Data, and ML needs. Use of Docker (experience with experimental Docker features, docker-compose, etc.). Familiarity with orchestration tools such as Airflow and Kubeflow. Experience with CI/CD and infrastructure-as-code tools like Terraform. Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc. Ability to develop APIs with compliant, ethical, secure, and safe AI tools. Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc. A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus.

Responsibilities
Design NLP/LLM/GenAI applications and products following robust coding practices. Explore state-of-the-art models and techniques so they can be applied to automotive industry use cases. Conduct ML experiments to train and infer models; if need be, build models that abide by memory and latency restrictions. Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools. Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.). Converge multiple bots into super apps using LLMs with multiple modalities. Develop agentic workflows using AutoGen, Agent Builder, or LangGraph. Build modular AI/ML products that can be consumed at scale.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcome.
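For the transformer/NLP comfort level this listing asks for, a one-call Hugging Face sketch: it downloads a public sentiment checkpoint and runs inference. The model name is a common public checkpoint chosen for illustration.

```python
# Quick transformer inference with Hugging Face Transformers.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # public checkpoint
)
print(classifier("The sea trials exceeded expectations."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```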

Posted 3 weeks ago

Apply

3.0 - 6.0 years

4 Lacs

India

On-site

About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences, empowering retailers, partners, and employees to accelerate commerce in a sustainable manner.

Job Summary
We are seeking a highly skilled and motivated AI/ML Engineer specializing in Computer Vision and Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices.

Key Responsibilities

AI/ML Development & Computer Vision
Design, train, and evaluate models for face detection and recognition; object/person detection and tracking; intrusion and anomaly detection; and human activity or pose recognition/estimation. Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster R-CNN, and InsightFace. Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines.

Surveillance System Integration
Integrate computer vision models with live CCTV/RTSP streams for real-time analytics (see the sketch after this listing). Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination. Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU).

Model Optimization & Deployment
Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference. Build and deploy APIs using FastAPI, Flask, or TorchServe. Package applications using Docker and orchestrate deployments with Kubernetes. Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins). Monitor model performance in production using Prometheus, Grafana, and log management tools. Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC. As an AI/ML Engineer, you should also be well versed in AI agent development and have fine-tuning experience.

Collaboration & Documentation
Work closely with backend developers, hardware engineers, and DevOps teams. Maintain clear documentation of ML pipelines, training results, and deployment practices. Stay current with emerging research and innovations in AI vision and MLOps.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3–6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning. Hands-on experience with deep learning frameworks (PyTorch, TensorFlow), image/video processing (OpenCV, NumPy), and detection and tracking frameworks (YOLOv8, DeepSORT, RetinaNet). Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks). Proven experience with real-time model deployment in cloud or edge environments. Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools.

Preferred Qualifications
Experience with multi-camera synchronization and NVR/DVR systems. Familiarity with ONVIF protocols and camera SDKs. Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU. Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib). Understanding of security protocols and compliance in surveillance systems.

Tools & Technologies
Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn
Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
Model Optimization: ONNX, TensorRT, OpenVINO, pruning, quantization
Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
Monitoring: Prometheus, Grafana, ELK Stack, Sentry
Annotation Tools: LabelImg, CVAT, Supervisely

Benefits
Competitive compensation and performance-linked incentives. Work on cutting-edge surveillance and AI projects. Friendly and innovative work culture.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Morning, evening, night, rotational, and US shifts; Monday to Friday; weekend availability
Supplemental Pay: Performance bonus, quarterly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
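The RTSP integration mentioned above typically starts with a frame-read loop like this minimal OpenCV sketch; the stream URL is a hypothetical placeholder, and production code would add reconnection and detection logic.

```python
# Read frames from an RTSP camera stream with OpenCV (hypothetical URL).
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream1")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped; real code would reconnect with backoff
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... run detection/tracking (e.g., a YOLO model) on `frame` here ...
cap.release()
```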

Posted 3 weeks ago

Apply

0 years

4 - 7 Lacs

Surat

On-site

Job Description

Primary Role
Writing efficient, reusable, testable, and scalable code. Developing backend components to enhance performance and responsiveness, server-side logic and platform, and statistical learning models. Integrating user-facing elements into applications. Improving the functionality of existing systems. Working with Python libraries like Pandas, NumPy, etc. Creating models for AI- and ML-based features. Coordinating with internal teams to understand user requirements and provide technical solutions.

Job Overview (7454)
Experience: 30 months
City: Surat
Qualification: M.Sc, MCA, PGDCA
Area of Expertise: Python
Preferred Gender: Male
Function: AI & ML, Audio/Video
Profile: NA

Posted 3 weeks ago

Apply

4.0 years

8 - 9 Lacs

Noida

On-site

Location: Noida / Gurgaon (Onsite – 5 days a week)
Experience: 4 to 7 years
Employment Type: Full-Time
Background Verification (BGV): Mandatory post-selection

About the Role
We are looking for highly skilled Python developers who are passionate about building scalable and reliable backend solutions. You will work in a dynamic, collaborative environment and contribute to robust system architecture, core development, and feature optimization using modern Python frameworks, libraries, and tools.

Key Responsibilities
Design, develop, test, and maintain backend components using Python and modern frameworks. Apply strong knowledge of OOP concepts, data structures, and algorithms to write clean, efficient, and scalable code. Work with Python libraries such as NumPy, Pandas, SciPy, and Scikit-learn to build data-driven solutions. Collaborate with DevOps teams for seamless deployment using Docker and CI/CD pipelines. Work with MySQL databases to design schemas, write queries, and ensure data integrity. Use Git for version control and code management. Collaborate with cross-functional teams, participate in code reviews, and contribute to agile development.

Job Type: Full-time
Pay: ₹70,000.00 - ₹80,000.00 per month
Location Type: In-person
Schedule: Day shift
Experience: Python: 5 years (Required); SQL: 3 years (Required); Git: 1 year (Required)
Location: Noida, Uttar Pradesh (Required)
Work Location: In person
Speak with the employer: +91 8851582342

Posted 3 weeks ago

Apply

0 years

1 - 6 Lacs

Noida

On-site

As a Python Developer, you'll contribute to the core analytics engine that powers portfolio scoring and optimization. If you're strong in Python and love working with numbers, dataframes, and algorithms, this is the role for you.

Key Responsibilities
- Build and maintain internal Python libraries for scoring, data ingestion, normalization, and transformation.
- Collaborate on the core computation engine, dealing with Return, Safety, and Income scoring algorithms.
- Process broker portfolio data and validate datasets using clean, modular Python code.
- Contribute to writing tests, CI scripts, and engine documentation.
- Participate in internal design discussions and implement enhancements to existing libraries and engines.
- Work with financial datasets such as FactSet and Bloomberg to build quant models for the US market.

Must-Have Skills
- Proficiency in Python 3.x, with attention to clean, testable code.
- Hands-on experience with NumPy and pandas for numerical/data processing.
- Understanding of basic ETL concepts and working with structured datasets.
- Familiarity with Git, unit testing, logging, and code organization principles.
- Strong problem-solving ability and willingness to learn new technologies fast.

Good to Have (Bonus Points)
- Experience with SQL and writing efficient queries for analytics use cases.
- Exposure to ClickHouse or other columnar databases optimized for fast OLAP workloads.
- Familiarity with data validation tools like pydantic or type systems like mypy.
- Knowledge of Python packaging tools (setuptools, pyproject.toml).
- Experience with Apache Arrow, Polars, FastAPI, or SQLAlchemy.
- Exposure to async processing (Celery, asyncio), Docker, or Kubernetes.

What You'll Gain
- Work on the core decision-making engine that directly impacts investor outcomes.
- Be part of a small, high-quality engineering team focused on clean, impactful systems.
- Mentorship and learning in data engineering, financial analytics, and production-quality backend systems.
- Growth path into specialized tracks: backend, data engineering, or system architecture.

About the Stack
While your main focus will be Python, you'll also interact with services that consume or publish to APIs, data stores like ClickHouse and PostgreSQL, and real-time processing queues.

Job Type: Full-time
Pay: ₹10,000.00 - ₹50,000.00 per month
Location Type: In-person
Work Location: In person
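To give a flavor of the NumPy/pandas scoring work this listing describes, a minimal sketch of min-max scoring over a toy portfolio. The column names, weights, and data are invented for illustration; the firm's actual Return/Safety/Income algorithms are proprietary and not shown here.

```python
# Normalize raw portfolio metrics into 0-100 scores (hypothetical columns/weights).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "return_1y": [0.12, 0.05, -0.02],
    "volatility": [0.18, 0.25, 0.30],
})

def minmax_score(s: pd.Series, invert: bool = False) -> pd.Series:
    scaled = (s - s.min()) / (s.max() - s.min())
    if invert:
        scaled = 1 - scaled  # lower raw value = better score
    return 100 * scaled

df["return_score"] = minmax_score(df["return_1y"])
df["safety_score"] = minmax_score(df["volatility"], invert=True)  # lower vol = safer
df["composite"] = np.average(
    df[["return_score", "safety_score"]], axis=1, weights=[0.6, 0.4]
)
print(df)
```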

Posted 3 weeks ago

Apply

1.0 - 3.0 years

3 - 10 Lacs

Calcutta

Remote

Job Title: Data Scientist / MLOps Engineer (Python, PostgreSQL, MSSQL)
Location: Kolkata (Must)
Employment Type: Full-Time
Experience Level: 1–3 Years

About Us
We are seeking a highly motivated and technically strong Data Scientist / MLOps Engineer to join our growing AI & ML team. This role involves the design, development, and deployment of scalable machine learning solutions, with a strong focus on operational excellence, data engineering, and GenAI integration.

Key Responsibilities
Build and maintain scalable machine learning pipelines using Python. Deploy and monitor models using MLflow and MLOps stacks. Design and implement data workflows using libraries such as PySpark. Leverage standard data science libraries (scikit-learn, pandas, NumPy, Matplotlib, etc.) for model development and evaluation. Work with GenAI technologies, including Azure OpenAI and other open-source models, for innovative ML applications. Collaborate closely with cross-functional teams to meet business objectives. Handle multiple ML projects simultaneously with robust branching expertise.

Must-Have Qualifications
Expertise in Python for data science and backend development. Solid experience with PostgreSQL and MSSQL databases. Hands-on experience with standard data science packages such as scikit-learn, pandas, NumPy, and Matplotlib. Experience working with Databricks, MLflow, and Azure. Strong understanding of MLOps frameworks and deployment automation. Prior exposure to FastAPI and GenAI tools like LangChain or Azure OpenAI is a big plus.

Preferred Qualifications
Experience in the finance, legal, or regulatory domain. Working knowledge of clustering algorithms and forecasting techniques. Previous experience developing reusable AI frameworks or productized ML solutions.

Education
B.Tech in Computer Science, Data Science, Mechanical Engineering, or a related field.

Why Join Us?
Work on cutting-edge ML and GenAI projects. Be part of a collaborative and forward-thinking team. Opportunity for rapid growth and technical leadership.

Job Type: Full-time
Pay: ₹344,590.33 - ₹1,050,111.38 per year
Benefits: Leave encashment, paid sick time, paid time off, Provident Fund, work from home
Education: Bachelor's (Required)
Experience: Python: 3 years (Required); ML: 2 years (Required)
Location: Kolkata, West Bengal (Required)
Work Location: In person
Application Deadline: 02/08/2025
Expected Start Date: 04/08/2025
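Since the listing names MLflow explicitly, here is a minimal tracking sketch: log a run's parameters, a metric, and the model artifact. The run name and model are illustrative only.

```python
# Log a model run to MLflow for tracking and later deployment (illustrative).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
with mlflow.start_run(run_name="baseline"):
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # stored under the run's artifacts
```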

Posted 3 weeks ago

Apply

1.0 years

2 - 6 Lacs

India

On-site

Key Responsibilities
Develop Python code: write efficient, maintainable code for AI/ML algorithms and systems. Model development: design, train, and deploy machine learning models (supervised, unsupervised, deep learning). Data processing: clean, preprocess, and analyze large datasets. Collaboration: work with cross-functional teams to integrate AI/ML models into products. Optimization: fine-tune models for performance, accuracy, and scalability.

Required Skills
Strong Python programming skills, with experience in libraries like NumPy, pandas, scikit-learn, TensorFlow, or PyTorch. Hands-on experience in AI/ML model development and deployment. Knowledge of data preprocessing, feature engineering, and model evaluation. Familiarity with cloud platforms (AWS, Google Cloud, Azure) and version control (Git). Degree in Computer Science, Data Science, or a related field.

Job Type: Full-time
Pay: ₹20,000.00 - ₹50,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Ability to commute/relocate: Pardesipura, Indore, Madhya Pradesh: reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred)
Work Location: In person

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 15 Lacs

Chennai

Work from Office

Role & Responsibilities
We are seeking an experienced Django Python API Developer who combines hands-on technical expertise with strong leadership and mentoring capabilities. You will lead and develop a team of engineers, take ownership of end-to-end delivery, and ensure high-quality, scalable solutions in compliance with industry and government standards.

Key Responsibilities
- Lead API development with Python, Django, and Django REST Framework.
- Architect scalable backend services; design and enforce API standards (OpenAPI/Swagger).
- Implement and oversee deployment pipelines (Jenkins, GitHub Actions) and container orchestration (Docker, Kubernetes).
- Provision and manage infrastructure using Terraform, CloudFormation, or ARM templates.
- Mentor the backend team: code reviews, pair programming, and technical workshops.
- Collaborate with frontend leads, QA, and operations to ensure end-to-end delivery.

Required Technical Skills
- Python 3.7+, Django, Django REST Framework.
- PostgreSQL, MySQL, or MongoDB performance tuning.
- CI/CD (Jenkins, GitHub Actions, Azure DevOps); Docker, Kubernetes; Terraform or CloudFormation.
- Security and compliance (OWASP, GIGW guidelines).

Preferred Experience
- Experience leading e-Governance or public-sector initiatives.
- Working knowledge of message brokers (RabbitMQ, Kafka) and serverless architectures.
- Cloud deployments and observability solutions.

Soft Skills & Attributes
- Leadership & Mentorship: demonstrated ability to lead and develop backend teams.
- Hands-On Approach: deep involvement in coding, architecture, and deployment.
- Ownership & Accountability: ensures reliable, high-quality API services.
- Communication: articulates technical vision and aligns stakeholders.
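For orientation, a minimal Django REST Framework endpoint of the kind this role leads: a model serializer plus a viewset. The `Ticket` model and app name are hypothetical placeholders, not from the posting.

```python
# A minimal Django REST Framework serializer + viewset (illustrative names).
from rest_framework import serializers, viewsets
from myapp.models import Ticket  # hypothetical Django model

class TicketSerializer(serializers.ModelSerializer):
    class Meta:
        model = Ticket
        fields = ["id", "title", "status", "created_at"]

class TicketViewSet(viewsets.ModelViewSet):
    # Full CRUD for /tickets/, newest first
    queryset = Ticket.objects.all().order_by("-created_at")
    serializer_class = TicketSerializer

# urls.py would wire this up with:
# router.register(r"tickets", TicketViewSet)
```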

Posted 3 weeks ago

Apply

1.5 - 2.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

At least 1.5–2 years of Dataiku experience. Should know how to create and handle partitioned datasets in Dataiku. Strong Python skills, with data handling using pandas and NumPy (both are mandatory and must be known in depth) and basics of regex. Should be able to work with GCP BigQuery and use Terraform as the base for managing code changes.
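As a tiny example of the pandas-plus-regex data handling this listing expects, a cleanup sketch over an invented column:

```python
# pandas string cleanup with a regex (illustrative column and pattern).
import pandas as pd

df = pd.DataFrame({"sku": ["AB-001 ", "ab_002", "  AB 003"]})
df["sku"] = (df["sku"].str.strip()
                      .str.upper()
                      .str.replace(r"[^A-Z0-9]+", "-", regex=True))
print(df["sku"].tolist())  # ['AB-001', 'AB-002', 'AB-003']
```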

Posted 3 weeks ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Ahmedabad

Work from Office

We are seeking a skilled Python Developer to join our team. The ideal candidate will be responsible for working with existing APIs or developing new APIs based on our requirements. You should have a strong foundation in Python and experience with RESTful services and cloud infrastructure.

Requirements
Strong understanding of Python. Experience with RESTful services and cloud infrastructure. Ability to develop microservices/functions. Familiarity with libraries such as Pandas, NumPy, Matplotlib & Seaborn, Scikit-learn, Flask, Django, Requests, FastAPI, and TensorFlow & PyTorch. Basic understanding of SQL and databases. Ability to write clean, maintainable code. Experience deploying applications at scale in production environments. Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium. Knowledge of equities, futures, or options microstructure is a plus. Experience with data visualization and dashboard building is a plus.

Why Join Us?
Opportunity to work on high-impact real-world projects. Exposure to cutting-edge technologies and financial datasets. A collaborative, supportive, and learning-focused team culture. 5-day work week (Monday to Friday).
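The web-scraping requirement above typically looks like this minimal requests + BeautifulSoup sketch; the URL and selector are placeholders, and real scrapers should respect robots.txt and rate limits.

```python
# Basic scrape with requests + BeautifulSoup (hypothetical URL).
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/quotes", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
titles = [h2.get_text(strip=True) for h2 in soup.select("h2")]
print(titles)
```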

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 15 Lacs

Pune

Work from Office

Key Responsibilities
Build, train, and validate machine learning models for prediction, classification, and clustering to support Next Best Action (NBA) use cases. Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers. Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring. Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search. Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases. Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies. Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience
Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn. Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.). Proficiency in prompt engineering and hands-on experience building RAG pipelines using LangChain, Haystack, or custom frameworks. Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate). Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling. Sound understanding of supervised learning, recommendation systems, and classification algorithms. Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
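The embedding-plus-similarity-search pipeline this listing centers on can be sketched in a few lines with sentence-transformers and FAISS; the checkpoint is a common public one and the corpus is invented for illustration.

```python
# Embed texts and run similarity search with sentence-transformers + FAISS.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # public SBERT-style checkpoint
corpus = ["reset your password", "update billing details", "cancel subscription"]
emb = model.encode(corpus, normalize_embeddings=True)

# With normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

query = model.encode(["how do I change my card?"], normalize_embeddings=True)
scores, ids = index.search(query, 2)
print([corpus[i] for i in ids[0]])  # two nearest corpus entries
```

In a full RAG workflow, the retrieved entries would then be injected into an LLM prompt to ground the generated recommendation.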

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI/ML Engineer – Core Algorithm and Model Expert

1. Role Objective
The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.

2. Key Responsibilities
2.1. Design and train models using Naval-specific data and deliver them in the form of end products.
2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential.
2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected, to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech / M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch professional certification.

4. Core Skills & Tools
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.

5. Core AI/ML Competencies
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, diffusion models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.

6. Programming & Frameworks
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.

7. Model Development & Optimization
7.1. Hyperparameter tuning using Optuna, Ray Tune, Weights & Biases, etc.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.

8. Generative AI & NLP Applications
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.

9. Experience Requirements
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contribution to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.

10. Product Development Skills
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.

11. Cross-Compatibility Requirements
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI- and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
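The parameter-efficient fine-tuning (LoRA/PEFT) the listing calls for amounts to attaching small trainable adapters to a frozen base model. A minimal sketch with the Hugging Face PEFT library follows; GPT-2 stands in for the larger LLaMA/Qwen/Mistral checkpoints, and the ranks are placeholder values.

```python
# Attach LoRA adapters to a causal LM for parameter-efficient fine-tuning.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a larger LLM
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's attention projection; varies by model
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```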

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Company Description
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Job Description
Write complex algorithms to get optimal solutions for real-time problems. Perform qualitative analysis and data mining to extract data, discover hidden patterns, and develop predictive models based on findings. Develop processes to extract, transform, and load data. Use distributed computing to validate and process large volumes of data to deliver insights. Evaluate technologies we can leverage, including open-source frameworks, libraries, and tools. Interface with product and other engineering teams on a regular cadence.

Qualifications
3+ years of applicable data engineering experience, including Python and RESTful APIs. In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch. Strong fundamentals in data mining and data processing methodologies. Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability. Sound understanding of Big Data and RDBMS technologies, such as SQL, Hive, Spark, Databricks, Snowflake, or PostgreSQL. Orchestration and messaging frameworks: Airflow. Good experience working with the Azure cloud platform. Good experience working with containerization frameworks; Docker is a plus. Experience in agile software development practices and DevOps is a plus. Knowledge of and experience with Kubernetes is a plus. Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business. Minimum B.E. degree in Computer Science, Computer Engineering, or a related field.

Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.

Our Benefits
Flexible working environment. Volunteer time off. LinkedIn Learning. Employee Assistance Program (EAP).

Want to keep up with our latest updates? Follow us on LinkedIn, Instagram, Twitter, and Facebook.

Our Commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
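Since Airflow is the orchestration framework named in the qualifications, a minimal DAG skeleton for an extract-transform-load flow is sketched below. Task names and the schedule are placeholders; the `schedule` argument assumes Airflow 2.4+.

```python
# A minimal Airflow DAG skeleton for an ETL flow (illustrative; Airflow 2.4+).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...   # placeholder callables
def transform(): ...
def load(): ...

with DAG(dag_id="etl_daily", start_date=datetime(2025, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency: extract, then transform, then load
```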

Posted 3 weeks ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

Hyderabad, Chennai

Hybrid

Salary: 30 to 45 LPA
Experience: 6 to 10 years
Location: Hyderabad (Hybrid)
Notice period: immediate to 30 days

Roles & Responsibilities
5+ years of experience in Python, ML, and banking model development. Interact with the client to understand their requirements and communicate/brainstorm solutions. Model development: design, build, and implement credit risk models. Contribute to how the analytical approach is structured for the specification of analysis. Contribute insights from the conclusions of analysis that integrate with the initial hypothesis and business objective. Independently address complex problems. 5+ years of experience in ML/Python (predictive modelling). Design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business. Create experiments and prototype implementations of new learning algorithms and prediction techniques. Collaborate with product managers and stakeholders to design and implement software solutions for science problems. Use machine learning best practices to ensure a high standard of quality for all team deliverables. Experience working on unstructured data (text): text cleaning, TF-IDF, text vectorization. Hands-on experience with IFRS 9 models and regulations. Data analysis: analyze large datasets to identify trends and risk factors, ensuring data quality and integrity. Statistical analysis: utilize advanced statistical methods to build robust models, leveraging expertise in R programming. Collaboration: work closely with data scientists, business analysts, and other stakeholders to align models with business needs. Continuous improvement: stay updated with the latest methodologies and tools in credit risk modeling and R programming.
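The text cleaning and TF-IDF vectorization named above looks like this in scikit-learn; the documents and labels are a toy example, not real credit data.

```python
# Vectorize raw text with TF-IDF before feeding a classifier (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["payment overdue 90 days", "account in good standing", "missed installment"]
labels = [1, 0, 1]  # 1 = higher risk (invented labels for illustration)

vec = TfidfVectorizer(lowercase=True, ngram_range=(1, 2))
X = vec.fit_transform(docs)           # sparse TF-IDF matrix
clf = LogisticRegression().fit(X, labels)

print(clf.predict(vec.transform(["payment missed again"])))
```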

Posted 3 weeks ago

Apply

6.0 - 11.0 years

10 - 17 Lacs

Hyderabad

Hybrid

Summary
Seeking a highly skilled and experienced Senior Python Engineer who will be responsible for designing, developing, and maintaining high-quality, scalable, and reliable software solutions.

Experience
- 8+ years of professional experience in Python software development.
- Proven experience in designing and developing scalable and reliable applications.
- Experience with Agile development methodologies.

Technical Skills
- Strong proficiency in Python and related frameworks (e.g., Django, Flask, FastAPI).
- Solid understanding of object-oriented programming principles and design patterns.
- Experience with relational databases (e.g., Microsoft SQL Server, MySQL).
- Experience with cloud platforms (e.g., Azure, GCP).
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Experience with testing frameworks (e.g., pytest, unittest).
- Experience with CI/CD pipelines (e.g., Jenkins, GitLab).
- Familiarity with Linux/Unix environments.

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

India

Remote

Location: Remote (India)
Type: Full-Time
Experience: 0–1 year
Industry: Artificial Intelligence / Data Science / Tech

About Us
We're a fast-moving, AI-native startup building scalable intelligent systems that harness the power of data to solve real-world problems. From LLMs to predictive modeling, we thrive on transforming raw information into actionable intelligence using the latest in machine learning and data automation.

Role Overview
We're hiring a Junior Data Analyst to join our remote team. You'll collaborate with data scientists, ML engineers, and product teams to clean, analyze, and structure datasets that power next-gen AI products. If you love data, patterns, and productivity hacks using AI tools, this is your chance to break into the AI industry.

Key Responsibilities
Clean, preprocess, and organize large volumes of structured and unstructured data. Conduct exploratory data analysis (EDA) to uncover trends, patterns, and insights. Support feature engineering and contribute to AI/ML model preparation. Develop dashboards, reports, and visualizations using Power BI, Tableau, Seaborn, etc. Use tools like Python, SQL, Excel, and AI assistants to streamline repetitive tasks. Collaborate cross-functionally to support data-driven decision-making. Maintain documentation for data workflows and ensure data integrity.

Tech Stack & Tools You'll Work With
Languages: Python (Pandas, NumPy), SQL, R (optional)
Data Tools: Jupyter, Excel/Google Sheets, BigQuery, Snowflake (optional)
Visualization: Power BI, Tableau, matplotlib, Seaborn, Plotly
Productivity + AI Tools: Gemini CLI, Claude Code, Cursor, ChatGPT Code Interpreter, etc.
Project Tools: GitHub, Notion, Slack, Jira

You're a Great Fit If You Have
A strong analytical mindset and curiosity for patterns in data. A solid foundation in data cleaning, transformation, and EDA. A basic understanding of databases and querying using SQL. Familiarity with at least one scripting language (preferably Python). Interest in AI and how data powers intelligent systems. Bonus: experience with AI programming assistants or interest in using them.

Requirements
Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field. 0–1 year of professional experience in a data-focused role. Strong communication skills and the ability to work independently in a remote setup. Based in India with reliable internet access and a good work-from-home environment.

What You'll Get
Work at the intersection of data analytics and AI. Remote-first flexibility and an asynchronous work culture. Mentorship from experienced data scientists and AI engineers. Exposure to real-world AI projects and best practices. Access to premium AI productivity tools and training resources. Opportunity to grow into senior analytics or data science roles.

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

This role is for one of AccioJob's hiring partners: Global Consulting and Services.

CTC: ₹3 – ₹6 LPA
Job Title: Python Automation Engineer
Location: Pune (Onsite)
Job Type: Full-Time

Eligibility Criteria
Degrees: B.Tech / B.E, M.Tech / M.E, BCA, MCA, B.Sc, M.Sc
Branches: All
Graduation Year: 2023, 2024, 2025

The Role
As a Python Automation Engineer, you will work on automating workflows, data handling, and analytical processes using Python and related libraries. This role involves writing efficient scripts, working with large datasets, and streamlining operations in collaboration with cross-functional teams.

Key Responsibilities
Automate tasks and data pipelines using Python. Work with Excel files and perform data manipulation using Pandas and NumPy. Write and optimize SQL queries for data retrieval and reporting. Collaborate with teams to understand automation requirements and deliver scalable solutions. Ensure code quality and maintain documentation for automated processes.

What You'll Bring
Proficiency in Python, especially for scripting and automation. Strong understanding of Pandas and NumPy for data handling. Hands-on experience with Excel and data formatting tasks. Basic to intermediate knowledge of SQL. Strong logical thinking and problem-solving abilities. Ability to work independently and meet deadlines.

Evaluation Process
Round 1: Offline assessment at the AccioJob Pune Centre.
Further rounds (for shortlisted candidates): profile and background screening, Technical Interview 1, Technical Interview 2, and a tech + managerial round.
Note: Candidates are required to bring their laptop and earphones for the assessment.

Skills Required: Excel, Python, Pandas, NumPy, SQL
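The Excel-plus-pandas automation described above often reduces to a read/aggregate/write pattern like this sketch; the file names, sheet, and columns are hypothetical.

```python
# Automate an Excel-to-report step with pandas (hypothetical file and columns).
# Reading/writing .xlsx requires the openpyxl package.
import pandas as pd

df = pd.read_excel("sales.xlsx", sheet_name="raw")
summary = (df.groupby("region", as_index=False)["amount"]
             .sum()
             .sort_values("amount", ascending=False))
summary.to_excel("regional_summary.xlsx", index=False)
```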

Posted 3 weeks ago

Apply

3.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Professional Skills: Business Analysis, Analytical Thinking, Problem Solving, Decision Making, Leadership, Managerial, Time Management, Domain Knowledge. Work simplification: methods that maximize output while minimizing expenditure and cost.

Required Candidate Profile
Analytics with data: interprets data and turns it into information that can offer ways to improve a business. Communication: good verbal communication and interpersonal skills are essential.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & Responsibilities
We are looking for a highly skilled Python Developer with hands-on experience in Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs). The ideal candidate will have strong programming skills and a deep understanding of modern ML techniques and transformer-based models like GPT, BERT, and LLaMA.

Key Responsibilities
Develop, fine-tune, and deploy ML- and LLM-based solutions for various business use cases. Work on LLM pipelines including prompt engineering, retrieval-augmented generation (RAG), and vector search (FAISS, Pinecone, etc.). Build, evaluate, and maintain NLP models for text classification, summarization, Q&A, semantic search, etc. Design scalable and efficient APIs and backend systems in Python for AI-powered applications. Collaborate with data scientists, ML engineers, and product managers to deliver AI features end-to-end. Implement MLOps best practices for model training, evaluation, deployment, and monitoring. Stay updated with the latest in GenAI, LLM fine-tuning, prompt engineering, and open-source models.

Required Skills
Strong programming in Python (NumPy, Pandas, FastAPI/Flask). Proficiency in ML frameworks: Scikit-learn, TensorFlow, PyTorch. Hands-on experience with transformers/LLMs using Hugging Face Transformers, LangChain, or the OpenAI API. Knowledge of LLM fine-tuning, embeddings, tokenization, and attention mechanisms. Familiarity with vector databases like FAISS, Pinecone, and Weaviate. Experience with cloud platforms (AWS, GCP, Azure) and container tools (Docker/Kubernetes). Good understanding of data preprocessing, model evaluation, and performance tuning.

Preferred/Bonus Skills
Experience with LangChain, LlamaIndex, or building RAG pipelines. Exposure to open-source LLMs like Mistral, LLaMA 2/3, and Falcon. Understanding of RLHF (Reinforcement Learning from Human Feedback). Experience integrating LLMs into chatbots, virtual assistants, or enterprise automation. Knowledge of MLOps tools: MLflow, Weights & Biases, SageMaker, etc.

Interested candidates can share their resume with sarvani.j@ifinglobalgroup.com

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Science Trainer
Company: LMES Academy Private Limited
Location: Pallavaram, Chennai (online)
Experience: 6+ years in Data Science with Python
Employment Type: Part-Time (2 days/week + doubt-clearing sessions)

About Us
LMES Academy is a leading educational platform dedicated to empowering students with industry-relevant skills through innovative and practical learning experiences. We're on a mission to bridge the gap between academic knowledge and real-world applications.

Job Description
We are seeking an experienced Data Science Trainer with deep expertise in Python and applied data science techniques. The ideal candidate will have a passion for teaching and mentoring, with the ability to simplify complex concepts for learners.

Roles & Responsibilities
Deliver interactive training sessions on Data Science and Python on any two weekdays (Monday to Friday). Conduct doubt-clarification sessions twice a week, ensuring students grasp concepts effectively. Develop training content, real-world case studies, and project-based learning materials. Guide students in understanding core concepts such as data wrangling and preprocessing, exploratory data analysis, statistical modeling and machine learning, and Python libraries (Pandas, NumPy, Scikit-learn, Matplotlib, etc.). Evaluate student progress and provide constructive feedback. Stay updated with the latest trends in Data Science and AI.

Requirements
Minimum 6 years of hands-on experience in Data Science and Python. Strong knowledge of machine learning algorithms, data visualization, and model evaluation techniques. Prior experience in teaching/training (preferred but not mandatory). Excellent communication and presentation skills. Passion for mentoring and student success.

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary
We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates.

Experience: 1 year

Responsibilities

1. AI-Based Scheduling Algorithm Development
Develop and refine scheduling models using constraint programming, Mixed Integer Programming (MIP), metaheuristic algorithms (e.g., genetic algorithms, ant colony optimization, simulated annealing), and reinforcement learning or deep Q-learning. Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models. Create simulation environments to test scheduling models under different scenarios.

2. Data Exploration & Feature Engineering
Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources. Build pipelines for data preprocessing, normalization, and handling missing values. Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.

3. Model Validation & Deployment
Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes. Deploy solutions using APIs or dashboards (Streamlit, Dash), or via integration with existing production systems. Support ongoing maintenance, updates, and performance tuning of deployed models.

4. Collaboration & Stakeholder Engagement
Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results. Document solution approaches and model assumptions, and provide technical training to stakeholders.

Qualifications
Bachelor's or Master's degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent. Minimum 1 year of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, and optimization techniques or industrial scheduling. Proficiency in Python, especially with pandas, NumPy, and scikit-learn; ortools, pulp, cvxpy, or other optimization libraries; and matplotlib or plotly for visualization. Solid understanding of production planning and control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering). Familiarity with version control (Git), Jupyter/VS Code environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills
Experience with time-series analysis, sensor data, or anomaly detection; manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data; and simulation tools (SimPy, Arena, FlexSim) or digital twin technologies. Exposure to containerization (Docker) and model deployment (FastAPI, Flask). Understanding of lean manufacturing principles, the Theory of Constraints, or Six Sigma.

Soft Skills
A strong problem-solving mindset with the ability to balance technical depth and business context. Excellent communication and storytelling skills to convey insights to both technical and non-technical stakeholders. Eagerness to learn new tools, technologies, and domain knowledge.
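Since the listing names pulp among the expected optimization libraries, a toy MIP sketch follows: assigning each job to exactly one machine while minimizing total processing time. The jobs, machines, and times are made up; a real job-shop model would add sequencing and due-date constraints.

```python
# Toy job assignment as a MIP with PuLP (illustrative data).
import pulp

jobs, machines = ["J1", "J2", "J3"], ["M1", "M2"]
time = {("J1", "M1"): 4, ("J1", "M2"): 6, ("J2", "M1"): 5,
        ("J2", "M2"): 3, ("J3", "M1"): 7, ("J3", "M2"): 5}

prob = pulp.LpProblem("assign", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(time.keys()), cat="Binary")

prob += pulp.lpSum(time[k] * x[k] for k in time)        # minimize total time
for j in jobs:                                          # each job on one machine
    prob += pulp.lpSum(x[(j, m)] for m in machines) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))                # bundled CBC solver
print({k: x[k].value() for k in time if x[k].value() == 1})
```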

Posted 3 weeks ago

Apply