
4556 Numpy Jobs - Page 32


5.0 - 7.0 years

10 - 15 Lacs

Chennai

Work from Office

Role & responsibilities
We are seeking an experienced Django Python API Developer who combines hands-on technical expertise with strong leadership and mentoring capabilities. You will lead and develop a team of engineers, take ownership of end-to-end delivery, and ensure high-quality, scalable solutions in compliance with industry and government standards.

Key Responsibilities
- Lead API development with Python, Django, and Django REST Framework.
- Architect scalable backend services; design and enforce API standards (OpenAPI/Swagger).
- Implement and oversee deployment pipelines (Jenkins, GitHub Actions) and container orchestration (Docker, Kubernetes).
- Provision and manage infrastructure using Terraform, CloudFormation, or ARM templates.
- Mentor the backend team: code reviews, pair programming, and technical workshops.
- Collaborate with frontend leads, QA, and operations to ensure end-to-end delivery.

Required Technical Skills
- Python 3.7+, Django, Django REST Framework.
- PostgreSQL, MySQL, or MongoDB performance tuning.
- CI/CD (Jenkins, GitHub Actions, Azure DevOps); Docker, Kubernetes; Terraform or CloudFormation.
- Security and compliance (OWASP, GIGW guidelines).

Preferred candidate profile
- Experience leading e-Governance or public-sector initiatives.
- Working knowledge of message brokers (RabbitMQ, Kafka) and serverless architectures.
- Cloud deployments and observability solutions.

Soft Skills & Attributes
- Leadership & Mentorship: demonstrated ability to lead and develop backend teams.
- Hands-On Approach: deep involvement in coding, architecture, and deployment.
- Ownership & Accountability: ensures reliable, high-quality API services.
- Communication: articulates technical vision and aligns stakeholders.
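As a rough illustration of the serializer pattern that Django REST Framework builds on (declarative field rules plus an `is_valid()` step), here is a plain-Python sketch. It does not use DRF itself, and the field names (`title`, `amount`) are invented for the example.

```python
# Plain-Python sketch of the DRF serializer pattern: declare fields,
# then validate/coerce incoming data with is_valid().
class Field:
    def __init__(self, required=True, cast=str):
        self.required = required
        self.cast = cast

class Serializer:
    fields = {}

    def __init__(self, data):
        self.data = data
        self.errors = {}
        self.validated_data = {}

    def is_valid(self):
        for name, field in self.fields.items():
            if name not in self.data:
                if field.required:
                    self.errors[name] = "This field is required."
                continue
            try:
                self.validated_data[name] = field.cast(self.data[name])
            except (TypeError, ValueError):
                self.errors[name] = "Invalid value."
        return not self.errors

class InvoiceSerializer(Serializer):
    # Hypothetical resource; a real DRF serializer would subclass
    # rest_framework.serializers.Serializer instead.
    fields = {"title": Field(cast=str), "amount": Field(cast=float)}

s = InvoiceSerializer({"title": "Q1 audit", "amount": "1200.50"})
print(s.is_valid(), s.validated_data)
```

In DRF proper, the same flow is `serializer = InvoiceSerializer(data=request.data)` followed by `serializer.is_valid()` and `serializer.validated_data`.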

Posted 2 weeks ago

Apply

1.5 - 2.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

At least 1.5–2 years of Dataiku experience, including creating and handling partitioned datasets in Dataiku. Strong Python skills, with in-depth, hands-on knowledge of pandas and NumPy (both are mandatory) and the basics of regex. Should be able to work with GCP BigQuery and use Terraform as the basis for managing code changes.
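A small sketch of the pandas/NumPy/regex combination the post asks for: normalise a messy text column with a regex, then aggregate with NumPy. The column names and values are made up for illustration.

```python
import re

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "sku": ["AB-101 ", "ab_102", " AB103"],  # inconsistent formatting
    "qty": [3, 5, 2],
})

# Strip whitespace, drop separators, upper-case: "AB-101 " -> "AB101"
df["sku"] = df["sku"].str.strip().apply(lambda s: re.sub(r"[-_]", "", s).upper())

# Aggregate with NumPy
total = int(np.sum(df["qty"].to_numpy()))
print(df["sku"].tolist(), total)
```

In Dataiku, code like this would typically live in a Python recipe reading from and writing to managed (possibly partitioned) datasets.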

Posted 2 weeks ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Ahmedabad

Work from Office

We are seeking a skilled Python Developer to join our team. The ideal candidate will be responsible for working with existing APIs or developing new APIs based on our requirements. You should have a strong foundation in Python and experience with RESTful services and cloud infrastructure.

Requirements:
- Strong understanding of Python
- Experience with RESTful services and cloud infrastructure
- Ability to develop microservices/functions
- Familiarity with libraries such as Pandas, NumPy, Matplotlib & Seaborn, Scikit-learn, Flask, Django, Requests, FastAPI, and TensorFlow & PyTorch
- Basic understanding of SQL and databases
- Ability to write clean, maintainable code
- Experience deploying applications at scale in production environments
- Experience with web scraping using tools like BeautifulSoup, Scrapy, or Selenium
- Knowledge of equities, futures, or options microstructures is a plus
- Experience with data visualization and dashboard building is a plus

Why Join Us?
- Opportunity to work on high-impact real-world projects
- Exposure to cutting-edge technologies and financial datasets
- A collaborative, supportive, and learning-focused team culture
- 5-day work week (Monday to Friday)
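Since the post lists web scraping among the requirements, here is a minimal link-extraction sketch using only the standard library's `HTMLParser`; real projects would more likely reach for BeautifulSoup or Scrapy. The HTML snippet is invented.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = '<ul><li><a href="/jobs/1">Dev</a></li><li><a href="/jobs/2">Quant</a></li></ul>'
p = LinkExtractor()
p.feed(html)
print(p.links)
```

The same extraction in BeautifulSoup would be `[a["href"] for a in soup.find_all("a", href=True)]`.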

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 15 Lacs

Pune

Work from Office

Key Responsibilities:
- Build, train, and validate machine learning models for prediction, classification, and clustering to support Next Best Action (NBA) use cases.
- Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers.
- Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring.
- Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search.
- Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases.
- Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies.
- Present findings and recommendations clearly to technical and non-technical stakeholders.

Required Skills & Experience:
- Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn.
- Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.).
- Proficiency in prompt engineering and hands-on experience in building RAG pipelines using LangChain, Haystack, or custom frameworks.
- Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate).
- Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling.
- Sound understanding of supervised learning, recommendation systems, and classification algorithms.
- Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
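The similarity-search step of an embedding pipeline can be sketched as cosine similarity between a query vector and a small in-memory "index". The 3-dimensional vectors below stand in for real Word2Vec/BERT embeddings (which have hundreds of dimensions); the phrases are invented.

```python
import numpy as np

# Toy embedding index: phrase -> vector. A production system would use a
# vector database (FAISS, pgvector, Pinecone, ...) instead of a dict.
index = {
    "upgrade plan":   np.array([0.9, 0.1, 0.0]),
    "cancel service": np.array([0.0, 0.2, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])

def cosine(a, b):
    """Cosine similarity: dot product of the normalised vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nearest neighbour under cosine similarity
best = max(index, key=lambda k: cosine(query, index[k]))
print(best)
```

In a RAG workflow, the retrieved neighbours would then be passed as context to the generator model.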

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI/ML Engineer – Core Algorithm and Model Expert

1. Role Objective:
The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analysis, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.

2. Key Responsibilities:
2.1. Design and train models using Naval-specific data and deliver them in the form of end products.
2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. Develop, train, fine-tune, and optimize Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential.
2.7. Demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected, to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch Professional Certification.

4. Core Skills & Tools:
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.

5. Core AI/ML Competencies:
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, diffusion models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.

6. Programming & Frameworks:
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.

7. Model Development & Optimization:
7.1. Hyperparameter tuning using Optuna, Ray Tune, Weights & Biases, etc.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.

8. Generative AI & NLP Applications:
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.

9. Experience Requirements
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contribution to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.

10. Product Development Skills
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.

11. Cross-Compatibility Requirements:
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI- and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
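One of the compression techniques the listing names, pruning, can be sketched conceptually as magnitude pruning: zero out the fraction of weights with the smallest absolute values. This NumPy toy is for intuition only; production pruning would go through the framework (e.g., PyTorch's pruning utilities), and the weight matrix below is invented.

```python
import numpy as np

def prune_by_magnitude(w, sparsity):
    """Zero the fraction `sparsity` of entries with the smallest |value|."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    # Threshold = k-th smallest absolute value across the whole tensor
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([[0.05, -0.9],
              [0.4, -0.01]])
print(prune_by_magnitude(w, 0.5))  # half the entries become 0
```

After pruning, the sparse weights are typically fine-tuned again to recover accuracy, and the sparsity is exploited by specialised kernels or formats at inference time.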

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Company Description
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com.

Job Description
- Write complex algorithms to get an optimal solution for real-time problems
- Perform qualitative analysis and data mining to extract data, discover hidden patterns, and develop predictive models based on findings
- Develop processes to extract, transform, and load data
- Use distributed computing to validate and process large volumes of data to deliver insights
- Evaluate technologies we can leverage, including open-source frameworks, libraries, and tools
- Interface with product and other engineering teams on a regular cadence

Qualifications
- 3+ years of applicable data engineering experience, including Python & RESTful APIs
- In-depth understanding of the Python software development stack, ecosystem, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch
- Strong fundamentals in data mining & data processing methodologies
- Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability
- Sound understanding of Big Data & RDBMS technologies, such as SQL, Hive, Spark, Databricks, Snowflake, or PostgreSQL
- Orchestration and messaging frameworks: Airflow
- Good experience working with the Azure cloud platform
- Good experience working with containerization frameworks; Docker is a plus
- Experience in agile software development practices and DevOps is a plus
- Knowledge of and experience with Kubernetes is a plus
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business
- Minimum B.E. degree in Computer Science, Computer Engineering, or a related field

Additional Information
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
- Recharge and revitalize with the help of wellness plans made for you and your family
- Plan your future with financial wellness tools
- Stay relevant and upskill yourself with career development opportunities

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee-Assistance-Program (EAP)

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 weeks ago

Apply

5.0 - 10.0 years

30 - 45 Lacs

Hyderabad, Chennai

Hybrid

Salary: 30 to 45 LPA
Experience: 6 to 10 years
Location: Hyderabad (Hybrid)
Notice: Immediate to 30 days

Roles & responsibilities:
- 5+ years of experience in Python, ML, and banking model development
- Interact with the client to understand their requirements and communicate/brainstorm solutions
- Model development: design, build, and implement credit risk models
- Contribute to how the analytical approach is structured for the specification of analysis
- Contribute insights from conclusions of analysis that integrate with the initial hypothesis and business objective
- Independently address complex problems
- 5+ years of experience in ML/Python (predictive modelling): design, implement, test, deploy, and maintain innovative data and machine learning solutions to accelerate our business
- Create experiments and prototype implementations of new learning algorithms and prediction techniques
- Collaborate with product managers and stakeholders to design and implement software solutions for science problems
- Use machine learning best practices to ensure a high standard of quality for all team deliverables
- Experience working with unstructured data (text): text cleaning, TF-IDF, text vectorization
- Hands-on experience with IFRS 9 models and regulations
- Data analysis: analyze large datasets to identify trends and risk factors, ensuring data quality and integrity
- Statistical analysis: utilize advanced statistical methods to build robust models, leveraging expertise in R programming
- Collaboration: work closely with data scientists, business analysts, and other stakeholders to align models with business needs
- Continuous improvement: stay updated with the latest methodologies and tools in credit risk modeling and R programming
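The TF-IDF step mentioned for unstructured text can be sketched by hand: term frequency within a document, weighted by how rare the term is across the corpus. Real pipelines would use scikit-learn's `TfidfVectorizer`; the two-document corpus below is invented, and the smoothing choice is one of several common variants.

```python
import math
from collections import Counter

docs = [
    "loan default risk".split(),
    "loan approved".split(),
]

def tfidf(term, doc, docs):
    """TF-IDF with a simple +1 smoothed IDF (one common variant)."""
    tf = Counter(doc)[term] / len(doc)          # term frequency in this doc
    df = sum(term in d for d in docs)           # documents containing the term
    idf = math.log(len(docs) / df) + 1          # rarer terms weigh more
    return tf * idf

# "default" appears in only one document, so it outweighs the common "loan"
print(round(tfidf("default", docs[0], docs), 3))
print(round(tfidf("loan", docs[0], docs), 3))
```

The resulting per-term weights form the document vectors used downstream for classification or similarity.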

Posted 2 weeks ago

Apply

6.0 - 11.0 years

10 - 17 Lacs

Hyderabad

Hybrid

Summary: Seeking a highly skilled and experienced Senior Python Engineer who will be responsible for designing, developing, and maintaining high-quality, scalable, and reliable software solutions.

Experience:
• 8+ years of professional experience in Python software development.
• Proven experience in designing and developing scalable and reliable applications.
• Experience with Agile development methodologies.

Technical Skills:
• Strong proficiency in Python and related frameworks (e.g., Django, Flask, FastAPI).
• Solid understanding of object-oriented programming principles and design patterns.
• Experience with relational databases (e.g., Microsoft SQL, MySQL).
• Experience with cloud platforms (e.g., Azure, GCP).
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Experience with testing frameworks (e.g., pytest, unittest).
• Experience with CI/CD pipelines (e.g., Jenkins, GitLab).
• Familiarity with Linux/Unix environments.

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

India

Remote

Location: Remote (India)
Type: Full-Time
Experience: 0–1 year
Industry: Artificial Intelligence / Data Science / Tech

About Us:
We’re a fast-moving, AI-native startup building scalable intelligent systems that harness the power of data to solve real-world problems. From LLMs to predictive modeling, we thrive on transforming raw information into actionable intelligence using the latest in machine learning and data automation.

Role Overview:
We’re hiring a Junior Data Analyst to join our remote team. You'll collaborate with data scientists, ML engineers, and product teams to clean, analyze, and structure datasets that power next-gen AI products. If you love data, patterns, and productivity hacks using AI tools, this is your chance to break into the AI industry.

Key Responsibilities:
- Clean, preprocess, and organize large volumes of structured and unstructured data
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and insights
- Support feature engineering and contribute to AI/ML model preparation
- Develop dashboards, reports, and visualizations using Power BI, Tableau, Seaborn, etc.
- Use tools like Python, SQL, Excel, and AI assistants to streamline repetitive tasks
- Collaborate cross-functionally to support data-driven decision-making
- Maintain documentation for data workflows and ensure data integrity

Tech Stack & Tools You'll Work With:
- Languages: Python (Pandas, NumPy), SQL, R (optional)
- Data Tools: Jupyter, Excel/Google Sheets, BigQuery, Snowflake (optional)
- Visualization: Power BI, Tableau, Matplotlib, Seaborn, Plotly
- Productivity + AI Tools: Gemini CLI, Claude Code, Cursor, ChatGPT Code Interpreter, etc.
- Project Tools: GitHub, Notion, Slack, Jira

You’re a Great Fit If You Have:
- A strong analytical mindset and curiosity for patterns in data
- A solid foundation in data cleaning, transformation, and EDA
- A basic understanding of databases and querying using SQL
- Familiarity with at least one scripting language (preferably Python)
- Interest in AI and how data powers intelligent systems
- Bonus: experience with AI programming assistants or interest in using them

Requirements:
- Bachelor’s degree in Data Science, Statistics, Computer Science, Mathematics, or a related field
- 0–1 year of professional experience in a data-focused role
- Strong communication skills and the ability to work independently in a remote setup
- Based in India with reliable internet access and a good work-from-home environment

What You’ll Get:
- Work at the intersection of data analytics and AI
- Remote-first flexibility and an asynchronous work culture
- Mentorship from experienced data scientists and AI engineers
- Exposure to real-world AI projects and best practices
- Access to premium AI productivity tools and training resources
- Opportunity to grow into senior analytics or data science roles

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

This role is for one of AccioJob’s hiring partners: Global Consulting and Services

CTC: ₹3 – ₹6 LPA
Job Title: Python Automation Engineer
Location: Pune (Onsite)
Job Type: Full-Time

Eligibility Criteria:
- Degrees: B.Tech / B.E, M.Tech / M.E, BCA, MCA, B.Sc, M.Sc
- Branches: All
- Graduation Year: 2023, 2024, 2025

The Role:
As a Python Automation Engineer, you will work on automating workflows, data handling, and analytical processes using Python and related libraries. This role involves writing efficient scripts, working with large datasets, and streamlining operations in collaboration with cross-functional teams.

Key Responsibilities:
- Automate tasks and data pipelines using Python
- Work with Excel files and perform data manipulation using Pandas and NumPy
- Write and optimize SQL queries for data retrieval and reporting
- Collaborate with teams to understand automation requirements and deliver scalable solutions
- Ensure code quality and maintain documentation for automated processes

What You’ll Bring:
- Proficiency in Python, especially for scripting and automation
- Strong understanding of Pandas and NumPy for data handling
- Hands-on experience with Excel and data formatting tasks
- Basic to intermediate knowledge of SQL
- Strong logical thinking and problem-solving abilities
- Ability to work independently and meet deadlines

Evaluation Process:
- Round 1: Offline Assessment at AccioJob Pune Centre
- Further Rounds (for shortlisted candidates): Profile & Background Screening Round, Technical Interview 1, Technical Interview 2, Tech + Managerial Round

Note: Candidates are required to bring their laptop and earphones for the assessment.

Skills Required: Excel, Python, Pandas, NumPy, SQL
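The automation loop this role describes (load tabular data, run a SQL query, produce a small report) can be sketched with only the standard library; real tasks would use Pandas/NumPy and Excel files. The CSV data here is invented.

```python
import csv
import io
import sqlite3

# Toy CSV input standing in for an exported Excel sheet
raw = "region,amount\nNorth,120\nSouth,80\nNorth,50\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Load into an in-memory SQL database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [(r["region"], int(r["amount"])) for r in rows])

# SQL aggregation for the report
report = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(report)
```

With Pandas the same pipeline would be roughly `pd.read_excel(...)` followed by `df.groupby("region")["amount"].sum()`.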

Posted 2 weeks ago

Apply

3.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Professional Skills: Business Analysis, Analytical Thinking, Problem Solving, Decision Making, Leadership, Managerial, Time Management, Domain Knowledge.
Work simplification: methods that maximize output while minimizing expenditure and cost.

Required Candidate profile
Analytics with data: interprets data and turns it into information that can offer ways to improve a business.
Communication: good verbal communication and interpersonal skills are essential.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & responsibilities
We are looking for a highly skilled Python Developer with hands-on experience in Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs). The ideal candidate will have strong programming skills and a deep understanding of modern ML techniques and transformer-based models like GPT, BERT, LLaMA, or similar.

Key Responsibilities:
- Develop, fine-tune, and deploy ML and LLM-based solutions for various business use cases.
- Work on LLM pipelines including prompt engineering, retrieval-augmented generation (RAG), and vector search (FAISS, Pinecone, etc.).
- Build, evaluate, and maintain NLP models for text classification, summarization, Q&A, semantic search, etc.
- Design scalable and efficient APIs and backend systems in Python for AI-powered applications.
- Collaborate with Data Scientists, ML Engineers, and Product Managers to deliver AI features end-to-end.
- Implement MLOps best practices for model training, evaluation, deployment, and monitoring.
- Stay updated with the latest in GenAI, LLM fine-tuning, prompt engineering, and open-source models.

Required Skills:
- Strong programming in Python (NumPy, Pandas, FastAPI/Flask).
- Proficient in ML frameworks: Scikit-learn, TensorFlow, PyTorch.
- Hands-on experience with transformers/LLMs using Hugging Face Transformers, LangChain, or the OpenAI API.
- Knowledge of LLM fine-tuning, embeddings, tokenization, and attention mechanisms.
- Familiarity with vector databases like FAISS, Pinecone, Weaviate.
- Experience with cloud platforms (AWS, GCP, Azure) and container tools (Docker/Kubernetes).
- Good understanding of data preprocessing, model evaluation, and performance tuning.

Preferred/Bonus Skills:
- Experience with LangChain, LlamaIndex, or building RAG pipelines.
- Exposure to open-source LLMs like Mistral, LLaMA 2/3, Falcon, etc.
- Understanding of RLHF (Reinforcement Learning from Human Feedback).
- Experience integrating LLMs into chatbots, virtual assistants, or enterprise automation.
- Knowledge of MLOps tools: MLflow, Weights & Biases, SageMaker, etc.

Interested candidates can share their resume at sarvani.j@ifinglobalgroup.com
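The RAG flow named above can be sketched end to end: retrieve the top-scoring chunks, then splice them into a prompt for the LLM. Retrieval is faked here with keyword-overlap scores; a real pipeline would use embeddings plus a vector store such as FAISS or Pinecone, and the knowledge-base chunks are invented.

```python
chunks = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
]

def retrieve(question, chunks, k=1):
    """Toy retriever: rank chunks by word overlap with the question."""
    score = lambda c: len(set(question.lower().split()) & set(c.lower().split()))
    return sorted(chunks, key=score, reverse=True)[:k]

def build_prompt(question, context):
    """Splice retrieved context into the generation prompt."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "How fast are refunds processed?"
ctx = "\n".join(retrieve(question, chunks))
print(build_prompt(question, ctx))
```

In a LangChain-style pipeline, `retrieve` corresponds to the retriever component and `build_prompt` to the prompt template; the final string is what gets sent to the model.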

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Data Science Trainer
Company: LMES Academy Private Limited
Location: Pallavaram, Chennai (online)
Experience: 6+ years in Data Science with Python
Employment Type: Part-Time (2 days/week + doubt-clearing sessions)

About Us
LMES Academy is a leading educational platform dedicated to empowering students with industry-relevant skills through innovative and practical learning experiences. We're on a mission to bridge the gap between academic knowledge and real-world applications.

Job Description
We are seeking an experienced Data Science Trainer with deep expertise in Python and applied data science techniques. The ideal candidate will have a passion for teaching and mentoring, with the ability to simplify complex concepts for learners.

Roles & Responsibilities
- Deliver interactive training sessions on Data Science and Python on any 2 weekdays (Monday to Friday)
- Conduct doubt-clarification sessions twice a week, ensuring students grasp concepts effectively
- Develop training content, real-world case studies, and project-based learning materials
- Guide students in understanding core concepts such as: data wrangling and preprocessing; exploratory data analysis; statistical modeling and machine learning; Python libraries (Pandas, NumPy, Scikit-learn, Matplotlib, etc.)
- Evaluate student progress and provide constructive feedback
- Stay updated with the latest trends in Data Science & AI

Requirements
- Minimum 6 years of hands-on experience in Data Science and Python
- Strong knowledge of machine learning algorithms, data visualization, and model evaluation techniques
- Prior experience in teaching/training (preferred but not mandatory)
- Excellent communication and presentation skills
- Passion for mentoring and student success

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary: We are seeking a proactive and detail-oriented Data Scientist to join our team and contribute to the development of intelligent AI-driven production scheduling solutions. This role is ideal for candidates passionate about applying machine learning, optimization techniques, and operational data analysis to enhance decision-making and drive efficiency in manufacturing or process industries. You will play a key role in designing, developing, and deploying smart scheduling algorithms integrated with real-world constraints like machine availability, workforce planning, shift cycles, material flow, and due dates.

Experience: 1 year

Responsibilities:

1. AI-Based Scheduling Algorithm Development
- Develop and refine scheduling models using: constraint programming; mixed integer programming (MIP); metaheuristic algorithms (e.g., genetic algorithms, ant colony optimization, simulated annealing); reinforcement learning or deep Q-learning.
- Translate shop-floor constraints (machines, manpower, sequence dependencies, changeovers) into mathematical models.
- Create simulation environments to test scheduling models under different scenarios.

2. Data Exploration & Feature Engineering
- Analyze structured and semi-structured production data from MES, SCADA, ERP, and other sources.
- Build pipelines for data preprocessing, normalization, and handling missing values.
- Perform feature engineering to capture important relationships like setup times, cycle duration, and bottlenecks.

3. Model Validation & Deployment
- Use statistical metrics and domain KPIs (e.g., throughput, utilization, makespan, WIP) to validate scheduling outcomes.
- Deploy solutions using APIs, dashboards (Streamlit, Dash), or via integration with existing production systems.
- Support ongoing maintenance, updates, and performance tuning of deployed models.

4. Collaboration & Stakeholder Engagement
- Work closely with production managers, planners, and domain experts to understand real-world constraints and validate model results.
- Document solution approaches and model assumptions, and provide technical training to stakeholders.

Qualifications:
- Bachelor’s or Master’s degree in Data Science, Computer Science, Industrial Engineering, Operations Research, Applied Mathematics, or equivalent.
- Minimum 1 year of experience in data science roles with exposure to AI/ML pipelines, predictive modelling, and optimization techniques or industrial scheduling.
- Proficiency in Python, especially with pandas, numpy, scikit-learn; ortools, pulp, cvxpy, or other optimization libraries; matplotlib and plotly for visualization.
- Solid understanding of production planning & control processes (dispatching rules, job-shop scheduling, etc.) and machine learning fundamentals (regression, classification, clustering).
- Familiarity with version control (Git), Jupyter/VS Code environments, and CI/CD principles.

Preferred (Nice-to-Have) Skills:
- Experience with time-series analysis, sensor data, or anomaly detection.
- Experience with manufacturing execution systems (MES), SCADA, PLC logs, or OPC UA data.
- Experience with simulation tools (SimPy, Arena, FlexSim) or digital twin technologies.
- Exposure to containerization (Docker) and model deployment (FastAPI, Flask).
- Understanding of lean manufacturing principles, Theory of Constraints, or Six Sigma.

Soft Skills:
- Strong problem-solving mindset with the ability to balance technical depth and business context.
- Excellent communication and storytelling skills to convey insights to both technical and non-technical stakeholders.
- Eagerness to learn new tools, technologies, and domain knowledge.
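As a toy version of the dispatching rules the post mentions, here is Shortest Processing Time (SPT) on a single machine, a classic rule that minimises mean flow time. Real schedulers for this role would use CP/MIP solvers (e.g., ortools); the job names and durations are invented.

```python
jobs = {"weld": 4, "paint": 2, "inspect": 1}  # job -> processing time

def spt_schedule(jobs):
    """Sequence jobs by Shortest Processing Time; return order and finish times."""
    order = sorted(jobs, key=jobs.get)  # shortest jobs first
    t, completion = 0, {}
    for j in order:
        t += jobs[j]          # machine processes jobs back to back
        completion[j] = t     # completion time of job j
    return order, completion

order, done = spt_schedule(jobs)
print(order, done)
```

Swapping the sort key gives other rules (e.g., earliest due date), which is why dispatching rules are a common baseline before moving to CP/MIP or reinforcement-learning schedulers.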

Posted 2 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

Remote

Job Description: SDE-II – Python Developer

Job Title: SDE-II – Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time

Job Summary
We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience in building scalable backend applications and APIs using modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust and high-performance solutions.

Key Responsibilities
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Utilize data-handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive and analytical approach.

Required Qualifications
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command of at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfortable working in Linux environments.

Must-Have Skills
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to machine learning libraries (e.g., Scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, AsyncIO).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Company Overview
We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you’ll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.

Benefits and Perks
• Competitive Salary: Earn up to ₹6–10 LPA based on skills and experience.
• Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance.
• Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects.
• Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.

Posted 2 weeks ago


0 years

0 Lacs

Tamil Nadu, India

On-site

We are looking for a seasoned Senior MLOps Engineer to join our Data Science team. The ideal candidate will have a strong background in Python development, machine learning operations, and cloud technologies. You will be responsible for operationalizing ML/DL models and managing the end-to-end machine learning lifecycle, from model development to deployment and monitoring, while ensuring high-quality and scalable solutions.

Mandatory Skills:

Python Programming:
- Expert in OOP concepts and testing frameworks (e.g., PyTest)
- Strong experience with ML/DL libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Prophet, NumPy, Pandas)

MLOps & DevOps:
- Proven experience in executing data science projects with MLOps implementation
- CI/CD pipeline design and implementation
- Docker (mandatory)
- Experience with ML lifecycle tracking tools such as MLflow, Weights & Biases (W&B), or cloud-based ML monitoring tools
- Experience with version control (Git) and infrastructure-as-code (Terraform or CloudFormation)
- Familiarity with code linting, test coverage, and quality tools such as SonarQube

Cloud & Orchestration:
- Hands-on experience with AWS SageMaker or GCP Vertex AI
- Proficiency with orchestration tools like Apache Airflow or Astronomer
- Strong understanding of cloud technologies (AWS or GCP)

Software Engineering:
- Experience in building backend APIs using Flask, FastAPI, or Django
- Familiarity with distributed systems for model training and inference
- Experience working with Feature Stores
- Deep understanding of the ML/DL lifecycle, from ideation and experimentation through deployment to model sunsetting
- Understanding of software development best practices, including automated testing and CI/CD integration

Agile Practices:
- Proficient in working within a Scrum/Agile environment using tools like JIRA

Cross-Functional Collaboration:
- Ability to collaborate effectively with product managers, domain experts, and business stakeholders to align ML initiatives with business goals

Preferred Skills:
- Experience building ML solutions for any one of: Sales Forecasting, Marketing Mix Modelling, Demand Forecasting
- Certification in machine learning or cloud platforms (e.g., AWS or GCP)
- Strong communication and documentation skills

Posted 2 weeks ago


15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Scope of the Role:
The Principal Data Scientist will lead the architecture, development, and deployment of cutting-edge AI solutions, driving innovation in machine learning, deep learning, and generative AI. The role demands cutting-edge expertise in advanced Gen AI development, Agentic AI development, optimization, and integration with enterprise-scale applications, while fostering an experimental and forward-thinking environment. This senior-level hands-on role offers an immediate opportunity to lead next-gen AI innovation, drive strategic AI initiatives, and shape the future of AI adoption at scale in large enterprise and industry applications.

Reports To: Chief AI Officer
Reportees: Individual Contributor Role

Minimum Qualification: Bachelor's or Master's/PhD in Artificial Intelligence, Machine Learning, Data Science, Computer Science, or a related field. Advanced certifications in AI/ML frameworks, GenAI, Agentic AI, cloud AI platforms, or AI ethics preferred.

Experience:
- At least 15 years of experience in AI/ML, with a proven track record in deploying AI-driven solutions, and deep expertise in Generative AI (use of proprietary and open-source LLMs/SLMs/VLMs/LCMs; experience in RAG, fine-tuning, and developing derived domain-specific models; multi-agent frameworks and agentic architecture; LLMOps).
- Expertise in implementing data solutions using Python programming, pandas, NumPy, scikit-learn, PyTorch, TensorFlow, data visualization, machine learning algorithms, deep learning architectures, LLMs, SLMs, VLMs, LCMs, generative AI, agents, prompt engineering, NLP, Transformer architectures, GPTs, computer vision, and MLOps for both unstructured and structured data, as well as synthetic data generation.
- Experience in building generic and customized Conversational AI assistants.
- Experience working in innovation-driven, research-intensive, or AI R&D-focused organizations.
- Experience in building AI solutions for the Construction, Manufacturing, or Oil & Gas industries is good to have.

Objective / Purpose:
The Principal Data Scientist will drive breakthrough AI innovation for pan-L&T businesses by developing scalable, responsible, and production-ready Gen AI and Agentic AI models using RAG with multiple LLMs and derived domain-specific models where applicable. This role involves cutting-edge research, model deployment, AI infrastructure optimization, and AI strategy formulation to enhance business capabilities and user experiences.

Key Responsibilities:
- AI Model Development & Deployment: Design, train, and deploy ML/DL models for predictive analytics, NLP, computer vision, generative AI, and AI agents.
- Applied AI Research & Innovation: Explore emerging AI technologies such as LLMs, RAG (Retrieval-Augmented Generation), fine-tuning techniques, multi-modal AI, agentic architectures, reinforcement learning, and self-supervised learning, with an application orientation.
- Model Optimization & Scalability: Optimize AI models for inference efficiency, explainability, and responsible-AI compliance.
- AI Product Integration: Work collaboratively with business teams, data engineers, MLOps teams, and software developers to integrate AI models into applications, APIs, and cloud platforms.
- AI Governance & Ethics: Ensure compliance with AI fairness, bias mitigation, and regulatory frameworks (GDPR, CCPA, AI Act).
- Cross-functional Collaboration: Partner with business teams, UX researchers, and domain experts to align AI solutions with real-world applications.
- AI Infrastructure & Automation: Develop automated pipelines, agentic model monitoring, and CI/CD for AI solutions.

Technical Expertise:
- Machine Learning & Deep Learning: TensorFlow, PyTorch, Scikit-learn; regression, classification, clustering; ensembling techniques (bagging, boosting); recommender systems; probability distributions and data visualizations.
- Generative AI & LLMs: OpenAI GPT, Google Gemini, Llama, Hugging Face Transformers, RAG, CAG, KAG; knowledge of other LLMs, SLMs, VLMs, LCMs; LangChain.
- NLP & Speech AI: BERT, T5, Whisper.
- Computer Vision: YOLO, OpenCV, CLIP, Convolutional Neural Networks (CNNs).
- MLOps & AI Infrastructure: MLflow, Kubeflow, Azure ML.
- Data Platforms: Databricks, Pinecone, FAISS, Elasticsearch, semantic search, Milvus, Weaviate.
- Cloud AI Services: Azure OpenAI & Cognitive Services.
- Explainability & Responsible AI: SHAP, LIME, FairML.
- Prompt Engineering.
- Publish internal/external research papers, contribute to AI patents, and present at industry conferences and workshops.
- Evaluate open-source AI/ML frameworks and commercial AI products to enhance the organization's AI capabilities.

Behavioural Attributes:
- Business Acumen: ability to align AI solutions with business goals.
- Market Foresight: identifying AI trends and emerging technologies.
- Change Management: driving AI adoption in dynamic environments.
- Customer Centricity: designing AI solutions with user impact in mind.
- Collaborative Leadership: working cross-functionally with diverse teams.
- Ability to Drive Innovation & Continuous Improvement: research-driven AI development.

Key Value Drivers:
- Advancing AI-driven business transformation securely, optimally, and at scale.
- Reducing time-to-value for AI-powered innovations.
- Enabling AI governance, compliance, and ethical AI adoption.

Future Career Path:
The future career path is to establish oneself as a world-class SME with deep domain experience and thought leadership around the application of next-gen AI technologies in the construction, energy, and manufacturing domains. Career would progress from technology specialization into leadership roles, in various challenging positions such as Head – AI Strategy and Chief AI Officer, driving enterprise AI strategy, governance, and innovation for various ICs/BUs across pan-L&T businesses.

Posted 2 weeks ago


3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Data Visualization Engineer position at Zoetis India Capability Center (ZICC) in Hyderabad offers a unique opportunity to be part of a team that drives transformative advancements in animal healthcare. As a key member of the pharmaceutical R&D team, you will play a crucial role in creating insightful and interactive visualizations to support decision-making in drug discovery, development, and clinical research.

Your responsibilities will include designing and developing a variety of visualizations, from interactive dashboards to static visual representations, to summarize key insights from high-throughput screening and clinical trial data. Collaborating closely with cross-functional teams, you will translate complex scientific data into clear visual narratives tailored to technical and non-technical audiences.

In this role, you will also be responsible for maintaining and optimizing visualization tools, ensuring alignment with pharmaceutical R&D standards and compliance requirements. Staying updated on the latest trends in visualization technology, you will apply advanced techniques like 3D molecular visualization and predictive modeling visuals to enhance data representation. Working with various stakeholders such as data scientists, bioinformaticians, and clinical researchers, you will integrate, clean, and structure datasets for visualization purposes. Your role will also involve collaborating with Zoetis Tech & Digital teams to ensure seamless integration of IT solutions and alignment with organizational objectives.

To excel in this position, you should have a Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field. Experience in the pharmaceutical or biotech sectors will be a strong advantage. Proficiency in visualization tools such as Tableau and Power BI, and in programming languages like Python, R, or JavaScript, is essential. Additionally, familiarity with data handling tools, omics and network visualization platforms, and dashboarding tools will be beneficial. Soft skills such as strong storytelling ability, effective communication, collaboration with interdisciplinary teams, and analytical thinking are crucial for success in this role.

Travel requirements for this full-time position are minimal, ranging from 0-10%. Join us at Zoetis and be part of our journey to pioneer innovation and drive the future of animal healthcare through impactful data visualization.

Posted 2 weeks ago


2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Wissen Technology is Hiring for Java + Python Developer

About Wissen Technology:
Wissen Technology is a globally recognized organization known for building solid technology teams, working with major financial institutions, and delivering high-quality solutions in IT services. With a strong presence in the financial industry, we provide cutting-edge solutions to address complex business challenges.

Role Overview:
We're looking for a versatile Java + Python Developer who thrives in backend development and automation. You'll be working on scalable systems, integrating third-party services, and contributing to high-impact projects across fintech, data platforms, and cloud-native applications.

Experience: 2-10 Years
Location: Bengaluru

Key Responsibilities:
- Design, develop, and maintain backend services using Java and Python
- Build and integrate RESTful APIs, microservices, and data pipelines
- Write clean, efficient, and testable code across both Java and Python stacks
- Work on real-time, multithreaded systems and optimize performance
- Collaborate with DevOps and data engineering teams on CI/CD, deployment, and monitoring
- Participate in design discussions, peer reviews, and Agile ceremonies

Required Skills:
- 2-10 years of experience in software development
- Strong expertise in Core Java (8+) and Spring Boot
- Proficient in Python (data processing, scripting, API development)
- Solid understanding of data structures, algorithms, and multithreading
- Hands-on experience with REST APIs, JSON, SQL/NoSQL (PostgreSQL, MongoDB, etc.)
- Familiarity with Git, Maven/Gradle, Jenkins, Agile/Scrum

Preferred Skills:
- Experience with Kafka, RabbitMQ, or message queues
- Cloud services (AWS, Azure, or GCP)
- Knowledge of data engineering tools (Pandas, NumPy, PySpark, etc.)
- Docker/Kubernetes familiarity
- Exposure to ML/AI APIs or DevOps scripting

The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products. We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, and Quality Assurance & Test Automation.

Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work® Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride, and Camaraderie.

Website: www.wissen.com
LinkedIn: https://www.linkedin.com/company/wissen-technology
Wissen Leadership: https://www.wissen.com/company/leadership-team/
Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
Wissen Thought Leadership: https://www.wissen.com/articles/
Employee Speak:
https://www.ambitionbox.com/overview/wissen-technology-overview
https://www.glassdoor.com/Reviews/Wissen-Infotech-Reviews-E287365.htm
Great Place to Work:
https://www.wissen.com/blog/wissen-is-a-great-place-to-work-says-the-great-place-to-work-institute-india/
https://www.linkedin.com/posts/wissen-infotech_wissen-leadership-wissenites-activity-6935459546131763200-xF2k

Posted 2 weeks ago


0.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
We are seeking a Data Scientist/ML Engineer with experience in NLP, statistics, and Conversational AI. The ideal candidate is adept at using statistical modelling, machine learning algorithms, and Conversational AI, with strong experience in a variety of data mining and machine learning methods, coding algorithms, and running simulations. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with business partners to improve business outcomes.

Minimum Qualifications:
- Doctorate (academic) degree with 0 years of industry experience; or Master's degree with 3 years of related work experience; or Bachelor's degree with 5 years of related work experience in computer science, statistics, applied mathematics, or a related discipline.
- Experience solving problems with Conversational AI in a production environment for a high-tech manufacturing industry.
- Experience in Python and ML packages (NumPy, Keras, Seaborn, Scikit-learn).
- Excellent understanding of machine learning techniques and algorithms such as regression models, kNN, SVM, NB, and decision trees.
- Expertise/familiarity with one or more deep learning frameworks such as TensorFlow or PyTorch.

Responsibilities for Data Scientist/ML Engineer:
- Building end-to-end ML systems (including traditional ML models and Conversational AI) and deploying them to operate at scale.
- Researching and devising innovative statistical/ML models for data analysis.
- Developing and maintaining RAG (Retrieval-Augmented Generation) applications.
- Implementing changes to algorithms to improve AI performance and retrieval techniques.
- Using predictive modeling to increase and optimize the customer service experience and supply chain optimization.
- Identifying patterns in data using statistical modeling; developing custom data models and algorithms to apply to data sets.
- Working with business partners throughout the organization to find opportunities for using company data to drive solutions.
- Working with project managers to establish objectives for AI systems.

Posted 2 weeks ago


4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Type: Data Specialist
Experience: 4+ Years
Location: Bangalore
Notice Period: Immediate to 15 Days
Work Mode: 5 Days (WFO)
Candidates should be available for a face-to-face interview in Bangalore.

Key Responsibilities
- Design and build scalable machine learning models to solve real-world business problems.
- Translate business requirements into clear analytical solutions using data science techniques.
- Write optimized SQL queries to extract, manipulate, and analyze large datasets.
- Develop and deploy production-ready code using Python.
- Collaborate with data engineers, product managers, and business stakeholders to build end-to-end data products.
- Communicate complex findings in a clear, concise manner to non-technical stakeholders.
- Continuously monitor and improve the performance of deployed models.
- Mentor junior data scientists and review their code/models as needed.

Requirements/Skill Sets
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
- 4+ years of hands-on experience in data science, analytics, or machine learning.
- Expert proficiency in Python, including libraries like pandas, scikit-learn, and NumPy.
- Strong experience in SQL for data extraction and transformation.
- Proven ability to build and evaluate supervised and unsupervised ML models.
- Experience with model deployment, version control (e.g., Git), and working in cloud environments (AWS/GCP/Azure).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Posted 2 weeks ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Description:
We are seeking an experienced Traffic Data Scientist or Transportation Analyst to join our dynamic team. This role is pivotal in transforming raw traffic data into actionable insights. The ideal candidate will have a strong background in analyzing and modeling traffic patterns, with proven experience in congestion analytics, junction analysis, and building predictive traffic models. This is a unique opportunity to shape the core intelligence of our product and make a significant impact on real-world traffic challenges. While a traditional traffic engineering background is valued, we are primarily looking for an individual with hands-on experience in the specific use cases mentioned below, regardless of their formal title.

Key Responsibilities:
- Congestion Analytics: Develop and implement advanced models to identify, measure, and analyze traffic congestion. You will work to understand the root causes of congestion and predict its formation and dissipation.
- Junction Analytics: Analyze complex intersections and junctions to model traffic flow, identify bottlenecks, and suggest optimization strategies.
- Network Modeling: Design and build proprietary traffic network models on top of our existing data infrastructure to simulate traffic flow and test various scenarios.
- Predictive Analytics: Create, train, and deploy machine learning models to build a robust prediction engine for traffic conditions, including travel times, congestion levels, and incident impacts.
- Data-driven Insights: Collaborate with our software engineering team to integrate your models and algorithms into our frontend and backend systems, providing our users with actionable and easy-to-understand analytics.
- Cross-functional Collaboration: Work closely with our product and engineering teams to define feature requirements and contribute to the overall product roadmap. You will also mentor team members with a foundational knowledge of A&M.

Qualifications and Skills:
- Proven experience in traffic analysis and modeling, with a strong portfolio of projects in congestion analytics, junction analysis, and traffic prediction.
- Proficiency in data analysis and programming languages such as Python or R.
- Hands-on experience with data science libraries and frameworks (e.g., pandas, NumPy, scikit-learn, TensorFlow, or PyTorch).
- Solid understanding of statistical analysis and machine learning techniques.
- Experience in building and validating predictive models for time-series data.
- Familiarity with GIS and geospatial data analysis is a strong plus.
- Excellent problem-solving skills and the ability to translate complex data into clear and actionable insights.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- A Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Civil Engineering (with a focus on transportation), or a related field is preferred.

Posted 2 weeks ago


5.0 years

0 Lacs

India

Remote

Job Title: AI/ML Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role:
We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively.

Key Responsibilities:
- Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks.
- Collaborate with data scientists to transform prototypes into scalable, production-ready models.
- Deploy, monitor, and maintain ML pipelines in production environments.
- Perform data preprocessing, feature engineering, and feature selection on structured and unstructured data.
- Implement model performance evaluation metrics and improve accuracy through iterative tuning.
- Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage the model lifecycle.
- Maintain clear documentation and collaborate cross-functionally across teams.
- Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in ML model development and deployment.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, and NumPy.
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis.
- Experience building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models.
- Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.

Posted 2 weeks ago


5.0 - 10.0 years

0 Lacs

Haryana

On-site

You will be responsible for designing, developing, and implementing well-tested, reusable, and maintainable Python code, utilizing various Python libraries and frameworks such as FastAPI, Django, Flask, Pandas, and NumPy to implement functionalities. Your tasks will include integrating various data sources, such as APIs and databases, to manipulate and analyze data. It will be essential to optimize code for performance, scalability, and security, and to write unit and integration tests for code coverage and stability. Collaboration with designers and other developers to translate requirements into efficient solutions is a key aspect of this role. Additionally, you are expected to participate in code reviews, providing constructive feedback to enhance code quality, and to stay up-to-date with the latest Python trends, libraries, and best practices. Debugging and troubleshooting complex issues to ensure optimal application performance will be part of your responsibilities, as will proactively suggesting improvements and optimizations to the existing codebase.

Posted 2 weeks ago


5.0 - 9.0 years

0 Lacs

Haryana

On-site

Appnext offers end-to-end discovery solutions covering all the touchpoints users have with their devices. Thanks to Appnext's direct partnerships with top OEM brands and carriers, user engagement is achieved from the moment users personalize their device for the first time and throughout their daily mobile journey. Appnext Timeline, a patented behavioral analytics technology, is uniquely capable of predicting the apps users are likely to need next. This innovative solution means app developers and marketers can seamlessly engage with users directly on their smartphones through personalized, contextual recommendations. Established in 2012 and now with 12 offices globally, Appnext is the fastest-growing and largest independent mobile discovery platform in emerging markets.

As a Machine Learning Engineer, you will be in charge of building end-to-end machine learning pipelines that operate at huge scale, from data investigation, ingestion, and model training to deployment, monitoring, and continuous optimization. You will ensure that each pipeline delivers measurable impact through experimentation, high-throughput inference, and seamless integration with business-critical systems. This job combines 70% machine learning engineering with 30% algorithm engineering and data science. We're seeking an adtech pro who thrives in a team environment, possesses exceptional communication and analytical skills, and can navigate the high-pressure demands of delivering results, taking ownership, and leveraging sales opportunities.

Responsibilities:
- Build ML pipelines that train on real big data and perform at massive scale.
- Handle a massive responsibility: advertise on lucrative placements (Samsung app store, Xiaomi phones, TrueCaller).
- Train models that will make billions of daily predictions and affect hundreds of millions of users.
- Optimize and discover the best-solution algorithm for data problems, from implementing exotic losses to efficient grid search.
- Validate and test everything: every step should be measured and chosen via A/B testing, with use of observability tools.
- Own your experiments and your pipelines.
- Be frugal: optimize the business solution at minimal cost.
- Advocate for AI: be the voice of data science and machine learning, answering business needs.
- Build future products involving agentic AI and data science.
- Affect millions of users every instant and handle massive scale.

Requirements:
- MSc in CS/EE/STEM with at least 5 years of proven experience (or BSc with equivalent experience) as a Machine Learning Engineer, with a strong focus on MLOps, data analytics, software engineering, and applied data science (must).
- Hyper-communicator: ability to work with minimal supervision and maximal transparency; must understand requirements rigorously while frequently giving an efficient, honest picture of work progress and results; flawless verbal English (must).
- Strong problem-solving skills; drives projects from concept to production, working incrementally and smart; able to own features end-to-end: theory, implementation, and measurement. Articulate, data-driven communication is also a must.
- Deep understanding of machine learning, including the internals of all important ML models and ML methodologies.
- Strong real-world experience in Python and at least one other programming language (C#, C++, Java, Go); ability to write efficient, clear, and resilient production-grade code.
- Flawless in SQL.
- Strong background in probability and statistics.
- Experience with ML tools and models; experience conducting A/B tests.
- Experience using cloud providers and services (AWS) and Python frameworks: TensorFlow/PyTorch, NumPy, Pandas, scikit-learn (Airflow, MLflow, Transformers, ONNX, Kafka are a plus).
- AI/LLM assistance: candidates must hold all skills independently, without AI assist; with that, candidates are expected to use AI effectively, safely, and transparently.

Preferred:
- Deep knowledge of ML aspects including ML theory, optimization, deep learning tinkering, RL, uncertainty quantification, NLP, classical machine learning, and performance measurement.
- Prompt engineering and agentic workflow experience.
- Web development skills.
- Publications in leading machine learning conferences and/or Medium blogs.

Posted 2 weeks ago
