
1965 Preprocessing Jobs - Page 18

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Position: Data Scientist Intern
Company: Evoastra Ventures Pvt. Ltd.
Location: Remote
Duration: 1 month
Stipend: Unpaid
Type: Internship (Remote)
Open To: Students, freshers, and early professionals

About Evoastra Ventures
Evoastra Ventures is a research-first data and AI solutions company focused on delivering value through predictive analytics, market intelligence, and technology consulting. Our goal is to empower businesses by transforming raw data into strategic decisions. As an intern, you will work with real-world datasets and gain industry exposure that accelerates your entry into the data science domain.

Role Overview
We are seeking highly motivated and analytical individuals for our Data Scientist Internship. This role is designed to give you hands-on exposure to real datasets, problem-solving tasks, and model development under the guidance of industry professionals.

Responsibilities
Perform data cleaning, preprocessing, and transformation
Conduct exploratory data analysis (EDA) and identify trends
Assist in the development and evaluation of machine learning models
Contribute to reports and visual dashboards summarizing key insights
Document workflows and collaborate with team members on project deliverables
Participate in regular project check-ins and mentorship discussions

Tools & Technologies
Python, NumPy, Pandas, Scikit-learn
Matplotlib, Seaborn
Jupyter Notebook
GitHub (for collaboration and version control)
Power BI or Google Data Studio (optional but preferred)

Eligibility Criteria
Basic knowledge of Python, statistics, and machine learning concepts
Good analytical and problem-solving skills
Willingness to learn and adapt in a remote, team-based environment
Strong communication and time-management skills
Laptop with stable internet connection

What You Will Gain
Verified Internship Certificate
Letter of Recommendation (based on performance)
Real-time mentorship from professionals in data science and analytics
Project-based learning and portfolio-ready outputs
Priority consideration for future paid internships or full-time roles at Evoastra
Recognition in our internship alumni community

Application Process
Submit your resume via our internship application form at www.evoastra.in
Selected candidates will receive an onboarding email with further steps
This internship is fully remote and unpaid

Build your career foundation with hands-on data science projects that make an impact.
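For context on the cleaning and EDA duties listed above, here is a minimal sketch of a typical preprocessing pass with pandas and scikit-learn; the file name and the presence of a "target" column are hypothetical, not part of the posting.

```python
# Minimal preprocessing/EDA sketch; "orders.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("orders.csv")

# Basic EDA: shape, dtypes, missing values, summary statistics
print(df.shape)
print(df.dtypes)
print(df.isna().sum())
print(df.describe(include="all"))

# Simple cleaning: drop duplicates, fill numeric gaps with the median
df = df.drop_duplicates()
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Scale numeric features and hold out a test split for later modeling
X = df[numeric_cols].drop(columns=["target"], errors="ignore")
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
```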

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview:
We are seeking an Embedded AI Software Engineer with deep expertise in writing software for resource-constrained edge hardware. This role is critical to building optimized pipelines that leverage media encoders/decoders, hardware accelerators, and AI inference runtimes on platforms like NVIDIA Jetson, Hailo, and other edge AI SoCs. You will be responsible for developing highly efficient, low-latency modules that run on embedded devices, involving deep integration with NVIDIA SDKs (Jetson Multimedia, DeepStream, TensorRT) and broader GStreamer pipelines.

Key Responsibilities:
Media Pipeline & AI Model Integration
Implement hardware-accelerated video processing pipelines using GStreamer, V4L2, and custom media backends.
Integrate AI inference engines using NVIDIA TensorRT, DeepStream SDK, or similar frameworks (ONNX Runtime, OpenVINO, etc.).
Profile and optimize model loading, preprocessing, postprocessing, and buffer management for the edge runtime.

System-Level Optimization
Design software within strict memory, compute, and power budgets specific to edge hardware.
Utilize multimedia capabilities (ISP, NVENC/NVDEC) and leverage DMA and zero-copy mechanisms where applicable.
Implement fallback logic and error handling for edge cases in live deployment conditions.

Platform & Driver-Level Work
Work closely with kernel modules, device drivers, and board support packages to tune performance.
Collaborate with hardware and firmware teams to validate system integration.
Contribute to device provisioning, model updates, and boot-up behavior for AI edge endpoints.

Required Skills & Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Engineering, Electronics, Embedded Systems, or related fields.
Professional Experience: 2–4 years of hands-on development for edge/embedded systems using C++ (mandatory). Demonstrated experience with NVIDIA Jetson or equivalent edge AI hardware platforms.
Technical Proficiency: Proficient in C++11/14/17 and multi-threaded programming. Strong understanding of video codecs, media I/O pipelines, and encoder/decoder frameworks. Experience with GStreamer, V4L2, and multimedia buffer handling. Familiarity with TensorRT, DeepStream, CUDA, and NVIDIA's multimedia APIs. Exposure to other runtimes like HailoRT, OpenVINO, or the Coral Edge TPU SDK is a plus.

Bonus Points
Familiarity with build systems (CMake, Bazel), cross-compilation, and Yocto.
Understanding of AI model quantization, batching, and layer fusion for performance.
Prior experience with camera bring-up, video streaming, and inference on live feeds.

Contact Information:
To apply, please send your resume and portfolio details to hire@condor-ai.com with "Application: Embedded AI Software Engineer" in the subject line.

About Condor AI:
Condor is an AI engineering company where we use artificial intelligence models to deploy solutions in the real world. Our core strength lies in Edge AI, combining custom hardware with optimized software for fast, reliable, on-device intelligence. We work across smart cities, industrial automation, logistics, and security, with a team that brings over a decade of experience in AI, embedded systems, and enterprise-grade solutions. We operate lean, think globally, and build for production, from system design to scaled deployment.
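To illustrate the kind of hardware-accelerated pipeline the role describes, here is a minimal sketch using GStreamer's Python bindings; production code at this level would typically be C++, and the NVIDIA plugin names (nvv4l2decoder, nvvideoconvert) and file path assume a Jetson/DeepStream install, so treat this purely as an illustrative assumption.

```python
# Minimal sketch of launching a hardware-accelerated GStreamer pipeline.
# Assumes PyGObject plus NVIDIA's GStreamer plugins are installed; the
# input file path is a hypothetical placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_desc = (
    "filesrc location=/tmp/sample.h264 ! h264parse ! "
    "nvv4l2decoder ! nvvideoconvert ! "
    "video/x-raw,format=RGBA ! fakesink sync=false"
)
pipeline = Gst.parse_launch(pipeline_desc)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then release the pipeline
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```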

Posted 2 weeks ago

Apply

1.0 years

1 - 3 Lacs

Chandigarh

On-site

Internship Overview:
We are seeking enthusiastic and driven Machine Learning Interns to join our dynamic team for a 3-month intensive internship program. This internship offers a unique opportunity to gain hands-on experience in real-world ML applications and contribute to innovative projects. During the initial training phase, no stipend will be provided. A monthly stipend of ₹30,000 will only be provided if you are shortlisted and assigned to a live project after the successful completion of the training period, based on your demonstrated progress and performance.

Post-Internship Commitment:
Upon being shortlisted for a live project and receiving the stipend, a minimum service period of 1 year with the company is required. Should you choose to exit the company before completing this 1-year service period, a penalty of ₹360,000 will be applicable.

Key Responsibilities:
Assist in collecting, cleaning, and preprocessing data for machine learning models.
Support the development, training, and evaluation of various ML models.
Conduct research on different machine learning algorithms and techniques.
Collaborate with senior engineers and data scientists on ongoing projects.
Help in deploying and monitoring machine learning models.
Document code, experiments, and findings clearly and concisely.
Actively participate in team meetings and learning sessions.

Qualifications:
Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, Statistics, or a related technical field.
Strong foundational understanding of machine learning concepts and algorithms.
Proficiency in at least one programming language commonly used in ML (e.g., Python, R).
Familiarity with relevant ML libraries/frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
Basic understanding of data structures and algorithms.
Ability to learn quickly and adapt to new technologies.
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork abilities.

What We Offer:
A challenging and rewarding 3-month intensive training experience.
Mentorship from experienced Machine Learning Engineers and Data Scientists.
Exposure to real-world datasets and cutting-edge ML technologies.
Opportunity to work on diverse projects that impact our business.
Potential to be assigned to live projects with a monthly stipend of ₹30,000 upon successful training completion.
A collaborative and supportive learning environment.

Job Type: Internship
Contract length: 15 months
Pay: ₹10,000.00 - ₹30,635.66 per month
Benefits: Flexible schedule
Work Location: In person
Expected Start Date: 01/08/2025
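As a rough picture of the "train and evaluate" tasks interns would assist with, here is a minimal scikit-learn sketch; the CSV path and "label" column are hypothetical placeholders rather than anything specified by the posting.

```python
# Minimal train/evaluate sketch on a tabular dataset (placeholder file and column).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv").dropna()
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split
print(classification_report(y_test, model.predict(X_test)))
```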

Posted 2 weeks ago

Apply

0 years

7 - 8 Lacs

Hyderābād

Remote

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant - ML/CV Ops Engineer!

We are seeking a highly skilled ML CV Ops Engineer to join our AI Engineering team. This role is focused on operationalizing computer vision models: ensuring they are efficiently trained, deployed, monitored, and retrained across scalable infrastructure or edge environments. The ideal candidate has deep technical knowledge of ML infrastructure and DevOps practices, and hands-on experience with CV pipelines in production. You'll work closely with data scientists, DevOps, and software engineers to ensure computer vision models are always robust, secure, and production-ready.

Key Responsibilities:
End-to-End Pipeline Automation: Build and maintain ML pipelines for computer vision tasks (data ingestion, preprocessing, model training, evaluation, inference). Use tools like MLflow, Kubeflow, DVC, and Airflow to automate workflows.
Model Deployment & Serving: Package and deploy CV models using Docker and orchestration platforms like Kubernetes. Use model-serving frameworks (TensorFlow Serving, TorchServe, Triton Inference Server) to enable real-time and batch inference.
Monitoring & Observability: Set up model monitoring to detect drift, latency spikes, and performance degradation. Integrate custom metrics and dashboards using Prometheus, Grafana, and similar tools.
Model Optimization: Convert and optimize models using ONNX, TensorRT, or OpenVINO for performance and edge deployment. Implement quantization, pruning, and benchmarking pipelines.
Edge AI Enablement (optional but valuable): Deploy models on edge devices (e.g., NVIDIA Jetson, Coral, Raspberry Pi) and manage updates and logs remotely.
Collaboration & Support: Partner with data scientists to productionize experiments and guide model selection based on deployment constraints. Work with DevOps to integrate ML models into CI/CD pipelines and cloud-native architecture.

Qualifications we seek in you!
Minimum Qualifications
Bachelor's or Master's in Computer Science, Engineering, or a related field.
Sound experience in ML engineering, with significant work in computer vision and model operations.
Strong coding skills in Python and familiarity with scripting for automation.
Hands-on experience with PyTorch, TensorFlow, OpenCV, and model lifecycle tools like MLflow, DVC, or SageMaker.
Solid understanding of containerization and orchestration (Docker, Kubernetes).
Experience with cloud services (AWS/GCP/Azure) for model deployment and storage.

Preferred Qualifications:
Experience with real-time video analytics or image-based inference systems.
Knowledge of MLOps best practices (model registries, lineage, versioning).
Familiarity with edge AI deployment and acceleration toolkits (e.g., TensorRT, DeepStream).
Exposure to CI/CD pipelines and modern DevOps tooling (Jenkins, GitLab CI, ArgoCD).
Contributions to open-source ML/CV tooling or experience with labeling workflows (CVAT, Label Studio).

Why join Genpact?
Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
Make an impact: drive change for global enterprises and solve business challenges that matter.
Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Lead Consultant
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 16, 2025, 3:14:00 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
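As a concrete example of the "convert and optimize models using ONNX" responsibility, here is a minimal sketch of exporting a PyTorch vision model to ONNX, the usual first step before TensorRT/OpenVINO optimization or Triton serving; the model choice and file names are illustrative, not taken from the posting.

```python
# Minimal ONNX export sketch for a CV model (illustrative model and paths).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # NCHW batch used for tracing
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
```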

Posted 2 weeks ago

Apply

3.0 years

7 - 10 Lacs

Hyderābād

On-site

About the Job:
Sanofi is a pioneering global healthcare company committed to advancing the miracles of science to enhance the well-being of individuals worldwide. Operating in over 100 countries, our dedicated team is focused on reshaping the landscape of medicine, transforming the seemingly impossible into reality. We strive to provide life-changing treatment options and life-saving vaccines, placing sustainability and social responsibility at the forefront of our aspirations. Embarking on an expansive digital transformation journey, Sanofi is committed to accelerating its data transformation and embracing artificial intelligence (AI) and machine learning (ML) solutions. This strategic initiative aims to expedite research and development, enhance manufacturing processes, elevate commercial performance, and deliver superior drugs and vaccines to patients faster, ultimately improving global health and saving lives.

What you will be doing:
As a dynamic Data Science practitioner, you are passionate about challenging the status quo and ensuring the development and impact of Sanofi's AI solutions for the patients of tomorrow. You are an influential leader with hands-on experience deploying AI/ML and GenAI solutions, applying state-of-the-art algorithms with technically robust lifecycle management. Your keen eye for improvement opportunities and demonstrated ability to deliver solutions in cross-functional environments make you an invaluable asset to our team.

Main Responsibilities:
This role demands a dynamic and collaborative individual with a strong technical background, capable of leading the development and deployment of advanced machine learning while maintaining a focus on meeting business objectives and adhering to industry best practices. Key highlights include:
Model Design and Development: Lead the development of custom Machine Learning (ML) and Large Language Model (LLM) components for both batch and stream-processing-based AI/ML pipelines. Create model components, including data ingestion, preprocessing, search and retrieval, Retrieval-Augmented Generation (RAG), and fine-tuning, ensuring alignment with technical and business requirements. Develop and maintain full-stack applications that integrate ML models, focusing on both backend processes and frontend interfaces.
Collaborative Development: Work closely with data engineers, MLOps, software engineers, and other tech team members to collaboratively design, develop, and implement ML model solutions, fostering a cross-functional and innovative environment. Contribute to both backend and frontend development tasks to ensure seamless user experiences.
Model Evaluation: Collaborate with other data science team members to develop, validate, and maintain robust evaluation solutions and tools for assessing model performance, accuracy, consistency, and reliability during development and User Acceptance Testing (UAT). Implement model optimizations to enhance system efficiency based on evaluation results.
Model Deployment: Work closely with the MLOps team to facilitate the deployment of ML and GenAI models into production environments, ensuring reliability, scalability, and seamless integration with existing systems. Contribute to the development and implementation of deployment strategies for ML and GenAI models. Implement frontend interfaces to monitor and manage deployed models effectively.
Internal Collaboration: Collaborate closely with product teams, business stakeholders, and data science team members to ensure the smooth integration of machine learning models into production systems. Foster strong communication channels and cooperation across different teams for successful project outcomes.
Problem Solving: Proactively troubleshoot complex issues related to machine learning model development and data pipelines. Innovatively develop solutions to overcome challenges, contributing to continuous improvement in model performance and system efficiency.

Key Functional Requirements & Qualifications:
Education and experience: PhD in mathematics, computer science, engineering, physics, statistics, economics, operations research, or a related quantitative discipline with strong coding skills, OR a Master's degree in a relevant domain with 3+ years of data science experience.
Technical skills: Disciplined AI/ML development, including CI/CD and orchestration. Cloud and high-performance computing proficiency (AWS, GCP, Databricks, Apache Spark). Experience deploying models in agile, product-focused environments. Full-stack AI application expertise preferred, including experience with front-end frameworks (e.g., React) and backend technologies.
Communication and collaboration: Excellent written and verbal communication. A demonstrated ability to collaborate with cross-functional teams (e.g., business, product, and digital).

Why Choose Us?
Bring the miracles of science to life alongside a supportive, future-focused team.
Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs.

Sanofi achieves its mission, in part, by offering rewarding career opportunities which inspire employee growth and development. Our 6 Recruitment Principles clarify our commitment to you and your role in driving your career:
Our people are responsible for managing their career.
Sanofi posts all non-executive opportunities for our people.
We give priority to internal candidates.
Managers provide constructive feedback to all internal interviewed candidates.
We embrace diversity to hire the best talent.
We expect managers to encourage career moves across the whole organization.

Pursue Progress. Discover Extraordinary.
Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing, a desire to make miracles happen. So, let's be those people.
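Since the role centers on RAG components, here is a minimal retrieval sketch assuming sentence-transformers and FAISS are available; the document snippets and the generate_answer() helper are hypothetical placeholders, and a real pipeline would add chunking, metadata, and an actual LLM call.

```python
# Minimal RAG retrieval sketch (toy corpus; generate_answer() is hypothetical).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Vaccine batch release requires QC sign-off.",
    "Stability studies run at 25C and 40C.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product on unit vectors = cosine
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

context = retrieve("What temperatures are used in stability studies?")
print(context)
# An LLM would then be prompted with the retrieved context, e.g.:
# answer = generate_answer(question, context)  # hypothetical LLM wrapper
```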

Posted 2 weeks ago

Apply

2.0 years

10 Lacs

Gurgaon

On-site

Gurgaon, India

We are seeking an Associate Consultant to join our India team based in Gurgaon. This role at Viscadia offers a unique opportunity to gain hands-on experience in the healthcare industry, with comprehensive training in core consulting skills such as critical thinking, market analysis, and executive communication. Through project work and direct mentorship, you will develop a deep understanding of healthcare business dynamics and build a strong foundation for a successful consulting career.

ROLES AND RESPONSIBILITIES
Technical Responsibilities
Design and build full-stack forecasting and simulation platforms using modern web technologies (e.g., React, Node.js, Python) hosted on AWS infrastructure (e.g., Lambda, EC2, S3, RDS, API Gateway).
Automate data pipelines and model workflows using Python for data preprocessing, time-series modeling (e.g., ARIMA, Exponential Smoothing), and backend services.
Develop and enhance product positioning, messaging, and resources that support the differentiation of Viscadia from its competitors.
Conduct research and focus groups to surface key insights that strengthen positioning and messaging.
Replace legacy Excel/VBA tools with scalable, cloud-native applications, integrating dynamic reporting features and user controls via a web UI.
Use SQL and cloud databases (e.g., AWS RDS, Redshift) to query and transform large datasets as inputs to models and dashboards.
Develop interactive web dashboards using frameworks like React + D3.js, or embed tools like Power BI/Tableau into web portals to communicate insights effectively.
Implement secure, modular APIs and microservices to support modularity, scalability, and seamless data exchange across platforms.
Ensure cost-effective and reliable deployment of solutions via AWS services, CI/CD pipelines, and infrastructure-as-code (e.g., CloudFormation, Terraform).

Business Responsibilities
Support the development and enhancement of forecasting and analytics platforms tailored to the needs of pharmaceutical clients across various therapeutic areas.
Build an in-depth understanding of pharma forecasting concepts, disease areas, treatment landscapes, and market dynamics to contextualize forecasting models and inform platform features.
Partner with cross-functional teams to ensure forecast deliverables align with client objectives, timelines, and decision-making needs.
Contribute to a culture of knowledge sharing and continuous improvement by mentoring junior team members and helping codify best practices in forecasting and business analytics.
Grow into a client-facing role, combining an understanding of commercial strategy with forecasting expertise to lead engagements and drive value for clients.

QUALIFICATIONS
Bachelor's degree (B.Tech/B.E.) from a premier engineering institute, preferably in Computer Science, Information Technology, Electrical Engineering, or related disciplines.
2+ years of experience in full-stack development, with a strong focus on designing, developing, and maintaining AWS-based applications and services.

SKILLS & TECHNICAL PROFICIENCIES
Technical Skills
Proficient in Python, with practical experience using libraries such as pandas, NumPy, matplotlib/seaborn, and statsmodels for data analysis and statistical modeling.
Strong command of SQL for data querying, transformation, and seamless integration with backend systems.
Hands-on experience in designing and maintaining ETL/ELT data pipelines, ensuring efficient and scalable data workflows.
Solid understanding and applied experience with cloud platforms, particularly AWS; working familiarity with Azure and Google Cloud Platform (GCP).
Full-stack web development expertise, including building and deploying modern web applications, web hosting, and API integration.
Proficient in Microsoft Excel and PowerPoint, with advanced skills in data visualization and delivering professional presentations.

Soft Skills
Excellent verbal and written communication skills, with the ability to effectively engage both technical and non-technical stakeholders.
Strong analytical thinking and problem-solving abilities, with a structured and solution-oriented mindset.
Demonstrated ability to work independently as well as collaboratively within cross-functional teams.
Adaptable and proactive, with a willingness to thrive in a dynamic, fast-growing environment.
Genuine passion for consulting, with a focus on delivering tangible business value for clients.

Domain Expertise (Good to have)
Strong understanding of pharmaceutical commercial models, including treatment journeys, market dynamics, and key therapeutic areas.
Experience working with and interpreting industry-standard datasets such as IQVIA, Symphony Health, or similar secondary data sources.
Familiarity with product lifecycle management, market access considerations, and sales performance tracking metrics used across the pharmaceutical value chain.
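For reference on the time-series modeling mentioned above (ARIMA, Exponential Smoothing), here is a minimal statsmodels sketch; the monthly demand series is synthetic placeholder data, not client data.

```python
# Minimal forecasting sketch with statsmodels: ARIMA and Holt-Winters smoothing.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand series (placeholder data)
idx = pd.date_range("2022-01-01", periods=36, freq="MS")
y = pd.Series(100 + np.arange(36) * 2.0 + np.random.normal(0, 5, 36), index=idx)

# ARIMA(1,1,1) fit and a 12-month forecast
arima_fit = ARIMA(y, order=(1, 1, 1)).fit()
arima_forecast = arima_fit.forecast(steps=12)

# Holt-Winters exponential smoothing with an additive trend
es_fit = ExponentialSmoothing(y, trend="add").fit()
es_forecast = es_fit.forecast(12)

print(arima_forecast.head())
print(es_forecast.head())
```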

Posted 2 weeks ago

Apply

0 years

0 Lacs

Delhi

On-site

We are seeking a motivated and detail-oriented AI Agent Developer Intern to join our technology team. This is an exciting opportunity to gain hands-on experience in the rapidly evolving field of artificial intelligence. The successful candidate will work closely with our senior developers to design, build, and refine AI agents that automate complex tasks and create intelligent solutions for our business needs. This role is ideal for a student or recent graduate who is passionate about AI and eager to apply their academic knowledge to real-world challenges.

Key Responsibilities:
Assist in the design, development, and deployment of AI-powered agents using Large Language Models (LLMs) and other machine learning frameworks.
Develop, test, and refine prompts to ensure our AI agents produce accurate, relevant, and reliable outputs.
Integrate AI agents with various internal systems, databases, and third-party APIs to enable seamless workflows.
Participate in the entire model lifecycle, including data collection, preprocessing, training, and performance evaluation.
Collaborate with the engineering and product teams to understand project requirements and translate them into technical specifications.
Troubleshoot and debug issues with AI agents to ensure high performance and stability.
Stay up-to-date with the latest advancements in generative AI, LLMs, and agent-based systems.

Required Qualifications:
Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related technical field.
Solid programming fundamentals with proficiency in Python.
A strong, foundational understanding of AI and Machine Learning concepts.
Familiarity with the concept of APIs and how to interact with them.
Excellent problem-solving skills and the ability to work independently as well as in a team.
Strong verbal and written communication skills.

Preferred Skills and Experience:
Hands-on experience with Large Language Models (e.g., GPT series, Llama, Claude) through projects or coursework.
Experience with common AI/ML libraries and frameworks such as TensorFlow, PyTorch, or Scikit-learn.
A portfolio of personal or academic projects that demonstrates an interest and capability in AI development.
Basic understanding of Natural Language Processing (NLP) techniques.
Familiarity with cloud computing platforms (e.g., Google Cloud, AWS, Azure) is a plus.
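To give a sense of what "AI agent" development looks like in practice, here is a minimal, framework-free sketch of a tool-using agent loop; call_llm() is a hypothetical stand-in for whatever LLM API the team uses, and the weather tool is a toy example, so this is an illustrative sketch rather than the company's actual stack.

```python
# Minimal agent-loop sketch: the model either requests a tool or returns an answer.
import json

def get_weather(city: str) -> str:
    """Toy tool; a real agent would call an internal or third-party API here."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM client call.
    Returns a canned final answer so the sketch runs end to end."""
    return json.dumps({"answer": "Placeholder response from the stub LLM."})

def run_agent(user_request: str, max_steps: int = 3) -> str:
    history = [f"User: {user_request}"]
    for _ in range(max_steps):
        # Ask the model to reply with JSON: either a tool call or a final answer.
        reply = call_llm(
            "\n".join(history)
            + '\nRespond as JSON: {"tool": name, "args": {...}} or {"answer": text}'
        )
        decision = json.loads(reply)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(f"Tool {decision['tool']} returned: {result}")
    return "Agent stopped without a final answer."

print(run_agent("What's the weather in Delhi?"))
```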

Posted 2 weeks ago

Apply

3.0 years

6 - 12 Lacs

Mohali

On-site

Job Description:
We're looking for a Data Engineer to join our growing AI team! If you have experience building and managing data pipelines for AI and LLM (Large Language Model) applications, this is your chance to help shape real-world AI features across multiple products.

Key Skills: Data Pipelines, Python, SQL, ETL, FAISS, Pinecone, Embeddings, LangChain

Responsibilities:
● Build and manage ETL pipelines for integrating AI features
● Handle text preprocessing and embedding generation for LLM applications
● Manage and optimize vector databases like Pinecone or FAISS
● Collaborate with engineering and product teams to support AI deployment
● Ensure the delivery of high-quality, AI-ready datasets

Requirements:
● 3+ years of experience with Python and SQL-based data processing
● Hands-on experience with embedding models or vector stores
● Familiarity with cloud platforms (AWS, GCP, Azure) and scalable data workflows
● Strong debugging, documentation, and problem-solving skills
● Must be available to work on-site full-time

Job Type: Full-time
Pay: ₹50,000.00 - ₹100,000.00 per month
Benefits: Food provided, Health insurance, Provident Fund
Work Location: In person
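As a concrete illustration of the embedding-plus-vector-store workflow named in the key skills, here is a minimal FAISS sketch; the embed() helper stands in for whatever embedding model the team uses and returns random vectors here, so treat it as a hypothetical placeholder.

```python
# Minimal sketch: build, persist, and query a FAISS index from embeddings.
import faiss
import numpy as np

DIM = 384  # embedding dimensionality is model-dependent

def embed(chunks: list[str]) -> np.ndarray:
    """Hypothetical placeholder: returns one DIM-sized vector per text chunk."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(chunks), DIM)).astype("float32")

chunks = ["Refund policy: 30 days.", "Shipping takes 3-5 business days."]
vectors = embed(chunks)

index = faiss.IndexFlatL2(DIM)   # exact L2 search; fine for small corpora
index.add(vectors)
faiss.write_index(index, "docs.faiss")

# Later: reload the index and search with an embedded user question
index = faiss.read_index("docs.faiss")
distances, ids = index.search(embed(["How long does shipping take?"]), k=1)
print(chunks[ids[0][0]], distances[0][0])
```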

Posted 2 weeks ago

Apply

0 years

12 Lacs

Ahmedabad

On-site

Role Overview:
We are seeking a highly skilled Data Scientist specializing in Generative AI to design, develop, and deploy state-of-the-art AI models solving real-world business challenges. You will work extensively with Large Language Models (LLMs), Generative Adversarial Networks (GANs), Retrieval-Augmented Generation (RAG) frameworks, and transformer architectures to create production-ready solutions across various domains.

Key Responsibilities:
Design, develop, and fine-tune advanced Generative AI models such as LLMs, GANs, and diffusion models.
Implement and enhance RAG and transformer-based architectures for contextual understanding and document intelligence.
Customize and optimize LLMs for specific domain applications.
Build, maintain, and optimize ML pipelines and infrastructure for model training, evaluation, and deployment.
Collaborate with engineering teams to integrate AI models into user-facing applications.
Stay updated with the latest trends and research in Generative AI, open-source frameworks, and tools.
Analyze model outputs for quality and performance, ensuring adherence to ethical AI standards.

Required Skills:
Strong proficiency in Python and deep learning frameworks including TensorFlow, PyTorch, and HuggingFace Transformers.
Deep understanding of GenAI architectures including LLMs, RAG, GANs, and autoencoders.
Experience with fine-tuning models using techniques such as LoRA, PEFT, or equivalents.
Knowledge of vector databases (e.g., FAISS, Pinecone) and embedding generation methods.
Experience handling datasets, preprocessing, and synthetic data generation.
Solid grasp of NLP concepts, prompt engineering, and safe AI practices.
Hands-on experience with API development, model deployment, and cloud platforms (AWS, GCP, Azure).

Job Type: Full-time
Pay: From ₹100,000.00 per month
Benefits: Flexible schedule, Leave encashment, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay
Work Location: In person
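For the LoRA/PEFT fine-tuning requirement above, here is a minimal sketch of attaching a LoRA adapter to a Hugging Face causal LM with the peft library; the base model and hyperparameters are illustrative, and a real run would add a dataset, a Trainer/accelerate setup, and evaluation.

```python
# Minimal LoRA setup sketch (illustrative base model and hyperparameters).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "gpt2"  # small stand-in; swap for the domain model of choice
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

lora_cfg = LoraConfig(
    r=8,                 # low-rank adapter dimension
    lora_alpha=16,       # scaling factor
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable
```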

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

JD: Senior Business Intelligence Partner

Randstad Sourceright is a global talent leader, providing solutions and expertise that help companies position for growth, execute on strategy, and improve business agility. Our experience encompasses all facets of talent acquisition, covering permanent employees as well as the contingent and contractor workforce. Key offerings include Managed Services Provider (MSP) programs, Recruitment Process Outsourcing (RPO), and Blended Workforce Solutions. Randstad Sourceright offers solutions globally in North America, EMEA, and APAC. Working for a multi-country organization means working with clients and colleagues from different backgrounds. This results in a digital way of working and requires a proactive and culturally inclusive mindset.

Purpose of the job:
This role partners with Regional and Global Stakeholders within RSR (who can include, but are not limited to, operational teams, data engineering, BI support teams, internal teams, and RSR clients) to provide business solutions through data-driven insights and recommendations. We're seeking someone with a flair for data storytelling to join our team. In this role, you'll transform complex data into compelling narratives that drive business decisions. This position has operational and technical responsibility for driving data insights, analytics, and consulting across all operating companies within RSR. It will develop processes and strategies to consolidate, automate, and improve problem solving for external clients and internal stakeholders. As a Senior Business Intelligence Partner, you will oversee the end-to-end delivery of regional and global accounts with respect to data and analytics. This will include working with data engineering to provide usable datasets, creating dashboards with meaningful insights and visualizations within our BI solution (DOMO), and ongoing communication and partnering with stakeholders. The key to this is that, as a Business Intelligence Partner, you will be a commercial and operational expert who can translate data into insights. You will use this to mitigate risk, find operational and revenue-generating opportunities, and provide business solutions.

Position Summary

Consulting / Partnering (60%):
● Leverage data analytics to extract actionable insights from external client and internal RSR data, to help drive strategic business decisions
● Analyze the client's data to present account leadership and the client with potential approaches to identify service delivery improvement opportunities and value-added initiatives
● Conduct in-depth data analysis to uncover revenue-generating opportunities, supporting account leadership with data-driven recommendations
● Serve as a key liaison between clients, operations teams, and data engineering, translating business needs into technical requirements
● Actively collect and monitor stakeholder feedback to quantify Data and Analytics impact and ROI
● Collaborate with global teams to standardize data practices and share analytics best practices, along with coaching/mentoring junior team members
● Apply expertise in data visualization and storytelling to effectively communicate complex insights to stakeholders
● Build and maintain client/stakeholder relationships through consistent improvement and delivery of high-value data analytics
● Provide thought leadership in business intelligence and data analytics methodologies
● Liaise with client program owners and ATS vendors/partners to optimize data integration and reporting capabilities

Implementation of BI solutions (30%):
● Be the implementation workstream lead responsible for delivering a reporting suite and introductory standard for the account and client
● Provide recommendations on organizing the same concept offerings into the appropriate 'tier-based' programs currently licensed for DOMO/Talent Radar (tier-based program requirements TBD)
● Be responsible for ensuring all contractual conditions around reporting, financial elements, SLAs, and KPIs are reviewed, understood, and adhered to
● Assess operating reality to validate solution requirements and adjust as needed
● Implement and reinforce tools, including standards, procedures, and documentation
● Ensure the team has living documentation on reporting requirements, data processes, and other program-specific content
● Oversee the design and build of insightful, scalable, and actionable visualizations
● Develop, document, and maintain a comprehensive quality management program
● Build, implement, and schedule production standards for files and reports to ensure accuracy and timeliness
● Ensure reporting quality and accuracy by evaluating, integrating, and complementing data sources
● Execute a continuous process improvement process
● Apply knowledge of general database functions, data storage, data models, SQL, and data transfer protocols

Maintain Standards (10%):
● Work within the agreed parameters aligned to the global method of BI production
● Leverage an agile work environment to incorporate new, innovative, and value-adding components into existing portfolios
● Organize a feedback standard from stakeholders (internal and external) in order to create use cases, case study materials, and other customer-facing material (coordinate with RFP and sales approach as appropriate)
● Promote available resources and enterprise tools for operations training/coaching needs from a BI perspective
● Ensure data governance principles and guidelines are being met

Education
● Bachelor's degree preferred (EMEA), mandatory (APAC)
● Master's degree a plus

Mandatory Experience
● 8+ years of experience in data analytics and/or consulting in a delivery environment
● Experience with DOMO (or other visualization tools such as Tableau, Spotfire, Power BI) and GCP
● Management Information/Business Intelligence/Analytics background is essential
● Project management and/or process improvement experience
● Talent Acquisition/MSP/RPO/People Analytics experience

Preferred Experience
● Advanced Analytics/Data Science/AI/ML experience preferred

Knowledge, Skills, and Abilities:
● Technical Skills:
○ Data Visualization:
■ Expertise in data visualization tools such as DOMO, Tableau, Spotfire, Qlikview, and Power BI, including data visualization methods such as choosing appropriate chart types and building dashboards
○ Data Manipulation:
■ Proficiency in data cleansing and preprocessing techniques to ensure data quality
■ Strong SQL skills for data extraction, transformation, and analysis
■ Proficiency in advanced Excel, including macros, pivot tables, and data modeling
■ Knowledge of data governance principles and best practices for maintaining data integrity
○ Analytics:
■ Knowledge of key descriptive analytics techniques, including but not limited to data summarization (central tendency, dispersion, etc.), data visualization methods, time series analysis, correlation analysis, cross-tabulation, frequency distributions, Pareto analysis, cohort analysis, funnel analysis, RFM analysis, exploratory data analysis, and basic cluster analysis for segmentation
■ Proficiency in data mining techniques to uncover patterns and trends in large datasets
■ Experience with predictive modeling for forecasting and decision support, knowledge of A/B testing methodologies to drive continuous improvement, and the ability to apply statistical techniques to inform strategy and investment decisions is a plus
● Client-facing Experience:
○ Communication:
■ Excellent communication and facilitation skills, required across all levels of external and internal organizational stakeholders
○ Presentation & Storytelling:
■ Expertise in narrative structures and audience analysis to create compelling data narratives that drive business decisions
■ Excellent presentation skills used in delivering solutions to senior leadership and client stakeholders
■ Skilled in translating data insights into clear, non-technical language for diverse stakeholders
■ Demonstrated continuous improvement, process documentation, and workflow skills

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Surat, Gujarat, India

On-site

About The Position
We are looking for a highly motivated and enthusiastic AI/ML Engineer to join our growing tech team. This role is ideal for a 6-month intern or individuals with up to 1 year of experience who have a strong interest in artificial intelligence and machine learning. You will be part of a collaborative environment where you can work on real-world projects, contribute to product innovation, and develop your skills in a fast-paced, supportive workplace.

What you will do
Assist in designing and implementing machine learning models and algorithms for various use cases.
Support data collection, preprocessing, cleaning, and exploratory data analysis.
Collaborate with the technical team to train, test, and validate ML models.
Participate in building prototypes and deploying models into production environments.
Contribute to the research and application of AI concepts such as NLP, computer vision, or predictive analytics.
Work with tools and frameworks like Python, Scikit-learn, TensorFlow, Keras, or PyTorch.
Stay updated with the latest advancements in AI/ML technologies and methodologies.
Collaborate with developers and analysts to understand project goals and deliver AI-driven solutions.

Requirements
Strong foundation in Python and a basic understanding of ML libraries (Scikit-learn, NumPy, Pandas).
Knowledge of machine learning concepts such as regression, classification, clustering, etc.
Familiarity with deep learning basics and frameworks like TensorFlow or PyTorch is a plus.
Understanding of data preprocessing and feature engineering.
Good problem-solving and analytical skills.
Ability to work independently and within a team.
Eagerness to learn and explore new technologies in AI/ML.
Internship or academic project experience in ML/AI is a plus.

Perks & Benefits
Competitive salary packages
Opportunities for professional development and AI/ML certifications
Company strength of 120+
12 paid leaves + festival holidays
Friendly and supportive work environment
On-time salary
Monthly refreshment activities
Career growth opportunities
Celebrations (birthdays, festivals, picnics, movies, lunches, dinners, and more)

Educational Qualifications
A bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is preferred.

We're dedicated to fostering a joyful workplace and are committed to offering equal opportunities for all. All eligible candidates will be considered for employment without discrimination based on race, color, ancestry, or religion. We prioritize a comprehensive range of company perks and benefits to support our diverse and inclusive team.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Tech Stack
Backend: Python/Node/Nest
Frontend: Angular/React
Location: Open

Roles and Responsibilities:
We are seeking a highly skilled Full Stack Lead with Python skills and a strong background in Machine Learning to design, develop, and implement cutting-edge solutions using unsupervised learning methods. The ideal candidate will have hands-on experience with various ML algorithms and frameworks, and a solid understanding of data processing and analysis.

Key Responsibilities:
Design and implement scalable web applications and platforms using technologies such as TypeScript, NestJS, Angular, NodeJS, ExpressJS, TypeORM, and Postgres.
Good understanding of web and REST API design patterns.
Experience with AWS technologies such as EKS, ECS, ECR, Fargate, EC2, Lambda, and ALB is an added advantage.
Hands-on experience with unit test frameworks like Jest.
Good working knowledge of JIRA, Confluence, and Git.
Basic knowledge of Kubernetes and Terraform for infrastructure as code.
Basic knowledge of Docker and Docker Compose.
Strong understanding of microservices architecture and the ability to implement components independently.
Proven track record of problem-solving skills.
Excellent communication skills.
Develop, test, and maintain robust Python code for machine learning applications.
Implement unsupervised learning algorithms to uncover hidden patterns and insights from large datasets.
Collaborate with cross-functional teams to gather requirements and deliver scalable ML solutions.
Perform data preprocessing, feature extraction, and data augmentation to enhance model performance.
Design and conduct experiments to validate and optimize unsupervised learning models.
Utilize ML libraries and frameworks (e.g., Scikit-learn, TensorFlow, PyTorch) to build and deploy models.
Document and present findings and insights to stakeholders in a clear and concise manner.
Stay updated with the latest advancements in machine learning and AI technologies.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
Proven experience as a Python Developer, with a strong portfolio of ML projects.
In-depth knowledge of unsupervised learning techniques (e.g., clustering, anomaly detection, dimensionality reduction).
Proficiency in Python and ML libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
Experience with data preprocessing and transformation techniques.
Strong understanding of statistics, probability, and linear algebra.
Ability to work independently and as part of a team in a fast-paced environment.
Excellent problem-solving skills and attention to detail.
Strong communication skills, both written and verbal.

Preferred Qualifications:
Experience with big data technologies (e.g., Hadoop, Spark).
Familiarity with cloud platforms like AWS and deploying ML models in production.
Knowledge of deep learning architectures and frameworks.
Prior experience in a relevant industry (e.g., healthcare, finance, retail).
Publications or contributions to open-source ML projects.

Primary Skill: Full stack + Python + AI/ML
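To illustrate the unsupervised-learning techniques listed in the qualifications (dimensionality reduction, clustering, anomaly detection), here is a minimal scikit-learn sketch; the synthetic feature matrix is a placeholder for real product data.

```python
# Minimal unsupervised-learning sketch: PCA, k-means, and isolation forest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))          # 500 samples, 12 raw features (synthetic)

X_scaled = StandardScaler().fit_transform(X)
X_2d = PCA(n_components=2).fit_transform(X_scaled)   # dimensionality reduction

clusters = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X_2d)
outliers = IsolationForest(random_state=42).fit_predict(X_scaled)  # -1 = anomaly

print(np.bincount(clusters))
print((outliers == -1).sum(), "suspected anomalies")
```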

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Exp: 2+ years

Job Responsibilities
Undertake data collection, preprocessing, and analysis
Build models to address business problems
Present information using data visualization techniques
Propose solutions and strategies to business challenges
Collaborate with engineering and product development teams
Develop machine learning algorithms
Conduct data-driven experiments to drive business decisions

Required Skills
Strong experience in machine learning and operations research, with team leadership capabilities
Proficiency in R, SQL, and Python
Hands-on experience with business intelligence tools (e.g., Tableau, Power BI)
Strong math skills (e.g., statistics, algebra)
Analytical mindset and solid business acumen
Excellent communication and presentation skills

Preferred Algorithms and Use Cases
Text Analytics: Experience with Natural Language Processing (NLP) algorithms for sentiment analysis, text classification, and named entity recognition
Predictive Modeling: Familiarity with various models, including regression, classification, and time series forecasting methods such as ARIMA, SARIMA, and Prophet

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team of solution architects, data scientists, engineers, product managers, and designers collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business.

Responsibilities
Analyze raw data: assess quality, cleanse, and structure it for downstream processing.
Design accurate and scalable prediction algorithms.
Collaborate with the engineering team to bring analytical prototypes to production.
Generate actionable insights for business improvements.
Statistical Modeling: Develop and implement core statistical models, including linear and logistic regression, decision trees, and various classification algorithms. Analyze and interpret model outputs to inform business decisions.
Advanced NLP: Work on complex NLP tasks, including data cleansing, text preprocessing, and feature engineering. Develop models for text classification, sentiment analysis, and entity recognition.
LLM Integration: Design and optimize pipelines for integrating Large Language Models (LLMs) into applications, with a focus on Retrieval-Augmented Generation (RAG) systems. Work on fine-tuning LLMs to enhance their performance on domain-specific tasks.
ETL Processes: Design ETL (Extract, Transform, Load) processes to ensure that data is accurately extracted from various sources, transformed into usable formats, and loaded into data warehouses or databases for analysis.
BI Reporting and SQL: Collaborate with BI teams to ensure that data pipelines support efficient reporting. Write complex SQL queries to extract, analyze, and visualize data for business intelligence reports. Ensure that data models are optimized for reporting and analytics.
Data Storage and Management: Collaborate with data engineers to design and implement efficient storage solutions for structured datasets and semi-structured text datasets. Ensure that data is accessible, well-organized, and optimized for retrieval.
Model Evaluation and Optimization: Regularly evaluate models using appropriate metrics and improve them through hyperparameter tuning, feature selection, and other optimization techniques. Deploy models in production environments and monitor their performance.
Collaboration: Work closely with cross-functional teams, including software engineers, data engineers, and product managers, to integrate models into applications and ensure they meet business requirements.
Innovation: Stay updated with the latest advancements in machine learning, NLP, and data engineering. Experiment with new algorithms, tools, and frameworks to continuously enhance the capabilities of our models and data processes.

Qualifications
Overall Experience: 5+ years of overall experience working in a modern software engineering environment, with exposure to best practices in code management, DevOps, and cloud data/ML engineering. Proven track record of developing and deploying machine learning models in production.
ML Experience: 3+ years of experience in machine learning engineering or data science with a focus on fundamental statistical modeling. Experience in feature engineering, basic model tuning, and understanding model drift over time. Strong foundations in statistics for applied ML.
Data Experience: 1+ year(s) in building data engineering ETL processes and BI reporting.
NLP Experience: 1+ year(s) of experience working on NLP use cases, including large-scale text data processing and storage, fundamental NLP models for text classification and topic modeling, and/or, more recently, LLM models and their applications.
Core Technical Skills: Proficiency in Python and relevant ML and/or NLP-specific libraries. Strong SQL skills for data querying, analysis, and BI reporting. Experience with ETL tools and data pipeline management.
BI Reporting: Experience in designing and optimizing data models for BI reporting, using tools like Tableau, Power BI, or similar.
Education: Bachelor's or Master's degree in Computer Science / Data Science.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Data Engineering
Architect, develop, and maintain highly scalable and robust data pipelines using Apache Kafka, Apache Spark, and Apache Airflow.
Design and optimize data storage solutions, including Amazon Redshift, S3, or comparable platforms, to support large-scale analytics.
Ensure data quality, integrity, security, and compliance across all data platforms.

Data Science
Design, develop, and deploy sophisticated machine learning models to solve complex business challenges.
Build and optimize end-to-end ML pipelines, including data preprocessing, feature engineering, model training, and deployment.
Drive Generative AI initiatives, creating innovative products and solutions.
Conduct in-depth analysis of large, complex datasets to generate actionable insights and recommendations.
Work closely with cross-functional stakeholders to understand business requirements and deliver data-driven strategies.

Collaboration and Mentorship
Mentor and guide junior data scientists and data engineers, fostering a culture of continuous learning and professional growth.
Contribute to the development of best practices in Data Science, Data Engineering, and Generative AI.

General
Write clean, efficient, and maintainable code in Python and at least one other language (e.g., C#, Go, or equivalent).
Participate in code reviews, ensuring adherence to best practices and coding standards.
Stay abreast of the latest industry trends, tools, and technologies in Data Engineering, Data Science, and Generative AI.
Document processes, models, and workflows to ensure knowledge sharing and reproducibility.
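To make the Kafka/Spark/Airflow pipeline responsibility concrete, here is a minimal Apache Airflow DAG sketch for a daily ingest-then-transform flow; the task bodies are stubs, and the DAG id, connections, and table names are hypothetical, not specified by the posting.

```python
# Minimal Airflow DAG sketch: daily extract task followed by a transform/load task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from Kafka / source systems")

def transform_and_load():
    print("run a Spark job, then load curated tables into Redshift/S3")

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(
        task_id="transform_and_load", python_callable=transform_and_load
    )

    extract_task >> load_task  # enforce ordering between the two tasks
```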

Posted 2 weeks ago

Apply

6.0 - 14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Coditas Solutions is seeking a highly skilled and motivated Lead Data Scientist to join our dynamic team. As a Data Scientist, you will play a key role in designing, implementing, and optimizing machine learning models and algorithms to solve complex business challenges. If you have a passion for leveraging AI and ML technologies to drive innovation, this is an exciting opportunity to contribute to groundbreaking projects.

Roles and Responsibilities
● Lead the design, implementation, and optimization of machine learning and AI models using Python and R.
● Develop and deploy scalable predictive models and decision-making systems for business-critical applications.
● Conduct advanced exploratory data analysis (EDA) to extract actionable insights from structured and unstructured data.
● Collaborate with data engineers to ensure high-quality, well-structured datasets for training and inference.
● Own the end-to-end model lifecycle, from development to deployment, monitoring, and continuous improvement.
● Guide the integration of AI/ML models into enterprise-level production systems in collaboration with software engineering teams.
● Provide technical leadership, mentoring junior data scientists and driving best practices in ML model development and deployment.
● Stay ahead of the curve by evaluating and implementing cutting-edge AI/ML advancements, including LLMs and GenAI models.
● Work closely with stakeholders, product teams, and clients to define business challenges and design AI-driven solutions.
● Drive model interpretability, explainability, and responsible AI practices within the organization.

Technical Skills
● 6-14 years of hands-on experience in designing and implementing machine learning, deep learning, and AI solutions.
● Strong programming expertise in Python and R, with proficiency in implementing complex algorithms.
● Extensive experience with cloud platforms (AWS, Azure, GCP) for deploying scalable machine learning solutions.
● Proficiency in MLOps practices, ensuring robust model deployment, monitoring, and retraining workflows.
● Strong background in classical ML algorithms (e.g., linear regression, logistic regression, decision trees, random forests, SVM) and deep learning architectures (CNNs, RNNs, Transformers).
● Hands-on experience with ML/DL libraries such as Scikit-learn, TensorFlow, PyTorch, Keras, NLTK, OpenCV.
● Expertise in data preprocessing, feature engineering, and model evaluation for structured and unstructured data.
● Proven experience in selecting and engineering relevant features to enhance model performance.
● Understanding of LLMs (Large Language Models) and GenAI technologies, with the ability to optimize their usage for enterprise applications.
● Strong problem-solving, critical thinking, and analytical skills to drive AI/ML innovations.
● Exceptional communication and leadership skills, with experience in mentoring teams and collaborating with cross-functional stakeholders.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Title: Data Scientist
Location: Gurugram
Experience: 3–4 Years
Job Type: Full-Time
Company: Sequifi | www.sequifi.com

About the Role
We are hiring a Data Scientist with 3–4 years of experience who has a strong foundation in data analysis, machine learning, and business problem-solving. This role is ideal for someone who is hands-on with modern tools and techniques, eager to explore new technologies, and enjoys working in a collaborative, fast-paced environment.

Key Responsibilities
Analyze complex datasets to uncover patterns, trends, and actionable insights.
Build, validate, and deploy machine learning models for predictive analytics, classification, and clustering.
Design and maintain efficient data pipelines and ETL processes.
Create clear, interactive dashboards and reports using tools such as Power BI, Tableau, or Python visualization libraries.
Collaborate with product managers, developers, and business analysts to understand requirements and deliver data-driven solutions.
Conduct A/B testing and statistical analysis to support product decisions and optimizations.
Continuously improve model performance based on feedback and business objectives.

Required Qualifications
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
3–4 years of experience in data science or a similar role.
Strong programming skills in Python (Pandas, NumPy, Scikit-learn) and SQL, and familiarity with PHP or Node.js.
Hands-on experience with data visualization tools such as Power BI or Tableau.
Good understanding of machine learning algorithms, data preprocessing, and feature engineering.
Experience working with structured and unstructured data, including NoSQL databases like MongoDB.
Familiarity with cloud platforms such as AWS, GCP, or Azure is a plus.
Must have industry exposure to B2B SaaS in the IT sector.

Soft Skills
Strong analytical and problem-solving abilities.
Effective communication and data storytelling skills.
Ability to work independently as well as collaboratively in cross-functional teams.
A mindset geared toward innovation, learning, and adaptability.

Why Join Us
Work on meaningful and challenging problems in a tech-focused environment.
Join a young, supportive, and fast-moving team.
Gain exposure to a combination of data science, product, and engineering.
Opportunity to learn and grow continuously in a culture of innovation.

If you're passionate about using data to drive business impact, we'd love to hear from you. Apply now and grow with us at Sequifi.

Budget: Up to 12 LPA
Work Mode: Onsite, Monday to Friday
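For the A/B testing responsibility above, here is a minimal two-proportion z-test sketch using statsmodels; the conversion counts are made-up placeholders, not real product numbers.

```python
# Minimal A/B-test sketch: two-proportion z-test on conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 470]   # successes in control, variant (placeholder values)
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A common decision rule: reject the null of equal conversion rates at alpha = 0.05
if p_value < 0.05:
    print("Statistically significant difference between control and variant")
else:
    print("No significant difference detected")
```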

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

We are seeking a skilled and innovative Machine Learning Engineer with over 5 years of experience to design, develop, and deploy scalable ML solutions. You will work closely with data scientists, software engineers, and product teams to solve real-world problems using state-of-the-art machine learning and deep learning techniques.
Key Responsibilities
Design, build, and optimize machine learning models and pipelines for classification, regression, clustering, recommendation, and forecasting
Implement and fine-tune deep learning models using frameworks like TensorFlow, PyTorch, or Keras
Collaborate with cross-functional teams to understand business problems and convert them into technical solutions
Develop data preprocessing, feature engineering, and model evaluation strategies
Build and deploy models into production using CI/CD practices and MLOps tools
Monitor model performance and retrain as necessary to ensure accuracy and reliability
Create and maintain technical documentation, and support knowledge sharing within the team
Stay updated on the latest research, tools, and techniques in machine learning and AI
Required Skills & Experience
5+ years of experience in Machine Learning Engineering or Applied Data Science
Proficiency in Python and ML libraries such as scikit-learn, pandas, NumPy, TensorFlow, or PyTorch
Solid understanding of mathematics, statistics, and ML/DL algorithms
Experience with the end-to-end ML lifecycle, from data collection and cleaning to model deployment and monitoring
Strong knowledge of SQL and working with large datasets
Experience deploying ML models on cloud platforms (e.g., AWS, Azure, GCP)
Familiarity with Docker, Kubernetes, MLflow, or other MLOps tools
Good understanding of REST APIs, microservices, and backend integration
Nice To Have
Exposure to NLP, Computer Vision, or Generative AI techniques
Experience with big data technologies like Spark, Hadoop, or Hive
Working knowledge of data labeling, AutoML, or active learning
Experience with feature stores, model registries, or streaming data (Kafka, Flink)
Educational Qualification
Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field
Additional certifications in AI/ML are a plus
(ref:hirist.tech)
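To illustrate the REST API and model deployment responsibilities above, here is a minimal sketch (not this employer's actual service) of exposing a model behind a FastAPI endpoint; the toy model trained at startup, the service name, and the endpoint path are assumptions standing in for a real persisted artifact.

```python
# Minimal sketch: serving an ML model as a REST microservice with FastAPI.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

# Toy model trained at startup; in practice you would load a persisted artifact.
X_toy = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y_toy = np.array([0, 1, 1, 0])
model = LogisticRegression().fit(X_toy, y_toy)

app = FastAPI(title="ml-inference-demo")   # hypothetical service name

class Features(BaseModel):
    values: list[float]                    # flat feature vector for one record

@app.post("/predict")
def predict(features: Features):
    X = np.asarray(features.values, dtype=float).reshape(1, -1)
    return {"prediction": int(model.predict(X)[0])}

# Run locally with:  uvicorn app_module:app --reload   (module name is illustrative)
```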

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

About The Role
As an ideal candidate, you must be a problem solver with solid experience and knowledge in AI/ML development. You must be passionate about understanding the business context for features built to drive better customer experience and adoption.
Requirements
3-4 years of professional experience in AI/ML development, with a strong focus on Computer Vision techniques such as CNNs, object detection, segmentation, and image processing.
Proficiency in Python and deep learning frameworks like TensorFlow and PyTorch.
Solid understanding of classical and modern machine learning algorithms (supervised, unsupervised, reinforcement learning).
Experience with data preprocessing, annotation, and augmentation techniques for image datasets.
Familiarity with model evaluation metrics, hyperparameter tuning, and optimization strategies.
Practical experience deploying ML models in cloud environments (AWS, GCP, Azure) or on edge devices.
Knowledge of MLOps practices, including CI/CD pipelines, model versioning, and monitoring.
Strong debugging and problem-solving skills, with the ability to work effectively in cross-functional teams.
Understanding of software engineering best practices, including version control (Git), testing, and documentation.
Experience with containerization technologies like Docker; familiarity with Kubernetes is a plus.
Comfortable working with SQL/NoSQL databases for data storage and retrieval.
Preferred Skills
Experience with model compression and optimization techniques such as quantization and pruning for deployment efficiency.
Exposure to other AI domains like NLP or time series analysis is a bonus.
Strong communication skills to explain complex technical concepts to diverse audiences.
(ref:hirist.tech)
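As a small illustration of the image preprocessing and augmentation work this role lists, here is a minimal sketch using torchvision transforms on a synthetic image; the image size, augmentation choices, and ImageNet normalization statistics are illustrative assumptions, and in a real project the same pipeline would be attached to a training Dataset.

```python
# Minimal sketch: a torchvision augmentation/preprocessing pipeline.
import numpy as np
from PIL import Image
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Random RGB image standing in for a real training sample.
img = Image.fromarray(np.random.randint(0, 256, (320, 480, 3), dtype=np.uint8))
tensor = train_transforms(img)
print(tensor.shape)   # torch.Size([3, 224, 224])
```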

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Responsibilities
Design and implement advanced solutions utilizing Large Language Models (LLMs).
Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
Conduct research and stay informed about the latest developments in generative AI and LLMs.
Develop and maintain code libraries, tools, and frameworks to support generative AI development.
Participate in code reviews and contribute to maintaining high code quality standards.
Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
Possess strong analytical and problem-solving skills.
Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, and audio and video analysis.
Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities. Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
Tech Skills (10+ Years Experience)
Machine Learning (ML) & Deep Learning: Solid understanding of supervised and unsupervised learning. Proficiency with deep learning architectures like Transformers, LSTMs, RNNs, etc.
Generative AI: Hands-on experience with models such as OpenAI GPT-4, Anthropic Claude, LLaMA, etc. Knowledge of fine-tuning and optimizing large language models (LLMs) for specific tasks.
Natural Language Processing (NLP): Expertise in NLP techniques, including text preprocessing, tokenization, embeddings, and sentiment analysis. Familiarity with NLP tasks such as text classification, summarization, translation, and question answering.
Retrieval-Augmented Generation (RAG): In-depth understanding of RAG pipelines, including knowledge retrieval techniques like dense/sparse retrieval. Experience integrating generative models with external knowledge bases or databases to augment responses.
Data Engineering: Ability to build, manage, and optimize data pipelines for feeding large-scale data into AI models.
Search and Retrieval Systems: Experience with building or integrating search and retrieval systems, leveraging knowledge of Elasticsearch, AI Search, ChromaDB, PGVector, etc.
Prompt Engineering: Expertise in crafting, fine-tuning, and optimizing prompts to improve model output quality and ensure desired results. Understanding how to guide large language models (LLMs) to achieve specific outcomes by using different prompt formats, strategies, and constraints. Knowledge of techniques like few-shot, zero-shot, and one-shot prompting, as well as using system and user prompts for enhanced model performance.
Programming & Libraries: Proficiency in Python and libraries such as PyTorch, Hugging Face, etc. Knowledge of version control (Git), cloud platforms (AWS, GCP, Azure), and MLOps tools.
Database Management: Experience working with SQL and NoSQL databases, as well as vector databases.
APIs & Integration: Ability to work with RESTful APIs and integrate generative models into applications.
Evaluation & Benchmarking: Strong understanding of metrics and evaluation techniques for generative models.
(ref:hirist.tech)
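To make the few-shot prompting technique named in this posting concrete, here is a minimal, provider-agnostic sketch that embeds labeled examples in the prompt before the new query; the example reviews, labels, and the commented-out send() call are placeholders, not a specific vendor API.

```python
# Minimal sketch: building a few-shot prompt for sentiment classification.
FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "negative"),
    ("Setup took five minutes and it just works.", "positive"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble instruction + labeled examples + the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Support never answered my emails.")
print(prompt)
# response = send(prompt)   # placeholder: call whichever LLM provider is in use
```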

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurgaon Rural, Haryana, India

On-site

We are seeking a Full Stack Developer (Associate Consultant) to join our India team based in Gurgaon. This role at Viscadia offers a unique opportunity to gain hands-on experience in the healthcare industry, with comprehensive training in core consulting skills such as critical thinking, market analysis, and executive communication. Through project work and direct mentorship, you will develop a deep understanding of healthcare business dynamics and build a strong foundation for a successful consulting career.
ROLES AND RESPONSIBILITIES
Technical Responsibilities
Design and build full-stack forecasting and simulation platforms using modern web technologies (e.g., React, Node.js, Python) hosted on AWS infrastructure (e.g., Lambda, EC2, S3, RDS, API Gateway).
Automate data pipelines and model workflows using Python for data preprocessing, time-series modeling (e.g., ARIMA, Exponential Smoothing), and backend services.
Replace legacy Excel/VBA tools with scalable, cloud-native applications, integrating dynamic reporting features and user controls via web UI.
Use SQL and cloud databases (e.g., AWS RDS, Redshift) to query and transform large datasets as inputs to models and dashboards.
Develop interactive web dashboards using frameworks like React + D3.js, or embed tools like Power BI/Tableau into web portals to communicate insights effectively.
Implement secure, modular APIs and microservices to support modularity, scalability, and seamless data exchange across platforms.
Ensure cost-effective and reliable deployment of solutions via AWS services, CI/CD pipelines, and infrastructure-as-code (e.g., CloudFormation, Terraform).
Business Responsibilities
Support the development and enhancement of forecasting and analytics platforms tailored to the needs of pharmaceutical clients across various therapeutic areas
Build an in-depth understanding of pharma forecasting concepts, disease areas, treatment landscapes, and market dynamics to contextualize forecasting models and inform platform features
Partner with cross-functional teams to ensure forecast deliverables align with client objectives, timelines, and decision-making needs
Contribute to a culture of knowledge sharing and continuous improvement by mentoring junior team members and helping codify best practices in forecasting and business analytics
Grow into a client-facing role, combining an understanding of commercial strategy with forecasting expertise to lead engagements and drive value for clients
QUALIFICATIONS
Bachelor’s degree (B.Tech/B.E.) from a premier engineering institute, preferably in Computer Science, Information Technology, Electrical Engineering, or related disciplines
2+ years of experience in full-stack development, with a strong focus on designing, developing, and maintaining AWS-based applications and services
SKILLS AND TECHNICAL PROFICIENCIES
Technical Skills
Proficient in Python, with practical experience using libraries such as pandas, NumPy, matplotlib/seaborn, and statsmodels for data analysis and statistical modeling
Strong command of SQL for data querying, transformation, and seamless integration with backend systems
Hands-on experience in designing and maintaining ETL/ELT data pipelines, ensuring efficient and scalable data workflows
Solid understanding and applied experience with cloud platforms, particularly AWS; working familiarity with Azure and Google Cloud Platform (GCP)
Full-stack web development expertise, including building and deploying modern web applications, web hosting, and API integration
Proficient in Microsoft Excel and PowerPoint, with advanced skills in data visualization and delivering professional presentations
Soft Skills
Excellent verbal and written communication skills, with the ability to effectively engage both technical and non-technical stakeholders
Strong analytical thinking and problem-solving abilities, with a structured and solution-oriented mindset
Demonstrated ability to work independently as well as collaboratively within cross-functional teams
Adaptable and proactive, with a willingness to thrive in a dynamic, fast-growing environment
Genuine passion for consulting, with a focus on delivering tangible business value for clients
Domain Expertise
Strong understanding of pharmaceutical commercial models, including treatment journeys, market dynamics, and key therapeutic areas
Experience working with and interpreting industry-standard datasets such as IQVIA, Symphony Health, or similar secondary data sources
Familiarity with product lifecycle management, market access considerations, and sales performance tracking metrics used across the pharmaceutical value chain
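As a small illustration of the time-series modeling this role mentions (Exponential Smoothing with statsmodels), here is a minimal sketch on a synthetic monthly series; the series, trend/seasonality settings, and 12-month horizon are illustrative assumptions rather than anything from Viscadia's platform.

```python
# Minimal sketch: Holt-Winters exponential smoothing forecast with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly series with trend and yearly seasonality.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
rng = np.random.default_rng(1)
y = pd.Series(
    100 + np.arange(48) * 2
    + 10 * np.sin(np.arange(48) * 2 * np.pi / 12)
    + rng.normal(0, 3, 48),
    index=idx,
)

model = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()
forecast = fit.forecast(12)   # 12-month forecast horizon
print(forecast.round(1))
```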

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

You should have hands-on experience in machine learning model and deep learning development using Python. You will be responsible for data quality analysis and data preparation, exploratory data analysis, and visualization of data. Additionally, you will define validation strategies, perform preprocessing or feature engineering on a given dataset, and build data augmentation pipelines. Text processing using machine learning and NLP for processing documents will also be part of your role. Your tasks will include training models, tuning their hyperparameters, analyzing model errors, and designing strategies to overcome them.
You should have experience with Python packages such as NumPy, SciPy, Scikit-learn, Theano, TensorFlow, Keras, PyTorch, Pandas, and Matplotlib. Experience working with Azure OpenAI Studio or the OpenAI API using Python, LLaMA, or LangChain is required. Moreover, experience with Azure Functions, Python Flask/API development, Streamlit, prompt engineering, conversational AI, and embedding models such as word2vec, GloVe, spaCy, and BERT is preferred.
You are expected to possess distinctive problem-solving, strategic, and analytical capabilities, as well as excellent time-management and organization skills. Strong knowledge of programming languages and technologies such as Python, React.js, SQL, and big data is essential. Excellent verbal and written communication skills are necessary for effective interaction between business and technical architects and developers.
You should have 2-4 years of relevant experience and a Bachelor's degree in Computer Science or Computer Engineering, a Master's in Computer Applications, MIS, or a related field. End-to-end experience in developing and deploying machine learning models using Python and Azure ML Studio is required. Exposure to developing client-based or web-based software solutions and certification in Machine Learning and Artificial Intelligence will be beneficial. Experience with Power Platform, Power Pages, or Azure OpenAI Studio is good to have.
Grant Thornton INDUS comprises GT U.S. Shared Services Center India Pvt Ltd and Grant Thornton U.S. Knowledge and Capability Center India Pvt Ltd. Grant Thornton INDUS is the shared services center supporting the operations of Grant Thornton LLP, the U.S. member firm of Grant Thornton International Ltd. Established in 2012, Grant Thornton INDUS employs professionals across a wide range of disciplines including Tax, Audit, Advisory, and other operational functions. The culture at Grant Thornton INDUS promotes empowered people, bold leadership, and distinctive client service. Working at Grant Thornton INDUS offers an opportunity to be part of something significant and to serve communities in India through inspirational and generous services that give back to the communities they work in. Grant Thornton INDUS has offices in two locations in India: Bengaluru and Kolkata.
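As a small illustration of the text processing with machine learning and NLP described above, here is a minimal sketch of TF-IDF features plus a linear classifier in scikit-learn; the documents, labels, and n-gram setting are toy assumptions for demonstration only.

```python
# Minimal sketch: TF-IDF + logistic regression for document classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "invoice attached for your review",
    "quarterly audit findings and recommendations",
    "team lunch scheduled for friday",
    "please approve the expense report",
]
labels = ["finance", "audit", "other", "finance"]   # toy labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["attached audit report for approval"]))
```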

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Job
Sanofi is a pioneering global healthcare company committed to advancing the miracles of science to enhance the well-being of individuals worldwide. Operating in over 100 countries, our dedicated team is focused on reshaping the landscape of medicine, transforming the seemingly impossible into reality. We strive to provide life-changing treatment options and life-saving vaccines, placing sustainability and social responsibility at the forefront of our aspirations.
Embarking on an expansive digital transformation journey, Sanofi is committed to accelerating its data transformation and embracing artificial intelligence (AI) and machine learning (ML) solutions. This strategic initiative aims to expedite research and development, enhance manufacturing processes, elevate commercial performance, and deliver superior drugs and vaccines to patients faster, ultimately improving global health and saving lives.
What you will be doing:
As a dynamic Data Science practitioner, you are passionate about challenging the status quo and ensuring the development and impact of Sanofi's AI solutions for the patients of tomorrow. You are an influential leader with hands-on experience deploying AI/ML and GenAI solutions, applying state-of-the-art algorithms with technically robust lifecycle management. Your keen eye for improvement opportunities and demonstrated ability to deliver solutions in cross-functional environments make you an invaluable asset to our team.
Main Responsibilities
This role demands a dynamic and collaborative individual with a strong technical background, capable of leading the development and deployment of advanced machine learning while maintaining a focus on meeting business objectives and adhering to industry best practices. Key highlights include:
Model Design and Development: Lead the development of custom Machine Learning (ML) and Large Language Model (LLM) components for both batch and stream-processing-based AI/ML pipelines. Create model components, including data ingestion, preprocessing, search and retrieval, Retrieval-Augmented Generation (RAG), and fine-tuning, ensuring alignment with technical and business requirements. Develop and maintain full-stack applications that integrate ML models, focusing on both backend processes and frontend interfaces.
Collaborative Development: Work closely with data engineers, MLOps, software engineers, and other tech team members to collaboratively design, develop, and implement ML model solutions, fostering a cross-functional and innovative environment. Contribute to both backend and frontend development tasks to ensure seamless user experiences.
Model Evaluation: Collaborate with other data science team members to develop, validate, and maintain robust evaluation solutions and tools for assessing model performance, accuracy, consistency, and reliability during development and User Acceptance Testing (UAT). Implement model optimizations to enhance system efficiency based on evaluation results.
Model Deployment: Work closely with the MLOps team to facilitate the deployment of ML and GenAI models into production environments, ensuring reliability, scalability, and seamless integration with existing systems. Contribute to the development and implementation of deployment strategies for ML and GenAI models. Implement frontend interfaces to monitor and manage deployed models effectively.
Internal Collaboration: Collaborate closely with product teams, business stakeholders, and data science team members to ensure the smooth integration of machine learning models into production systems. Foster strong communication channels and cooperation across different teams for successful project outcomes.
Problem Solving: Proactively troubleshoot complex issues related to machine learning model development and data pipelines. Innovatively develop solutions to overcome challenges, contributing to continuous improvement in model performance and system efficiency.
Key Functional Requirements & Qualifications
Education and experience: PhD in mathematics, computer science, engineering, physics, statistics, economics, operations research, or a related quantitative discipline with strong coding skills, OR a Master's degree in a relevant domain with 3+ years of data science experience.
Technical skills: Disciplined AI/ML development, including CI/CD and orchestration. Cloud and high-performance computing proficiency (AWS, GCP, Databricks, Apache Spark). Experience deploying models in agile, product-focused environments. Full-stack AI application expertise preferred, including experience with front-end frameworks (e.g., React) and backend technologies.
Communication and collaboration: Excellent written and verbal communication, and a demonstrated ability to collaborate with cross-functional teams (e.g., business, product, and digital).
Why Choose Us?
Bring the miracles of science to life alongside a supportive, future-focused team.
Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally.
Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs.
Sanofi achieves its mission, in part, by offering rewarding career opportunities which inspire employee growth and development. Our 6 Recruitment Principles clarify our commitment to you and your role in driving your career:
Our people are responsible for managing their career.
Sanofi posts all non-executive opportunities for our people.
We give priority to internal candidates.
Managers provide constructive feedback to all internal interviewed candidates.
We embrace diversity to hire the best talent.
We expect managers to encourage career moves across the whole organization.
Pursue Progress. Discover Extraordinary.
Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people.
Join Sanofi and step into a new era of science - where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never-been-done-before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives? Let’s Pursue Progress and Discover Extraordinary – together.
At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
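As a small illustration of the model-evaluation responsibility described in this posting, here is a minimal sketch that computes standard classification metrics on held-out predictions with scikit-learn; the labels and probabilities are made-up toy values.

```python
# Minimal sketch: evaluating a classifier's held-out predictions.
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

# Hypothetical held-out labels, predicted labels, and predicted probabilities.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]
y_prob = [0.2, 0.9, 0.4, 0.1, 0.8, 0.3, 0.7, 0.95]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")
print(f"accuracy={accuracy_score(y_true, y_pred):.2f} "
      f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} "
      f"auc={roc_auc_score(y_true, y_prob):.2f}")
```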

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Title: Full Stack LLM / Backend Engineer (Remote)
About The Company:
Web 3.0 is the next wave of technology breakthroughs to revolutionize human society, with key pillars being decentralized finance (DeFi), NFTs, the Metaverse, and decentralized autonomous organizations (DAOs). The space is growing at breakneck speed with enormous growth potential in years to come. M0 is a generative AI, blockchain, and Web3 technology company exploring big ideas in identity, ownership, utility, DeFi, and interoperability to push the crypto, NFT, and Metaverse space forward. We are looking for the best builders, hackers, innovators, entrepreneurs, visionaries, and creatives to come design the future together. This isn't a typical job: at M0 you will be challenged to bring out the best version of yourself in bringing to life the creative renaissance of Web3. We are seeking driven individuals with a hunger for product ownership and impact, and the potential for asymmetrical upside in doing so. If you don't know about Web3 yet - it's all good, you're about to define it.
Requirements:
• Hands-on experience in developing LLM-based applications across both front-end and back-end, using JavaScript, TypeScript, NestJS, and Next.js.
• Hands-on experience designing and optimizing LLMs, RAG, and natural language processing (NLP) systems, frameworks, and tools.
• Hands-on AI/ML modelling experience with complex datasets, combined with a strong understanding of the theoretical foundations of AI/ML, including both short-term and long-term memory-based models.
• Expertise in most of the following areas: supervised & unsupervised learning, deep learning, reinforcement learning, federated learning, time series forecasting, Bayesian statistics, and optimization.
• Experience in creating and deploying code libraries using functions and classes in Python and JavaScript (MERN) for AI product-focused development.
• Comfortable working in cloud and high-performance computing environments (e.g., vector DBs, MLOps, AWS/Azure/GCP, ELK).
• Well-versed in software development and code quality best practices.
• Experience working with Agile development methodologies.
Responsibilities:
• Prototype and build proofs of concept (PoC) in one or all of the following areas: LLMs.
• Data Analysis and Preprocessing: Collect and preprocess large-scale text corpora to train and fine-tune language models. Conduct data analysis to identify patterns, trends, and insights that can inform model development and improvement.
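As a small, framework-free illustration of the corpus preprocessing step this posting mentions, here is a minimal sketch that splits a document into overlapping chunks before embedding and retrieval; the chunk size and overlap values are arbitrary illustrative assumptions.

```python
# Minimal sketch: splitting text into overlapping chunks for RAG preprocessing.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap          # step back to create overlap between chunks
    return chunks

doc = "Web3 combines decentralized finance, NFTs, and DAOs. " * 40
chunks = chunk_text(doc)
print(len(chunks), "chunks; first chunk length:", len(chunks[0]))
```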

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Vyva Consulting Inc. is a trusted partner in Sales Performance Management (SPM) and Incentive Compensation Management (ICM), specializing in delivering top-tier software consulting solutions. We help organizations optimize their sales operations, boost revenue, and maximize value. Our seasoned experts work with leading products such as Xactly, Varicent, and SPIFF, offering comprehensive implementation and post-implementation services. We focus on enhancing sales compensation strategies to drive business success.
Role Description
This is a full-time, on-site role for an Artificial Intelligence Intern, located in Hyderabad. We are seeking a motivated AI Engineer Intern to join our team and contribute to cutting-edge AI/ML projects. This internship offers hands-on experience with large language models, generative AI, and modern AI frameworks while working on real-world applications that impact our business objectives.
What You'll Do
Core Responsibilities
LLM Integration & Development: Build and prototype LLM-powered features using frameworks like LangChain, the OpenAI SDK, or similar tools for content automation and intelligent workflows
RAG System Implementation: Design and optimize Retrieval-Augmented Generation systems, including document ingestion, chunking strategies, embedding generation, and vector database integration
Data Pipeline Development: Create robust data pipelines for AI/ML workflows, including data collection, cleaning, preprocessing, and annotation of large datasets
Model Experimentation: Conduct experiments to evaluate, fine-tune, and optimize AI models for accuracy, performance, and scalability across different use cases
Vector Database Operations: Implement similarity search solutions using vector databases (FAISS, Pinecone, Chroma) for intelligent Q&A, content recommendation, and context-aware responses
Prompt Engineering: Experiment with advanced prompt engineering techniques to optimize outputs from generative models and ensure content quality
Research & Innovation: Stay current with the latest AI/ML advancements, research new architectures and techniques, and build proof-of-concept implementations
Technical Implementation
Deploy AI microservices and agents using containerization (Docker) and orchestration tools
Collaborate with cross-functional teams (product, design, engineering) to align AI features with business requirements
Create comprehensive documentation including system diagrams, API specifications, and implementation guides
Analyze model performance metrics, document findings, and propose data-driven improvements
Participate in code reviews and contribute to best practices for AI/ML development
Required Qualifications
Education & Experience
Currently pursuing or recently completed a Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field
6+ months of hands-on experience with AI/ML projects (academic, personal, or professional)
Demonstrable portfolio of AI/ML projects via GitHub repositories, Jupyter notebooks, or deployed applications
Technical Skills
Programming: Strong Python proficiency with experience in AI/ML libraries (NumPy, Pandas, Scikit-learn)
LLM Experience: Practical experience with large language models (OpenAI GPT, Claude, open-source models) including API integration and fine-tuning
AI Frameworks: Familiarity with at least one of LangChain, OpenAI Agents SDK, AutoGen, or similar agentic AI frameworks
RAG Architecture: Understanding of RAG system components and prior implementation experience (even in academic projects)
Vector Databases: Experience with vector similarity search using FAISS, Chroma, Pinecone, or similar tools
Deep Learning: Familiarity with PyTorch or TensorFlow for model development and fine-tuning
Screening Criteria
To effectively evaluate candidates, we will assess:
Portfolio Quality: Live demos or well-documented projects showing AI/ML implementation
Technical Depth: Ability to explain RAG architecture, vector embeddings, and LLM fine-tuning concepts
Problem-Solving: Approach to handling real-world AI challenges like hallucination, context management, and model evaluation
Code Quality: Clean, documented Python code with proper version control practices
Preferred Qualifications
Additional Technical Skills
Full-Stack Development: Experience building web applications with AI/ML backends
Data Analytics: Proficiency in data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn), and statistical analysis
MLOps/DevOps: Experience with Docker, Kubernetes, MLflow, or CI/CD pipelines for ML models
Cloud Platforms: Familiarity with AWS, Azure, or GCP AI/ML services
Databases: Experience with both SQL (PostgreSQL) and NoSQL (Elasticsearch, MongoDB) databases
Soft Skills & Attributes
Analytical Mindset: Strong problem-solving skills with attention to detail in model outputs and data quality
Communication: Ability to explain complex AI concepts clearly to both technical and non-technical stakeholders
Collaboration: Proven ability to work effectively in cross-functional teams
Learning Agility: Demonstrated ability to quickly adapt to new technologies and frameworks
Initiative: Self-motivated with ability to work independently and drive projects forward
What We Offer
Professional Growth
Mentorship: Work directly with senior AI engineers and receive structured guidance
Real Impact: Contribute to production AI systems used by real customers
Learning Opportunities: Access to the latest AI tools, frameworks, and industry conferences
Full-Time Conversion: Potential for a full-time offer based on performance and business needs
Work Environment
Employee-First Culture: Flexible work arrangements with emphasis on results
Innovation Focus: Opportunity to work on cutting-edge AI applications
Collaborative Team: Supportive environment that values diverse perspectives and ideas
Competitive Compensation: Market-competitive internship stipend
Application Requirements
Portfolio Submission
Please include the following in your application:
GitHub Repository: Link to your best AI/ML projects with detailed README files
Project Demo: Video walkthrough or live demo of your most impressive AI application
Technical Blog/Documentation: Any technical writing about AI/ML concepts or implementations
Resume: Highlighting relevant coursework, projects, and any AI/ML experience
Technical Assessment
Qualified candidates will complete a technical assessment covering:
Python programming and AI/ML libraries
LLM integration and prompt engineering
RAG system design and implementation
Vector database operations and similarity search
Model evaluation and optimization techniques
Ready to shape the future of AI? Apply now and join our team of innovative engineers building next-generation AI solutions.
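To give a concrete flavor of the vector-similarity-search skill this internship lists, here is a minimal FAISS sketch in which random vectors stand in for real embeddings; the dimensionality and corpus size are illustrative assumptions.

```python
# Minimal sketch: exact nearest-neighbor search over embedding vectors with FAISS.
import numpy as np
import faiss

dim = 384                                    # typical sentence-embedding size
rng = np.random.default_rng(42)
doc_vectors = rng.random((1_000, dim), dtype=np.float32)   # pretend document embeddings
query = rng.random((1, dim), dtype=np.float32)             # pretend query embedding

index = faiss.IndexFlatL2(dim)               # exact L2 search; fine at this scale
index.add(doc_vectors)
distances, ids = index.search(query, 5)      # top-5 nearest documents
print(ids[0], distances[0])
```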

Posted 2 weeks ago

Apply