5.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Title: Senior Machine Learning Developer
Location: Pan India
Job Type: Full-Time
Experience Level: 5-10 Years

Job Description
We are seeking a highly skilled and experienced Senior Machine Learning Developer to join our dynamic team. The ideal candidate will have a strong background in machine learning (ML), deep learning (DL), and large language models (LLMs). They should be proficient in Python and possess expertise in conventional ML algorithms, DL techniques, LLMs, and prompt engineering.

Desired Skills
RAG-Based Domain-Specific Q&A System: Should have built a chatbot that answers questions from custom documents (e.g., legal contracts, company policies) using an LLM + vector DB (RAG).
Agentic AI Workflow Assistant (Multi-Step Planner): Should have created an LLM agent that performs tasks like "book my flight + hotel + calendar block" via API tools using LangGraph or AutoGen.
Multimodal RAG for Image + Text Search: Worked on a system that allows users to ask questions like "find me the slide where person X talks about success in life" in a YouTube video or PDF deck.
Document Parsing Using OCR + Transformers: Extract structured data from messy PDFs using Tesseract + LayoutLM or Donut.

Responsibilities
Machine Learning Development: Design, develop, and implement ML models and algorithms to solve complex problems. Work on various ML projects involving data preprocessing, feature engineering, model selection, and evaluation. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Deep Learning: Develop and deploy DL models using frameworks like TensorFlow, PyTorch, or Keras. Optimize DL models for performance and scalability. Stay updated with the latest advancements in DL and apply them to improve existing models.
Large Language Models: Develop and fine-tune large language models for various NLP tasks. Implement prompt engineering techniques to enhance model performance and accuracy. Experiment with state-of-the-art LLM architectures and methodologies.
Software Development: Write clean, maintainable, and efficient code in Python. Conduct code reviews and ensure adherence to best practices and coding standards. Implement version control using Git and collaborate with the development team through GitHub or similar platforms.
Data Handling: Work with large datasets and perform data preprocessing, cleaning, and augmentation. Implement data pipelines and ensure data integrity and quality throughout the ML lifecycle.
Research and Innovation: Stay abreast of the latest research and developments in the fields of ML, DL, and LLMs. Propose and explore innovative solutions to improve model performance and address new challenges.
Mentorship: Mentor and guide junior ML engineers and team members. Share knowledge and best practices within the team to foster a collaborative and learning environment. (ref:hirist.tech)
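The RAG-based Q&A system described in this posting pairs an embedding-based retriever with an LLM prompt. Below is a minimal, hedged sketch of that flow; `embed_texts` and `generate_answer` are hypothetical hooks standing in for whatever embedding model and LLM client the actual stack uses, and the brute-force cosine search is a placeholder for a real vector database.

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    # Rank document chunks by cosine similarity to the query embedding.
    doc_norms = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    q_norm = query_vec / np.linalg.norm(query_vec)
    scores = doc_norms @ q_norm
    return np.argsort(scores)[::-1][:k]

def answer_question(question, chunks, chunk_vecs, embed_texts, generate_answer, k=3):
    # embed_texts / generate_answer are hypothetical callables wrapping the
    # embedding model and LLM; a vector DB would replace cosine_top_k.
    q_vec = embed_texts([question])[0]
    top_idx = cosine_top_k(q_vec, chunk_vecs, k)
    context = "\n\n".join(chunks[i] for i in top_idx)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate_answer(prompt)
```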
Posted 1 month ago
2.0 - 31.0 years
8 - 14 Lacs
Gurgaon/Gurugram
On-site
Job Title: AI Engineer
Location: Work From Office | 5 Days a Week
Company: Kale Logistics

About the Role: Kale Logistics is looking for a highly skilled AI Engineer to lead the development of next-generation Generative AI and Agentic AI applications. You will design, build, and deploy AI-driven solutions that integrate with web-based applications, collaborating with cross-functional teams to solve real-world problems in logistics and supply chain.

Must-Have Qualifications:
Education: Bachelor’s/Master’s degree in Computer Science or a related field from IIT/NIT/BITS or an equivalent top-tier institute.
Languages: Proficiency in Python and its frameworks.
Frameworks/Tools: Hands-on experience with any of the following: LangChain, LangGraph, AutoGen, Hugging Face, CrewAI, OpenAI APIs, PyTorch, TensorFlow.
Core Skills: AI/ML fundamentals, algorithms, data preprocessing techniques. Experience in developing, deploying, and maintaining AI/ML models.

Nice to Have: Exposure to MLOps tools and practices. Experience with cloud services (AWS, Azure, GCP), Docker, Kubernetes. Familiarity with CI/CD pipelines and automation frameworks.

Key Responsibilities: Develop and deploy scalable AI-powered applications using Python. Leverage Generative AI and Agentic AI frameworks like LangChain, Hugging Face, CrewAI, and others to build robust AI solutions. Design and implement RESTful APIs for seamless integration of AI models with applications. Program and fine-tune models using frameworks like PyTorch, TensorFlow, and OpenAI APIs. Work on data preparation, model deployment, and optimization workflows. Apply MLOps practices for model lifecycle management, monitoring, and CI/CD. Collaborate with product, engineering, and UX/UI teams to align AI solutions with business goals. Research, experiment, and deploy emerging AI technologies into practical applications. Troubleshoot production issues, conduct root cause analysis (RCA), and support deployments in collaboration with operations teams. Ensure performance, scalability, and security of AI-driven applications in the cloud environment.

Soft Skills: Passionate about AI, Generative AI, and new technologies. Self-driven, proactive, and able to work both independently and collaboratively. Strong problem-solving and analytical skills. Clear communicator, capable of explaining technical concepts to non-technical stakeholders.

Why Join Kale Logistics? Join a forward-thinking company revolutionizing air and port cargo logistics with AI. Work with a team passionate about innovation, technology, and excellence. Hands-on exposure to cutting-edge AI frameworks like LangChain, Hugging Face, and more. Opportunities for professional growth, learning, and working with the latest tech stacks. Competitive compensation and benefits package.
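The posting above calls for exposing AI models through RESTful APIs. A minimal FastAPI sketch of that pattern follows; `run_model` is a hypothetical stand-in for whatever LangChain or Hugging Face pipeline would actually sit behind the endpoint, and the route name is illustrative.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

def run_model(text: str) -> str:
    # Placeholder for the real GenAI pipeline (LangChain chain, HF model, etc.).
    return f"echo: {text}"

@app.post("/predict")
def predict(query: Query):
    # Validate input via pydantic, run inference, return JSON.
    return {"answer": run_model(query.text)}

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```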
Posted 1 month ago
5.0 years
20 - 30 Lacs
Chennai, Tamil Nadu, India
Remote
Experience: 5.00+ years
Salary: INR 2000000-3000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Chennai)
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SOL-X)
(*Note: This is a requirement for one of Uplers' clients - SOL-X)

What do you need for this opportunity?
Must have skills required: Communication, ETL, Health domain, operational analytics, safety domain, Data Analyst, Data Handling, Metabase, NoSQL databases, Pandas/NumPy/scikit-learn, TensorFlow or PyTorch, Python, Tableau

SOL-X is Looking for:
Data Analyst Job Description

About Company
SOL-X is a Maritime Safety and Crew Wellbeing solution for a holistic crew well-being framework, empowering workers to use digital products that actively support their physical, mental, and emotional well-being. Through advanced analytics and behavioural modelling, SOL-X provides deep insights into how to improve operational safety and empower remote workers to manage their wellbeing.

About Role
We are looking for an experienced Data Analyst to lead our predictive analytics initiatives. You will be responsible for developing robust models and interactive dashboards to extract actionable insights from large, enterprise-level data sets. This role involves end-to-end data management—from collection and preprocessing to model development and deployment—using modern tools and technologies to support data-driven business decisions in a safety-critical domain.

Responsibilities
Data Acquisition & Preparation: Collect, preprocess, and clean large, complex datasets from multiple sources. Develop and manage robust data pipelines to ensure seamless data ingestion and quality.
Model Development & Experimentation: Build, test, and implement predictive and explanatory models using machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch. Design and conduct experiments to validate models and continuously improve performance.
Dashboard & Reporting: Develop interactive dashboards using visualization tools (e.g., Tableau, Metabase, Matplotlib, Seaborn) that enable stakeholders to make informed decisions. Prepare detailed reports and presentations to communicate insights, trends, and recommendations.
Collaboration & Continuous Improvement: Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications. Stay up-to-date with emerging trends and technologies in data science and machine learning, integrating best practices into ongoing projects.

Qualifications
5–7 years of experience in data collection, preprocessing, and analysis of large enterprise-level data sets. Proven expertise in statistical analysis, predictive modeling, and explanatory modeling techniques. Strong proficiency in Python and experience with libraries such as Pandas, NumPy, scikit-learn, and frameworks like TensorFlow or PyTorch. Extensive experience with data visualization tools such as Tableau, Metabase, Matplotlib, and Seaborn. Hands-on experience working with NoSQL databases. Demonstrated ability to extract meaningful insights from complex data sets and effectively communicate them to stakeholders.

Nice To Have / Preferred Qualifications
Experience with building ETL pipelines is a plus. Prior experience in data engineering and managing data pipelines. Background in safety or health analytics projects. Excellent communication skills and a collaborative mindset.

If you’re passionate about leveraging data to drive operational excellence and support strategic decision-making, we’d love to hear from you. Join SOL-X and be part of a dynamic team committed to making a significant impact!

Engagement Type: Direct hire on SOL-X Payroll
Job Type: Permanent
Location: Chennai - Complete Onsite
Working time: 9:00 AM to 6:00 PM
Interview Process: 3-4 Rounds

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume.
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
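The responsibilities above span preprocessing, predictive model building, and evaluation with scikit-learn. A compact, hedged sketch of that loop is shown below; the CSV path, column names, and target are illustrative placeholders, not SOL-X's actual data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Illustrative dataset; real data would come from the NoSQL/ETL pipelines above.
df = pd.read_csv("safety_events.csv")
X = df.drop(columns=["incident"])   # hypothetical feature columns
y = df["incident"]                  # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```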
Posted 1 month ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Highspot
Highspot is a software product development company and a recognized global leader in the sales enablement category, leveraging cutting-edge AI and GenAI technologies at the core of its robust Software-as-a-Service (SaaS) platform. Highspot is revolutionizing how millions of individuals work worldwide. Through its AI-powered platform, Highspot drives enterprise transformation to empower sales teams through intelligent content management, training, contextual guidance, customer engagement, meeting intelligence, and actionable analytics. The Highspot platform delivers advanced features tailored to business needs, in a modern design that sales and marketing executives appreciate, and is the #1 rated sales enablement platform on G2 Crowd. While headquartered in Seattle, Highspot has expanded its footprint across America, Canada, the UK, Germany, Australia, and now India, solidifying its presence in the Asia Pacific markets.

About The Role
You will safeguard the quality of our AI and GenAI features by evaluating model outputs, creating “golden” datasets, and guiding continuous improvements in collaboration with data scientists and engineers. You will also guide the team as it builds a robust methodology and framework for evaluating hundreds of AI agents.

Responsibilities
Validate AI Output – Review model results across text, documents, audio, and video; flag errors and improvement opportunities.
Create Golden Datasets – Build and maintain high-quality reference data with help from subject-matter experts.
Data Annotation – Accurately label multi-modal data and perform light preprocessing (e.g., text cleanup, image resizing).
Quality Analytics – Run accuracy metrics, trend analyses, and clustering to gauge model performance.
Fine-Tune Model Code – Adjust training scripts and parameters to boost accuracy and keep behavior consistent across models.
Process Improvement & Documentation – Refine annotation workflows and keep clear records of methods, findings, and best practices.
Cross-Functional Collaboration – Partner with ML engineers, product managers, and QA peers to ship reliable, user-ready features.

Required Qualifications
2 to 4 years of experience in data science/AI/ML; specific experience in evaluation of AI results is strongly preferred.
Working knowledge of AI/ML evaluation techniques.
Solid analytical skills and meticulous attention to detail.
Bachelor’s or Master’s in Computer Science, Data Science, or a related field.
Strong English written and verbal communication.
Self-directed, organized, and comfortable with ambiguous problem spaces.

Equal Opportunity Statement
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of age, ancestry, citizenship, color, ethnicity, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or invisible disability status, political affiliation, veteran status, race, religion, or sexual orientation. Did you read the requirements as a checklist and not tick every box? Don't rule yourself out! If this role resonates with you, hit the ‘apply’ button.
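Since this role centres on validating AI output against golden datasets and running accuracy metrics, here is a small, hedged sketch of what such a check might look like. The normalisation rules and exact-match scoring are illustrative only, not Highspot's actual evaluation methodology.

```python
import re

def normalize(text: str) -> str:
    # Lowercase and strip punctuation/extra whitespace before comparison.
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def exact_match_rate(golden: list, predictions: dict) -> float:
    # golden: [{"id": ..., "expected": ...}], predictions: {id: model_output}
    hits = sum(
        normalize(predictions.get(item["id"], "")) == normalize(item["expected"])
        for item in golden
    )
    return hits / len(golden) if golden else 0.0

# Tiny invented golden set for illustration.
golden_set = [{"id": "q1", "expected": "Paris"}, {"id": "q2", "expected": "42"}]
model_out = {"q1": "paris", "q2": "forty-two"}
print(f"exact match: {exact_match_rate(golden_set, model_out):.2%}")  # 50.00%
```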
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Gen AI Engineer
Location: Hyderabad

Job Description
Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data. Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). Work on tasks involving language modeling, text generation, understanding, and contextual comprehension. Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets. Build and deploy AI applications on cloud platforms – any hyperscaler: Azure, GCP, or AWS. Integrate AI models with our company's data to enhance and augment existing applications.

Role & Responsibility
Handle data preprocessing, augmentation, and generation of synthetic data. Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution). Develop and maintain AI pipelines. Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models. Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions. Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions. Ensure the robustness, efficiency, and scalability of AI systems. Stay updated with the latest advancements in AI and machine learning technologies.

Skills & Experience
Strong foundation in machine learning, deep learning, and computer science. Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers). Experience with natural language processing (NLP) and computer vision is a plus. Ability to work independently and as part of a team. Knowledge of advanced programming in Python, especially AI-centric libraries like TensorFlow, PyTorch, and Keras, including the ability to implement and manipulate the complex algorithms fundamental to developing generative AI models. Knowledge of natural language processing (NLP) for text generation projects, such as text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models. Experience in data management, including data preprocessing, augmentation, and generation of synthetic data; this involves cleaning, labeling, and augmenting data to train and improve AI models. Experience in developing and deploying AI models in production environments. Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing, and scaling AI solutions. Should be able to bring new ideas and innovative solutions to our clients.
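The posting mentions preparing custom datasets with chunking and embeddings before fine-tuning or retrieval. Below is a minimal, hedged chunking sketch; the window/overlap sizes and the `embed` hook are illustrative assumptions rather than any specific vendor's API.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    # Split a document into overlapping character windows so that context
    # spanning a boundary is not lost when chunks are embedded separately.
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size].strip()
        if piece:
            chunks.append(piece)
    return chunks

def build_index(documents: list, embed) -> list:
    # `embed` is a hypothetical callable wrapping the chosen embedding model
    # (Azure OpenAI, a Hugging Face sentence encoder, etc.).
    index = []
    for doc in documents:
        for chunk in chunk_text(doc):
            index.append((chunk, embed(chunk)))
    return index
```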
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role – Gen AI Engineer - Python
Location: Hyderabad
Mode of Interview: Virtual Interview
Interview Date: 5th June 2025 (Saturday)

Job Description
Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data. Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders). Work on tasks involving language modeling, text generation, understanding, and contextual comprehension. Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets. Build and deploy AI applications on cloud platforms – any hyperscaler: Azure, GCP, or AWS. Integrate AI models with our company's data to enhance and augment existing applications.

Role & Responsibility
Handle data preprocessing, augmentation, and generation of synthetic data. Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution). Develop and maintain AI pipelines. Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models. Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions. Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions. Ensure the robustness, efficiency, and scalability of AI systems. Stay updated with the latest advancements in AI and machine learning technologies.

Skills & Experience
Strong foundation in machine learning, deep learning, and computer science. Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers). Experience with natural language processing (NLP) and computer vision is a plus. Ability to work independently and as part of a team. Knowledge of advanced programming in Python, especially AI-centric libraries like TensorFlow, PyTorch, and Keras, including the ability to implement and manipulate the complex algorithms fundamental to developing generative AI models. Knowledge of natural language processing (NLP) for text generation projects, such as text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models. Experience in data management, including data preprocessing, augmentation, and generation of synthetic data; this involves cleaning, labeling, and augmenting data to train and improve AI models. Experience in developing and deploying AI models in production environments. Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing, and scaling AI solutions. Should be able to bring new ideas and innovative solutions to our clients.
Posted 1 month ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Area(s) of responsibility The Implementation Technical Architect will be responsible for designing, developing, and deploying cutting-edge Generative AI (GenAI) solutions using the latest Large Language Models (LLMs) and frameworks. This role requires deep expertise in Python programming, cloud platforms (Azure, GCP, AWS), and advanced AI techniques such as fine-tuning, LLMOps, and Responsible AI. The architect will lead the development of scalable, secure, and efficient GenAI applications, ensuring alignment with business goals and technical requirements. Design and Architecture: Create scalable and modular architecture for GenAI applications using frameworks like Autogen, Crew.ai, LangGraph, LlamaIndex, and LangChain. Python Development: Lead the development of Python-based GenAI applications, ensuring high-quality, maintainable, and efficient code. Data Curation Automation: Build tools and pipelines for automated data curation, preprocessing, and augmentation to support LLM training and fine-tuning. Cloud Integration: Design and implement solutions leveraging Azure, GCP, and AWS LLM ecosystems, ensuring seamless integration with existing cloud infrastructure. Fine-Tuning Expertise: Apply advanced fine-tuning techniques such as PEFT, QLoRA, and LoRA to optimize LLM performance for specific use cases. LLMOps Implementation: Establish and manage LLMOps pipelines for continuous integration, deployment, and monitoring of LLM-based applications. Responsible AI: Ensure ethical AI practices by implementing Responsible AI principles, including fairness, transparency, and accountability. RLHF and RAG: Implement Reinforcement Learning with Human Feedback (RLHF) and Retrieval-Augmented Generation (RAG) techniques to enhance model performance. Modular RAG Design: Develop and optimize Modular RAG architectures for complex GenAI applications. Open Source Collaboration: Leverage Hugging Face and other open-source platforms for model development, fine-tuning, and deployment. Front-End Integration: Collaborate with front-end developers to integrate GenAI capabilities into user-friendly interfaces. SDLC and DevSecOps: Implement secure software development lifecycle (SDLC) and DevSecOps practices tailored to LLM-based projects. Technical Documentation: Create detailed design artifacts, technical specifications, and architecture diagrams for complex projects. Stakeholder Collaboration: Work closely with cross-functional teams, including data scientists, engineers, and product managers, to deliver high-impact solutions.
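The architect role above lists PEFT techniques such as LoRA and QLoRA for fine-tuning. A heavily hedged sketch using the Hugging Face peft package is shown below; the base model id, target modules, and hyperparameters are placeholders that would need to match the actual model family and task.

```python
# Assumes: pip install transformers peft (plus a PyTorch backend)
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_cfg = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt (model-dependent)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```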
Posted 1 month ago
5.0 years
0 Lacs
Hyderābād
On-site
Hyderabad, Telangana, India
Job Type: Full Time

About the Role
We are seeking a highly skilled and visionary Senior Embedded Systems Architect to lead the design and implementation of next-generation AI-powered embedded platforms. This role demands deep technical proficiency across embedded systems, AI model deployment, hardware–software co-design, and media-centric inference pipelines. You will architect full-stack embedded AI solutions using custom AI accelerators such as Google Coral (Edge TPU), Hailo, BlackHole (Torrent), and Kendryte, delivering real-time performance in vision, audio, and multi-sensor edge deployments. The ideal candidate brings a combination of system-level thinking, hands-on prototyping, and experience in optimizing AI workloads for edge inference. This is a high-impact role where you will influence product architecture, ML tooling, hardware integration, and platform scalability for a range of IoT and intelligent device applications.

Requirements / Key Responsibilities

System Architecture & Design
Define and architect complete embedded systems for AI workloads — from sensor acquisition to real-time inference and actuation. Design multi-stage pipelines for vision/audio inference, e.g., ISP preprocessing → CNN inference → postprocessing. Evaluate and benchmark hardware platforms with AI accelerators (TPU/NPU/DSP) for latency, power, and throughput.

Edge AI & Accelerator Integration
Work with Coral, Hailo, Kendryte, Movidius, and Torrent accelerators using their native SDKs (EdgeTPU Compiler, HailoRT, etc.). Translate ML models (TensorFlow, PyTorch, ONNX) for inference on edge devices using cross-compilation, quantization, and toolchain optimization (a minimal quantization sketch follows this posting). Lead efforts in compiler flows such as TVM, XLA, Glow, and custom runtime engines.

Media & Sensor Processing Pipelines
Architect pipelines involving camera input, ISP tuning, video codecs, audio preprocessors, or sensor fusion stacks. Integrate media frameworks such as V4L2, GStreamer, and OpenCV into real-time embedded systems. Optimize for frame latency, buffering, memory reuse, and bandwidth constraints in edge deployments.

Embedded Firmware & Platform Leadership
Lead board bring-up, firmware development (RTOS/Linux), peripheral interface integration, and low-power system design. Work with engineers across embedded, AI/ML, and cloud to build robust, secure, and production-ready systems. Review schematics and assist with hardware–software trade-offs, especially around compute, thermal, and memory design.

Required Qualifications
Education: BE/B.Tech/M.Tech in Electronics, Electrical, Computer Engineering, Embedded Systems, or related fields.
Experience: Minimum 5+ years of experience in embedded systems design, with a minimum of 3 years of hands-on experience with AI accelerators and ML model deployment at the edge.

Technical Skills Required

Embedded System Design
Strong C/C++, embedded Linux, and RTOS-based development experience. Experience with SoCs and MCUs such as STM32, ESP32, NXP, RK3566/3588, TI Sitara, etc. Cross-architecture familiarity: ARM Cortex-A/M, RISC-V, DSP cores.

ML & Accelerator Toolchains
Proficiency with ML compilers and deployment toolchains: ONNX, TFLite, HailoRT, EdgeTPU compiler, TVM, XLA. Experience with quantization, model pruning, compiler graphs, and hardware-aware profiling.

Media & Peripherals
Integration experience with camera modules, audio codecs, IMUs, and other digital/analog sensors. Experience with V4L2, GStreamer, OpenCV, MIPI CSI, and ISP tuning is highly desirable.

System Optimization
Deep understanding of compute budgeting, thermal constraints, memory management, DMA, and low-latency pipelines. Familiarity with debugging tools: JTAG, SWD, logic analyzers, oscilloscopes, perf counters, and profiling tools.

Preferred (Bonus) Skills
Experience with Secure Boot, TPM, Encrypted Model Execution, or Post-Quantum Cryptography (PQC). Familiarity with safety standards like IEC 61508, ISO 26262, UL 60730. Contributions to open-source ML frameworks or embedded model inference libraries.

Why Join Us?
At EURTH TECHTRONICS PVT LTD, you won't just be optimizing firmware — you will architect full-stack intelligent systems that push the boundary of what's possible in embedded AI. Work on production-grade, AI-powered devices for industrial, consumer, defense, and medical applications. Collaborate with a high-performance R&D team that builds edge-first, low-power, secure, and scalable systems. Drive core architecture and set the technology direction for a fast-growing, innovation-focused organization.

How to Apply
Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com

About the Company
EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions.

Our Core Capabilities
Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware.
IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications.
Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA).
Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production.
AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security.
Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates.

Our Industry Impact
✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure.
✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence.
✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions.
✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators.

Our Vision
We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions.
We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
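As referenced in the toolchain requirements of this posting, edge deployment on Edge TPU/NPU targets typically involves post-training quantization. Below is a brief, hedged TensorFlow Lite sketch; the SavedModel path, input shape, and representative-dataset generator are placeholders for the real model and calibration data.

```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # Placeholder calibration data; in practice, yield real preprocessed
    # input tensors matching the model's expected shape and dtype.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype("float32")]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer quantization, as typically required by Edge TPU / NPU targets.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```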
Posted 1 month ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Job Title: Python Backend Developer - FastAPI, PostgreSQL, Pattern Recognition
Location: Remote / Hybrid
Type: Full-time
Experience: 3 to 7 Years
Compensation: USDT only | Based on skill and performance

About Us: We are a cutting-edge fintech startup building an AI-powered trading intelligence platform that integrates technical analysis, machine learning, and real-time data processing. Our systems analyze financial markets using custom algorithms to detect patterns, backtest strategies, and deploy automated insights. We're seeking a skilled Python Backend Developer experienced in FastAPI, PostgreSQL, pattern recognition, and financial data workflows.

Key Responsibilities
Implement detection systems for chart patterns, candlestick patterns, and technical indicators (e.g., RSI, MACD, EMA).
Build and scale high-performance REST APIs using FastAPI for real-time analytics and model communication.
Develop semi-automated pipelines to label financial datasets for supervised/unsupervised ML models.
Implement and maintain backtesting engines for trading strategies using Python and custom simulation logic.
Design and maintain efficient PostgreSQL schemas for storing candle data, trades, indicators, and model metadata.
(Optional) Contribute to frontend integration using Next.js/React for analytics dashboards and visualizations.

Key Requirements
Python (3-7 years): Strong programming fundamentals, algorithmic thinking, and deep Python ecosystem knowledge.
FastAPI: Proven experience building scalable REST APIs.
PostgreSQL: Schema design, indexing, complex queries, and performance optimization.
Pattern Recognition: Experience in chart/candlestick detection, TA-Lib, rule-based or ML-based identification systems.
Technical Indicators: Familiarity with RSI, Bollinger Bands, Moving Averages, and other common indicators.
Backtesting Frameworks: Hands-on experience with custom backtesting engines or libraries like Backtrader, PyAlgoTrade.
Data Handling: Proficiency in NumPy, Pandas, and dataset preprocessing/labeling techniques.
Version Control: Git/GitHub - comfortable with collaborative workflows.

Bonus Skills
Experience in building dashboards with Next.js / React.
Familiarity with Docker, Celery, Redis, Plotly, or TradingView Charting Library.
Previous work with financial datasets or real-world trading systems.
Exposure to AI/ML model training, SHAP/LIME explainability tools, or reinforcement learning strategies.

Ideal Candidate
Passionate about financial markets and algorithmic trading systems. Thrives in fast-paced, iterative development environments. Strong debugging, data validation, and model accuracy improvement skills. Collaborative mindset - able to work closely with quants, frontend developers, and ML engineers.

What You'll Get
Opportunity to work on next-gen fintech systems with real trading applications. Exposure to advanced AI/ML models and live market environments. Competitive salary + performance-based bonuses. Flexible working hours in a remote-first team.
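The indicator work described above (RSI, MACD, EMA) largely reduces to rolling calculations over candle data. As one hedged example, a simple 14-period RSI in pandas follows; production code would more likely use Wilder's smoothing or TA-Lib rather than this plain rolling mean, and the price series is a toy illustration.

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    # Relative Strength Index from closing prices, using simple rolling means
    # of gains and losses (a common approximation of Wilder's RSI).
    delta = close.diff()
    gains = delta.clip(lower=0).rolling(period).mean()
    losses = (-delta.clip(upper=0)).rolling(period).mean()
    rs = gains / losses
    return 100 - (100 / (1 + rs))

# Illustrative usage with a toy price series.
prices = pd.Series([100, 101, 102, 101, 103, 104, 103, 105, 106, 105,
                    107, 108, 107, 109, 110, 109, 111])
print(rsi(prices).tail())
```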
Posted 1 month ago
3.0 years
1 - 3 Lacs
India
Remote
We are looking for a skilled and passionate AI Developer with 3+ years of hands-on experience in building and deploying AI/ML solutions. The ideal candidate will have a strong foundation in data science, machine learning algorithms, and deep learning frameworks, and will be responsible for developing scalable AI applications tailored to our business needs.

Key Responsibilities:
Design, develop, and deploy machine learning models and AI-driven applications. Collaborate with data engineers and software developers to integrate AI models into production systems. Conduct data preprocessing, feature engineering, and model tuning. Research and implement state-of-the-art ML/DL algorithms for predictive analytics, NLP, computer vision, etc. Monitor and evaluate model performance and retrain when necessary. Stay updated with the latest AI trends, technologies, and best practices.

Required Skills & Qualifications
Bachelor’s/Master’s degree in Computer Science, Data Science, AI, or related field. 3+ years of professional experience in AI/ML development. Proficient in Python and common ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with NLP, computer vision, recommendation systems, or other AI domains. Solid understanding of data structures, algorithms, and software design principles. Experience with cloud platforms (AWS, Azure, GCP) is a plus. Excellent problem-solving skills and ability to work in a collaborative team environment.

Preferred Qualifications:
Experience deploying AI models using REST APIs, Docker, or Kubernetes. Exposure to MLOps tools and frameworks. Contribution to open-source AI/ML projects or publications in the field.

If you are passionate and have the required expertise, we would love to hear from you. Please send your resume to anisha.mohan@pearlsofttechnologies.co.in. Join us at PearlSoft Technologies and be a part of a team that creates innovative solutions for businesses worldwide!

Job Type: Full-time
Pay: ₹15,348.97 - ₹25,000.00 per month
Benefits: Work from home
Location Type: In-person
Schedule: Day shift
Work Location: In person
Application Deadline: 07/07/2025
Expected Start Date: 07/07/2025
Posted 1 month ago
55.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you’d like, where you’ll be supported and inspired by a collaborative community of colleagues around the world, and where you’ll be able to reimagine what’s possible. Join us and help the world’s leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Your Role
Must have experience with Machine Learning Model Development. Expert-level proficiency in Data Handling (SQL). Hands-on with Model Engineering and Improvement. Strong experience in Model Deployment and Productionalization.

Your Profile
Extensive experience in developing and implementing machine learning, deep learning, and NLP models across various domains. Strong proficiency in Python, including relevant libraries and frameworks for machine learning, such as scikit-learn, XGBoost, and Keras. Ability to write efficient, clean, and scalable code for model development and evaluation. Proven experience in enhancing model accuracy through engineering techniques, such as feature engineering, feature selection, and data augmentation. Ability to analyze model samples in relation to model scores, identify patterns, and iteratively refine models for improved performance. Strong expertise in deploying machine learning models to production systems. Familiarity with the end-to-end process of model deployment, including data preprocessing, model training, optimization, and integration into production environments.

What You Will Love About Working Here
We recognise the significance of flexible work arrangements; in hybrid mode, you will get an environment that supports a healthy work-life balance. Our focus will be your career growth and professional development, supporting you in exploring a world of opportunities. Equip yourself with valuable certifications and training programmes in the latest technologies such as AI/ML and Machine Learning.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
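Model deployment and productionalization, as listed above, commonly starts with persisting a trained estimator so a separate serving process can reload and score with it. A minimal, hedged sketch with joblib follows; the iris model and file path are illustrative stand-ins for a real scikit-learn or XGBoost model.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small illustrative model (stands in for the real production model).
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Persist the fitted estimator so a serving process can load it without retraining.
joblib.dump(model, "model.joblib")

# In the production/scoring service:
loaded = joblib.load("model.joblib")
print(loaded.predict(X[:5]))
```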
Posted 1 month ago
2.0 years
6 - 7 Lacs
Gurgaon
On-site
About the Role: Grade Level (for internal use): 08 The Role: Implementation Engineer The Team: The Portfolio Analytics (PA) Application Support team is responsible for onboarding and supporting clients on the Capital IQ platform. Historically focused on manual ingestion workflows and reactive troubleshooting, the team is now evolving to take on a more technical, proactive, and scalable approach. This role will be key in driving that transformation – building automation, scripting ingestion processes, and supporting API-based integrations. The Impact: This is not just another support role. You will be part of the team’s next chapter – helping shift from manual operations to automated, scalable solutions . Your technical contributions will improve the client onboarding experience, reduce turnaround time, enhance data accuracy, and free up the team to focus on more strategic tasks. You’ll work across client use cases, internal tooling, and automation pipelines – directly shaping the future of support within Portfolio Analytics. Responsibilities: Support client onboarding by implementing batch ingestion jobs and API integrations Develop and maintain Python scripts for data preprocessing and validation workflows Help automate recurring ingestion tasks , moving away from manual interventions Troubleshoot ingestion or integration issues by working across data, scripts, and scheduling systems Build internal tools and reusable components to support ingestion and workflow scalability Collaborate with Product and Engineering on testing and deploying new ingestion features Maintain technical documentation to standardize and scale implementation support Partner with client-facing teams to deeply understand use cases and deliver technical solutions What We’re Looking For: 0–2 years of experience in a technical role (support, implementation, automation, or scripting) Proficiency or strong familiarity with Python and basic scripting logic Understanding of REST APIs and data formats such as JSON, CSV, Excel Exposure to SQL or file-based data manipulation tools (e.g., pandas, shell scripts) Interest in solving operational problems through automation and process improvement Ability to learn financial data structures such as portfolios, benchmarks, and securities Strong analytical and troubleshooting skills, with excellent attention to detail Bachelor’s degree in Computer Science, Engineering, or a related discipline Why This Role Matters Now: As we scale Portfolio Analytics across a growing client base, ingestion and automation will be at the core of our success. This role is an opportunity to join that journey early – helping redefine how support is delivered and building the technical foundation for what comes next. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
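Given the responsibilities above around scripting ingestion and validation workflows, here is a small, hedged pandas sketch of a pre-ingestion check. The required columns and tolerance rules are invented for illustration and do not reflect the actual Capital IQ ingestion schema.

```python
import pandas as pd

# Illustrative schema; real required fields would come from the ingestion spec.
REQUIRED_COLUMNS = {"portfolio_id", "security_id", "weight", "as_of_date"}

def validate_holdings(path: str) -> pd.DataFrame:
    """Load a client holdings file and raise on common ingestion problems."""
    df = pd.read_csv(path)

    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing required columns: {sorted(missing)}")

    if df["weight"].isna().any():
        raise ValueError("null weights found")

    # Weights in each portfolio should sum to ~100% (tolerance is illustrative).
    sums = df.groupby("portfolio_id")["weight"].sum()
    bad = sums[(sums - 100).abs() > 0.5]
    if not bad.empty:
        raise ValueError(f"portfolios with weights not summing to 100: {list(bad.index)}")

    return df
```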
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312027 Posted On: 2025-07-01 Location: Gurgaon, India
Posted 1 month ago
0.0 - 2.0 years
6 - 8 Lacs
Gurgaon
On-site
Overview: Keysight is on the forefront of technology innovation, delivering breakthroughs and trusted insights in electronic design, simulation, prototyping, test, manufacturing, and optimization. Our ~15,000 employees create world-class solutions in communications, 5G, automotive, energy, quantum, aerospace, defense, and semiconductor markets for customers in over 100 countries. Learn more about what we do. We are seeking a passionate and skilled AI Engineer with 0-2 years of experience to join our R&D team, focusing on the design, development, and deployment of intelligent chatbot systems and robust data engineering pipelines. This role bridges conversational AI and data infrastructure, enabling scalable, intelligent digital assistants that enhance user experience and operational efficiency. Our award-winning culture embraces a bold vision of where technology can take us and a passion for tackling challenging problems with industry-first solutions. We believe that when people feel a sense of belonging, they can be more creative, innovative, and thrive at all points in their careers.

Responsibilities: Key Responsibilities

Chatbot Design & Development
Design, develop, and deploy AI-powered chatbots using NLP and LLM frameworks (e.g., open source, Rasa, Dialogflow, OpenAI, Azure Bot Framework). Collaborate with UX and product teams to define conversational flows and user intents. Continuously improve chatbot performance through user feedback, analytics, and retraining cycles. Ensure chatbot compliance with data privacy and governance standards.

Data Engineering
Build and maintain scalable data pipelines to support chatbot training and analytics. Integrate structured and unstructured data from multiple sources (e.g., APIs, databases, logs). Implement data preprocessing, transformation, and storage strategies optimized for AI/ML workloads. Collaborate with data scientists to ensure data readiness for model training and evaluation.

Qualifications: Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI/ML, or related field. 0-2 years of experience in AI/ML engineering or data engineering roles. Proficiency in Python and familiarity with Java or Scala. Strong software development and collaboration skills. Experience with NLP libraries (e.g., spaCy, Hugging Face Transformers) and chatbot platforms. Hands-on experience with data pipeline tools (e.g., Apache Airflow, Spark, Kafka). Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization (Docker, Kubernetes). Strong understanding of data modeling, ETL processes, and data governance.

Preferred Skills
Exposure to MLOps practices and tools (e.g., MLflow, Kubeflow). Experience with LLM fine-tuning and prompt engineering. Knowledge of RESTful APIs and microservices architecture. Familiarity with analytics dashboards and chatbot performance metrics.

Careers Privacy Statement. Keysight is an Equal Opportunity Employer.
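As a rough illustration of the chatbot-plus-data-engineering mix described above, the hedged sketch below trains a tiny TF-IDF intent classifier. The intents and example utterances are invented, and a production bot would rely on an NLP/LLM framework such as those named in the posting rather than this baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Invented training utterances and intent labels, for illustration only.
utterances = [
    "what is the status of my order", "where is my shipment",
    "I want to reset my password", "cannot log in to my account",
    "talk to a support agent", "connect me with a human",
]
intents = ["order_status", "order_status", "account", "account", "handoff", "handoff"]

intent_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
intent_clf.fit(utterances, intents)

print(intent_clf.predict(["my login is not working"]))  # expected to map to 'account'
```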
Posted 1 month ago
0 years
0 Lacs
Delhi
On-site
Job requisition ID: 85231
Date: Jul 1, 2025
Location: Delhi
Designation: Manager
Entity: Deloitte South Asia LLP

We are looking for a highly skilled and experienced Generative AI specialist with expertise in Python, Machine Learning, Data Science, and Statistics. The ideal candidate should have a strong background in Generative AI, NLP (Natural Language Processing), and conversational chatbot development. Additionally, experience with LangChain, LangGraph, and LLMs, and proficiency in cloud platforms like GCP, Azure, and AWS is required. The candidate should also have a basic understanding of application deployment using frameworks such as Flask, Django, and FastAPI.

As a Generative AI Senior Consultant, you will be responsible for delivering innovative AI solutions to our clients. Your role will involve:
Designing, developing, and deploying generative AI models.
Leveraging your expertise in Generative AI, Python, Machine Learning, Data Science, and Statistics to develop cutting-edge solutions for our clients.
Utilizing NLP techniques, LangChain, and LLMs to develop conversational chatbots and language models tailored to our clients' needs.
Collaborating with cross-functional teams to design and implement advanced AI models and algorithms.
Providing technical expertise and thought leadership in the field of Generative AI and NLP to guide clients in adopting AI-driven solutions.
Conducting data analysis, preprocessing, and modeling to extract valuable insights and drive data-driven decision-making.
Staying up to date with the latest advancements in AI technologies, frameworks, and tools, and proactively learning and adopting new technologies to enhance our offerings.
Demonstrating a strong understanding of cloud platforms, particularly GCP, for deploying AI applications.

The candidate should have strong knowledge and experience in working with deep learning projects using CNNs, GANs, Transformers, encoder and decoder algorithms, or other image generation and classification use cases. Familiarity with LLMs (large language models) and their applications. Proficiency in cloud platforms like GCP/Azure/AWS, including experience with deploying AI applications. Solid programming skills in Python and experience with relevant libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Experience in developing web applications using frameworks such as Flask, Django, and FastAPI. Proven track record of delivering successful AI projects and driving business impact. Excellent communication, presentation, and documentation skills. Strong problem-solving abilities and a proactive attitude towards learning and adopting new technologies. Ability to work independently, manage multiple projects simultaneously, and collaborate effectively with diverse stakeholders.
Posted 1 month ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles & Responsibilities:
Understand and design solutions to business problems leveraging data science and AI. Understand and gather requirements from stakeholders. Propose and execute the solution and present the deliverables to stakeholders. Manage and optimize the deliverables. Mentor and train new team members. Develop POCs to enhance the team's capability.

Skillsets 1:
Strong understanding of math, statistics, and the theoretical foundations of statistical and machine learning, parametric and non-parametric models. Experience training and evaluating ML models. Perform statistical analysis of results and refine models. Strong understanding of advanced data mining techniques, curating, processing, and transforming data to produce sound datasets. Strong understanding of the machine learning lifecycle - feature engineering, training, validation, scaling, deployment, scoring, monitoring, and feedback loop. Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modelling. Use various statistical techniques and ML methods to perform predictive modelling/classification for problems around client, distribution, sales, client profiles, and segmentation, and provide relevant & actionable recommendations/insights for the business. Strong hold on visualization tools (or libraries) like SAS Visual Analytics, Tableau, Python, etc. Expert in working with statistical tools (Python/R/SAS). Experience with cloud computing infrastructure like AWS/Azure/GCP. Able to develop, test, and deploy models on Cloud/Web.

Skillsets 2:
1. BERT and Transformer Models: Proficiency in working with BERT and other transformer-based models for various NLP tasks, such as text classification, named entity recognition, question answering, and text generation. Familiarity with fine-tuning BERT on specific downstream tasks using techniques like transfer learning and domain adaptation.
2. Natural Language Processing (NLP): Strong understanding of NLP techniques, including text preprocessing, information extraction, text representation, sentiment analysis, named entity recognition, topic modeling, and information retrieval.
3. Machine Learning and Deep Learning: Proficiency in machine learning and deep learning algorithms and frameworks, such as TensorFlow, PyTorch, scikit-learn, or Keras. Ability to train and evaluate NLP models using supervised and unsupervised learning techniques.
4. Programming Languages: Proficiency in programming languages commonly used in NLP, such as Python, Java, or C++. Ability to write efficient and clean code for data manipulation, model development, and deployment.
5. NLP Libraries and Tools: Familiarity with popular NLP libraries and tools, such as NLTK, SpaCy, Gensim, Stanford NLP, or CoreNLP. Ability to leverage existing libraries for various NLP tasks and adapt them to specific use cases.
8. Continuous Learning: Proactive approach to staying updated with the latest advancements in NLP research, technologies, and best practices. Active participation in conferences, workshops, and online forums.

Qualifications
MCA/BE/B.Tech./MBA/M.Stat./M.Sc. or equivalent degree in a quantitative discipline from a reputed institute. 3+ years of relevant experience.

Soft Skills:
Excellent organizational skills and ability to prioritize a wide range of tasks. Demonstrated initiative and creativity - ability to influence (add value). Strong interpersonal, communication, organizational, and planning skills.
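For the transformer/BERT-based NLP tasks listed above, a minimal, hedged example using the Hugging Face pipeline API is shown below. The default model it downloads is only a convenient demonstration; a real project would fine-tune a task-specific checkpoint as the posting describes.

```python
# Assumes: pip install transformers (plus a PyTorch or TensorFlow backend)
from transformers import pipeline

# Downloads a default sentiment model on first use; swap in a fine-tuned
# BERT checkpoint for the actual classification task.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "The client onboarding went smoothly and ahead of schedule.",
    "The model's predictions were unusable for this segment.",
])
for r in results:
    print(r["label"], round(r["score"], 3))
```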
Posted 1 month ago
1.0 years
0 Lacs
India
On-site
About Crudcook
Crudcook is a trusted partner in Digital Engineering and Enterprise Modernization, with offices across India, Dubai, the United States, and Canada. We collaborate with leading organizations to transform innovative ideas into scalable technology solutions. At Crudcook, we deeply value curiosity, collaboration, and continuous learning.

Role Overview – Machine Learning Engineer
Are you passionate about Machine Learning and AI and have completed an internship or academic projects in this field? We are hiring entry-level ML Engineers. This is a great opportunity to take your academic or internship experience to the next level and work on real-world applications in a fast-paced, impact-driven environment.

Key Responsibilities
Assist in building and deploying machine learning models for real-world use cases. Work on data collection, preprocessing, and feature engineering pipelines. Contribute to the training, testing, and tuning of models using Python and ML frameworks. Collaborate with product, engineering, and data teams to translate business problems into ML solutions. Document work, learn from senior engineers, and contribute to knowledge sharing within the team. Stay updated with the latest research, trends, and tools in AI/ML.

Required Qualifications
0–1 year of experience through internships or academic projects in Machine Learning / Data Science. Proficiency in Python and hands-on familiarity with libraries like scikit-learn, pandas, and at least one of TensorFlow or PyTorch. Strong understanding of ML concepts such as supervised/unsupervised learning, model evaluation, and overfitting. Experience working with datasets (Kaggle, personal projects, internships, etc.). Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. Good problem-solving skills, strong communication, and a learning mindset.

Why Join Us?
Work on high-impact AI projects from day one. Mentorship from experienced AI engineers. Collaborative and fast-growing team focused on building world-class tech solutions. Be part of a mission-led company building global products with a local heart.
Posted 1 month ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Join PwC US - Acceleration Center as a Manager of GenAI Data Science to lead innovative projects and drive significant advancements in GenAI solutions. We offer a competitive compensation package, a collaborative work environment, and ample opportunities for professional growth and impact.
Years of Experience: Candidates with 8+ years of hands-on experience.
Responsibilities: Lead and mentor a team of data scientists in understanding business requirements and applying GenAI technologies to solve complex problems. Oversee the development, implementation, and optimization of machine learning models and algorithms for various GenAI projects. Direct the data preparation process, including data cleaning, preprocessing, and feature engineering, to ensure data quality and readiness for analysis. Collaborate with data engineers and software developers to streamline data processing and integration into machine learning pipelines. Evaluate model performance rigorously using advanced metrics and testing methodologies to ensure robustness and effectiveness. Spearhead the deployment of production-ready machine learning applications, ensuring scalability and reliability. Apply expert programming skills in Python, R, or Scala to develop high-quality software components for data analysis and machine learning. Utilize Kubernetes for efficient container orchestration and deployment of machine learning applications. Design and implement innovative data-driven solutions such as chatbots using the latest GenAI technologies. Communicate complex data insights and recommendations to senior stakeholders through compelling visualizations, reports, and presentations. Lead the adoption of cutting-edge GenAI technologies and methodologies to continuously improve data science practices. Champion knowledge sharing and skill development within the team to foster an environment of continuous learning and innovation.
Requirements: 8-10 years of relevant experience in data science, with significant expertise in GenAI projects. Advanced programming skills in Python, R, or Scala, and proficiency in machine learning libraries like TensorFlow, PyTorch, or scikit-learn. Extensive experience in data preprocessing, feature engineering, and statistical analysis. Strong knowledge of cloud computing platforms such as AWS, Azure, or Google Cloud, and data visualization techniques. Demonstrated leadership in managing data science teams and projects. Exceptional problem-solving, analytical, and project management skills. Excellent communication and interpersonal skills, with the ability to lead and collaborate effectively in a dynamic environment.
Preferred Qualifications: Experience with object-oriented programming languages such as Java, C++, or C#. Proven track record of developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations in a corporate setting. Relevant advanced certifications in data science or GenAI technologies.
Nice To Have Skills: Experience with specific tools such as Azure AI Search, Azure Document Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, and AWS Bedrock. Familiarity with LLM-backed agent frameworks like Autogen, Langchain, Semantic Kernel, and experience in chatbot development.
Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
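The chatbot and GenAI responsibilities above typically come down to a retrieval-augmented generation (RAG) loop. Below is a minimal, hedged sketch using the openai Python SDK with a naive in-memory vector search; the model names, documents, and question are placeholder assumptions, and a production system would use a managed vector store such as Azure AI Search rather than a NumPy array.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder knowledge base; in practice this comes from a document pipeline.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
    "Enterprise customers get a dedicated account manager.",
]

def embed(texts):
    # Embedding model name is an assumption; swap in whichever model you use.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    # Retrieve the most similar document by cosine similarity.
    q = embed([question])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = documents[int(np.argmax(sims))]

    # Ground the generation step in the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do I have to return a product?"))
```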
Posted 1 month ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking a highly skilled and experienced GenAI Data Scientist to join our team at Senior Associate level. As a GenAI Data Scientist, you will play a critical role in developing and implementing machine learning models and algorithms for our GenAI projects. The ideal candidate should have a strong background in data science, with a focus on GenAI technologies, and possess a solid understanding of statistical analysis, machine learning, data visualization, and application programming.
Years of Experience: Candidates with 4+ years of hands-on experience.
Responsibilities: Collaborate with cross-functional teams to understand business requirements and identify opportunities for applying GenAI technologies. Develop and implement machine learning models and algorithms for GenAI projects. Perform data cleaning, preprocessing, and feature engineering to prepare data for analysis. Collaborate with data engineers to ensure efficient data processing and integration into machine learning pipelines. Validate and evaluate model performance using appropriate metrics and techniques. Develop and deploy production-ready machine learning applications and solutions. Utilize object-oriented programming skills to build robust and scalable software components. Utilize Kubernetes for container orchestration and deployment. Design and build chatbots using GenAI technologies. Communicate findings and insights to stakeholders through data visualizations, reports, and presentations. Stay up-to-date with the latest advancements in GenAI technologies and recommend innovative solutions to enhance data science processes.
Requirements: 3-5 years of relevant technical/technology experience, with a focus on GenAI projects. Strong programming skills in languages such as Python, R, or Scala. Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, or scikit-learn. Experience with data preprocessing, feature engineering, and data wrangling techniques. Solid understanding of statistical analysis, hypothesis testing, and experimental design. Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud. Knowledge of data visualization tools and techniques. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced and dynamic environment.
Preferred Qualifications: Experience with object-oriented programming languages such as Java, C++, or C#. Experience with developing and deploying machine learning applications in production environments. Understanding of data privacy and compliance regulations. Relevant certifications in data science or GenAI technologies.
Nice To Have Skills: Experience with Azure AI Search, Azure Doc Intelligence, Azure OpenAI, AWS Textract, AWS Open Search, AWS Bedrock. Familiarity with LLM-backed agent frameworks such as Autogen, Langchain, Semantic Kernel, etc. Experience in chatbot design and development.
Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
Posted 1 month ago
4.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About Us: We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Be a part of an awesome tribe.
Senior Developer / Lead - Data Science: Material+ is hiring for Lead Data Science. We are looking for a Senior Developer / Lead Data Scientist with strong Generative AI experience, skilled in Python, TensorFlow/PyTorch/Scikit-learn, Azure OpenAI GPT, and multi-agent frameworks like LangChain or AutoGen, with strong data preprocessing, feature engineering, and model evaluation expertise. Bonus: familiarity with Big Data tools (Spark, Hadoop, Databricks, SQL/NoSQL) and ReactJS. Please find the detailed job description below for reference.
Immediate Joiner Required. Minimum Experience: 4+ years as a Senior Developer / Lead Data Scientist. Preferred Location: Gurgaon/Bangalore.
Job Description: Generative AI: GenAI models (e.g., Azure OpenAI GPT) and multi-agent system architecture. Proficiency in Python and AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Experience with LangChain, AutoGen, or similar frameworks for multi-agent systems. Strong knowledge of data science techniques, including data preprocessing, feature engineering, and model evaluation. Nice to have: familiarity with Big Data tools (e.g., Spark, Hadoop), Databricks, and databases (SQL, NoSQL). Expertise in ReactJS for building responsive and interactive user interfaces.
What We Offer: Professional development and mentorship. Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified six times in a row). Health and family insurance. 40+ leaves per year along with maternity and paternity leaves. Wellness, meditation, and counselling sessions.
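The multi-agent frameworks named above (LangChain, AutoGen) all build on the same core loop: a model decides whether to call a tool, the tool result is fed back, and the loop repeats until a final answer. Because framework APIs change quickly, the sketch below shows that loop framework-free with the openai SDK; the tool, prompt format, and model name are illustrative assumptions rather than anything specified in the posting.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Illustrative tool; a real agent would wrap flight/hotel/calendar APIs, search, etc.
def get_weather(city: str) -> str:
    return f"Sunny and 31C in {city} (stubbed value)"

TOOLS = {"get_weather": get_weather}

SYSTEM = (
    "You can call tools. To call one, reply ONLY with JSON like "
    '{"tool": "get_weather", "args": {"city": "Pune"}}. '
    "Otherwise reply with a plain-text final answer."
)

def run_agent(user_goal: str, max_steps: int = 4) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_goal}]
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        ).choices[0].message.content
        try:
            call = json.loads(reply)                   # did the model request a tool?
            result = TOOLS[call["tool"]](**call["args"])
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply                               # plain text means the agent is done
    return "Stopped after max_steps without a final answer."

print(run_agent("What's the weather like in Pune right now?"))
```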
Posted 1 month ago
7.0 years
0 Lacs
Chandigarh, India
On-site
Skill Set Required: 2–7 years of experience in software engineering and ML development. Strong proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, or PyTorch. Experience building and evaluating models, along with data preprocessing and feature engineering. Proficiency in REST APIs, Docker, Git, and CI/CD tools. Solid foundation in software engineering principles, including data structures, algorithms, and design patterns. Hands-on experience with MLOps platforms (e.g., MLflow, TFX, Airflow, Kubeflow). Exposure to NLP, large language models (LLMs), or computer vision projects. Experience with cloud platforms (AWS, GCP, Azure) and managed ML services. Contributions to open-source ML libraries or participation in ML competitions (e.g., Kaggle, DrivenData) is a plus.
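As a small illustration of the MLOps platforms listed above, here is a hedged sketch of experiment tracking with MLflow around a scikit-learn model; the experiment name, dataset, and hyperparameter are arbitrary choices for the example.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("wine-classifier")  # hypothetical experiment name

with mlflow.start_run():
    C = 0.5
    model = LogisticRegression(C=C, max_iter=5000)
    model.fit(X_train, y_train)

    # Log the hyperparameter, the held-out accuracy, and the model artifact
    # so runs can be compared later.
    mlflow.log_param("C", C)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")
```

Runs logged this way can then be browsed in the MLflow UI (`mlflow ui`) or queried programmatically when deciding which model version to promote.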
Posted 1 month ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Hi, we have an immediate requirement for an HPC/AI Application Engineer position with our organization, SHI Locuz Enterprise Solutions Pvt Ltd. Please find the job details below: Work Location - Pune/Mumbai/Bangalore. Work Experience - 6+ years (relevant). Subject Matter Expert Skills Required - HPC Application Installation & Deployment, AI Application Deployment. JD for your reference:
Senior HPC/AI Applications Engineer: Experienced HPC/AI Applications Engineer with 5+ years in high-performance computing and AI application deployment. Expert at architecting, optimizing, and benchmarking CPU/GPU-intensive environments, ensuring maximum efficiency in scientific and ML workloads. Mastery of open-source and commercial HPC/AI applications. Deep experience installing, benchmarking, and fine-tuning open-source applications, libraries, and compilers across CPU and GPU platforms. Proficient in deploying, optimizing, and benchmarking scientific codes (WRF, OpenFOAM, LAMMPS, GROMACS, Quantum Espresso, VASP, NAMD, BLAST, GATK, Ansys, Abaqus, MATLAB, LS-DYNA, Nastran, CAE/CFX), etc. Compiler & Library Optimization - Advanced user of Intel OneAPI, AOCC, NVIDIA HPC SDK, GNU, LLVM, and PGI compilers, and MPI libraries (OpenMPI, MPICH, Intel MPI). Deep profiling insights via Nsight, VTune, PAPI. Expert in AI frameworks: TensorFlow (CPU/GPU), PyTorch, Keras, Theano, Caffe, cuDNN. Strong knowledge of NVIDIA NGC, NIM & NeMo. Proficient with workload & resource managers (PBS, LSF, SLURM, Kubernetes). Knowledge of application installation approaches and tools: source builds, CMake, Spack, EasyBuild, Mamba, etc. Benchmarking experience in accelerated HPC (HPL, HPCG, STREAM, MLPerf) and scientific applications. Skilled in NVIDIA GPU tuning, CUDA and NIM workflows, kernel optimization, memory throughput tuning, and multi-GPU scaling strategies. Knowledge of frameworks such as Hugging Face, OpenAI, or other GenAI platforms. Knowledge of data preprocessing and model evaluation tools. Fluent in Bash, Python, and other scripting languages to automate installation, deployment, performance testing, and administrative tasks. Strong interpersonal skills; versed in customer interaction, technical documentation, and collaboration with cross-functional teams.
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 2–5 years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
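The preprocessing and feature-engineering responsibilities above are commonly expressed as a single scikit-learn pipeline so that the same transformations apply at training and inference time. The sketch below is a minimal, hedged illustration; the toy DataFrame and column names are invented for the example.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Invented toy data; real work would load from a warehouse or files.
df = pd.DataFrame({
    "age": [34, 51, None, 23, 45, 38],
    "plan": ["basic", "pro", "pro", "basic", "enterprise", "basic"],
    "monthly_spend": [20.0, 99.0, 110.0, 15.0, None, 25.0],
    "churned": [0, 1, 0, 0, 1, 0],
})

numeric = ["age", "monthly_spend"]
categorical = ["plan"]

preprocess = ColumnTransformer([
    # Impute missing numbers, then scale them.
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric),
    # One-hot encode categories; ignore unseen categories at inference time.
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[numeric + categorical], df["churned"])
print(model.predict(df[numeric + categorical]))
```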
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job Information: Date Opened 07/01/2025. Job Type: Full time. Industry: Technology. Work Experience: 5+ years. City: Chennai. State/Province: Tamil Nadu. Country: India. Zip/Postal Code: 600096.
Job Description
Job Summary: We are looking for a skilled and driven Machine Learning Engineer with 4+ years of experience to design, develop, and deploy ML solutions across business-critical applications. You’ll work alongside data scientists, software engineers, and product teams to create scalable, high-impact machine learning systems in production environments.
Key Responsibilities: Build, train, and deploy ML models for tasks such as classification, regression, NLP, recommendation, or computer vision. Convert data science prototypes into production-ready applications. Design and implement ETL/data pipelines to support model training and inference. Optimize models and pipelines for performance, scalability, and efficiency. Collaborate with DevOps/MLOps teams to deploy and monitor models in production. Continuously evaluate and retrain models based on data and performance feedback. Maintain thorough documentation of models, features, and processes.
Required Qualifications: 4+ years of hands-on experience in machine learning, data science, or AI engineering roles. Proficiency in Python and ML frameworks such as scikit-learn, TensorFlow, PyTorch, or similar. Solid understanding of ML algorithms, data preprocessing, model evaluation, and tuning. Experience working with cloud platforms (AWS/GCP) and containerization tools (e.g., Docker). Familiarity with SQL and NoSQL databases, and working with large-scale datasets. Strong problem-solving skills and a collaborative mindset.
Good to Have: Experience with MLOps tools (e.g., MLflow, SageMaker, Kubeflow). Knowledge of deep learning, transformers, or LLMs. Familiarity with streaming data (Kafka, Spark Streaming) and real-time inference. Exposure to ElasticSearch for data indexing or log analytics. Interest in applied AI/ML innovation within domains like fin-tech, healthcare, or ecommerce.
Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
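The ETL/data-pipeline responsibility above is, at its smallest, an extract-transform-load step that produces a training-ready table. The sketch below is a hedged illustration using sqlite3 and pandas; the table, columns, and feature definitions are invented for the example.

```python
# A tiny, hedged ETL sketch: extract rows from a SQL source, transform them with
# pandas, and load a feature table for training jobs to consume.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (user_id INTEGER, amount REAL, status TEXT);
    INSERT INTO orders VALUES (1, 120.0, 'paid'), (1, 40.0, 'refunded'),
                              (2, 75.0, 'paid'),  (3, 15.0, 'paid');
""")

# Extract
orders = pd.read_sql_query("SELECT * FROM orders", conn)

# Transform: per-user features a downstream model could train on.
features = (orders[orders.status == "paid"]
            .groupby("user_id")
            .agg(total_paid=("amount", "sum"), n_orders=("amount", "count"))
            .reset_index())

# Load: write the feature table back for training/inference jobs.
features.to_sql("user_features", conn, if_exists="replace", index=False)
print(pd.read_sql_query("SELECT * FROM user_features", conn))
```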
Posted 1 month ago
35.0 years
0 Lacs
Greater Kolkata Area
On-site
Key Responsibilities
Team Leadership: Lead, guide, and mentor a team of AI/ML engineers and data scientists, fostering a collaborative, innovative, and productive work environment. Clean Code & Development: Write clean, maintainable, and error-free Python code for developing robust machine learning systems. Model Engineering: Efficiently design, build, and deploy machine learning models and end-to-end solutions. Cross-Functional Collaboration: Work closely with diverse technical teams to ensure timely and efficient delivery of AI/ML projects. Data Science Lifecycle: Implement end-to-end data science processes, including data preprocessing, exploratory analysis, modeling, evaluation, and deployment. Database Management: Work proficiently with SQL and NoSQL databases to manage and retrieve data effectively. API Development: Develop and integrate RESTful APIs using Python web frameworks (e.g., Flask, Django). Debugging & Optimization: Diagnose issues and optimize existing code for performance improvements. Agile Methodologies: Follow Agile/Scrum software development lifecycle practices consistently. LLM & RAG Implementation: Design, develop, fine-tune, and deploy AI models focusing on Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). Model Validation & Evaluation: Conduct training, validation, and evaluation experiments, refining models based on statistical analysis and performance metrics. Security & Compliance: Ensure that AI/ML solutions adhere to security best practices and data privacy regulations (e.g., GDPR).
Technical Qualifications
Python Expertise: 3–5 years of hands-on experience with Python programming. ML/DL Experience: 3–5 years of experience building, training, and deploying Machine Learning and Deep Learning models. Deep Learning Frameworks: Proficiency with TensorFlow, PyTorch, and Keras. Web Frameworks: Solid experience with Python web frameworks (Flask, Django) for API development. Core Libraries: Strong expertise in NumPy, Pandas, Matplotlib, Scikit-learn, SpaCy, NLTK, and HuggingFace Transformers. Algorithms & Techniques: Deep understanding of ML/DL algorithms, with advanced techniques in NLP and/or Computer Vision. Production-Grade AI: Proven experience designing and deploying AI solutions leveraging LLMs and RAG. NLP & Conversational AI: Hands-on expertise in text representation, semantic extraction, sentiment analysis, and conversational AI. Concurrency: Practical knowledge of multithreading, concurrency, and asynchronous programming in Python. Database Proficiency: Advanced skills in SQL (PostgreSQL, MySQL, SQLite, MariaDB) and NoSQL (MongoDB) databases. Deployment & MLOps: Experience with containerization (Docker/Kubernetes), cloud platforms (AWS, Azure, GCP), and familiarity with MLOps principles for CI/CD, monitoring, and logging. Version Control: Proficient understanding of Git and Bitbucket for software version control. C++: Experience with C++ development is a plus.
Soft Skills
Analytical Thinking: Strong analytical and structured problem-solving capabilities. Proactive Problem-Solving: Logical reasoning and a hands-on approach to debugging and optimization. Team Collaboration: Effective interpersonal skills to work seamlessly with cross-functional teams. Clear Communication: Excellent ability to articulate technical concepts to diverse audiences. Continuous Learning: Commitment to staying current on emerging trends in AI/ML, NLP, and Computer Vision. (ref:hirist.tech)
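The API Development responsibility above names Flask for exposing models over REST; a minimal prediction endpoint might look like the sketch below, assuming a scikit-learn model has already been trained and saved (the model.joblib filename and flat feature vector are placeholders).

```python
# Minimal Flask prediction service -- a sketch, not a production deployment.
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained artifact

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = np.array(payload["features"], dtype=float).reshape(1, -1)
    prediction = model.predict(features)[0]
    # Convert numpy scalars to plain Python types for JSON serialization.
    return jsonify({"prediction": prediction.item() if hasattr(prediction, "item") else prediction})

if __name__ == "__main__":
    # Development server only; production would sit behind gunicorn/uwsgi.
    app.run(host="0.0.0.0", port=8000)
```

A client would POST JSON such as {"features": [1.2, 3.4]} to /predict and receive the prediction back as JSON.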
Posted 1 month ago
0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
About Veersa: Veersa Technologies is a US-based IT services and AI enablement company founded in 2020, with a global delivery center in Noida (Sector 142). Founded by industry leaders, it has grown at an impressive 85% YoY and has been profitable since inception. Team strength: almost 400 professionals and growing rapidly.
Our Services Include: Digital & Software Solutions: Product Development, Legacy Modernization, Support. Data Engineering & AI Analytics: Predictive Analytics, AI/ML Use Cases, Data Visualization. Tools & Accelerators: AI/ML-embedded tools that integrate with client systems. Tech Portfolio Assessment: TCO analysis, modernization roadmaps, etc.
Technology Stack: AI/ML, IoT, Blockchain, MEAN/MERN stack, Python, GoLang, RoR, Java Spring Boot, Node.js. Databases: PostgreSQL, MySQL, MS SQL, Oracle. Cloud: AWS & Azure (serverless architecture).
About The Role: We're seeking a skilled and motivated ML/AI Engineer to join our team and drive end-to-end AI development for cutting-edge healthcare prediction models. As part of our AI delivery team, you will be working on designing, developing, and deploying ML models to solve complex business challenges.
Key Responsibilities: Design, develop, and deploy scalable machine learning models and AI solutions. Collaborate with engineers and product managers to understand business requirements and translate them into technical solutions. Analyse and preprocess large datasets for training and testing ML models. Experiment with different ML algorithms and techniques to improve model performance.
Technical Skills: Proficiency in programming languages such as Python and R. Strong knowledge of classical machine learning algorithms, with hands-on experience in the following: supervised models (classification, regression), unsupervised models (clustering algorithms, autoencoders), and ensemble models (stacking, bagging, boosting techniques, Random Forest, XGBoost). Experience in data preprocessing, feature engineering, and handling large-scale datasets. Model evaluation techniques like accuracy, precision, recall, F1 score, and AUC-ROC for classification, and MAE, MSE, RMSE, and R-squared for regression. Explainable AI (XAI) techniques such as SHAP values, LIME, feature importance from decision trees, and partial dependence plots. Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or Keras. Building and deploying model APIs using frameworks like Flask, FastAPI, Django, TensorFlow Serving, etc. Knowledge of cloud platforms like Azure (preferred), AWS, and GCP, and experience deploying models in such environments.
Nice To Have: Object-oriented programming. Familiarity with NLP or time series analysis. Exposure to deep learning models (RNN, LSTM) and working with GPUs. (ref:hirist.tech)
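The model evaluation techniques listed above map directly onto scikit-learn's metrics module; the hedged sketch below computes the named classification and regression metrics on small placeholder arrays standing in for real model outputs.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, mean_absolute_error,
                             mean_squared_error, precision_score, r2_score,
                             recall_score, roc_auc_score)

# Placeholder classification outputs (true labels, hard predictions, scores).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.9, 0.4, 0.2, 0.8, 0.6, 0.7, 0.95])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))  # uses scores, not hard labels

# Placeholder regression outputs.
r_true = np.array([3.1, 2.4, 5.0, 4.2])
r_pred = np.array([2.9, 2.8, 4.6, 4.5])

mse = mean_squared_error(r_true, r_pred)
print("MAE :", mean_absolute_error(r_true, r_pred))
print("MSE :", mse)
print("RMSE:", np.sqrt(mse))
print("R^2 :", r2_score(r_true, r_pred))
```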
Posted 1 month ago