0 years
0 Lacs
India
Remote
Job Title: Machine Learning Developer Company: Lead India Location: Remote Job Type: Full-Time Salary: ₹3.5 LPA About Lead India: Lead India is a forward-thinking organization focused on creating social impact through technology, innovation, and data-driven solutions. We believe in empowering individuals and building platforms that make governance more participatory and transparent. Job Summary: We are looking for a Machine Learning Developer to join our remote team. You will be responsible for building and deploying predictive models, working with large datasets, and delivering intelligent solutions that enhance our platform’s capabilities and user experience. Key Responsibilities: Design and implement machine learning models for classification, regression, and clustering tasks Collect, clean, and preprocess data from various sources Evaluate model performance using appropriate metrics Deploy machine learning models into production environments Collaborate with data engineers, analysts, and software developers Continuously research and implement state-of-the-art ML techniques Maintain documentation for models, experiments, and code Required Skills and Qualifications: Bachelor’s degree in Computer Science, Data Science, or a related field (or equivalent practical experience) Solid understanding of machine learning algorithms and statistical techniques Hands-on experience with Python libraries such as scikit-learn, pandas, NumPy, and matplotlib Familiarity with Jupyter notebooks and experimentation workflows Experience working with datasets using tools like SQL or Excel Strong problem-solving skills and attention to detail Ability to work independently in a remote environment Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch Exposure to cloud-based ML platforms (e.g., AWS SageMaker, Google Vertex AI) Understanding of model deployment using Flask, FastAPI, or Docker Knowledge of natural language processing or computer vision What We Offer: Fixed annual salary of ₹3.5 LPA 100% remote work and flexible hours Opportunity to work on impactful, mission-driven projects using real-world data Supportive and collaborative environment for continuous learning and innovation
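To make the modelling work described in this posting concrete, here is a minimal, illustrative sketch of training and evaluating a classifier with pandas and scikit-learn; the CSV path and column names are invented assumptions, not part of the role.

```python
# Minimal sketch of a classification workflow with scikit-learn (illustrative only;
# the file name and column names are hypothetical assumptions).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("training_data.csv")          # data collection/cleaning would precede this
X = df.drop(columns=["target"])                # feature matrix
y = df["target"]                               # label column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate with appropriate metrics before considering deployment.
print(classification_report(y_test, model.predict(X_test)))
```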
Posted 6 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from “HCL Software”, a Product Development Division of HCL Tech! HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, Automation, Data & Analytics, Security and Cloud.
About the Unica Product: The HCL Unica+ Marketing Platform enables our customers to deliver precision, high-performance marketing campaigns across multiple channels like social media, AdTech platforms, mobile applications, websites, etc. The Unica+ Marketing Platform is a data- and AI-first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention.
Note: Are you available for a face-to-face (F2F) interview on 2nd August (Saturday) at Hinjewadi, Pune?
We are seeking Senior & Lead Python Developers (Data Science, AI/ML) with 6+ years of strong data science and machine learning skills and experience to deliver AI-driven marketing campaigns.
Qualifications & Skills: At least 6-12 years of Python development experience, with at least 4 years in data science and machine learning. Experience with Customer Data Platforms (CDP) like Treasure Data, Epsilon, Tealium, Adobe, Salesforce is advantageous. Experience with AWS SageMaker is advantageous. Experience with LangChain and RAG for Generative AI is advantageous. Expertise in integration tools and frameworks like Postman, Swagger, and API gateways. Knowledge of REST, JSON, XML, SOAP is a must. Ability to work well within an agile team environment and apply the related working methods. Excellent communication & interpersonal skills. A 4-year degree in Computer Science or IT is a must.
Responsibilities: Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization. Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis. Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation. SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases. Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics, identify patterns, anomalies, and relationships. Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA). Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch. This includes understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems. Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques.
Deployment & MLOps Concepts: Understanding of how to deploy machine learning models into production environments, including concepts of API creation, containerization (Docker), version control for models, and monitoring. Travel: 30% +/- travel required Location: India (Pune preferred) Compensation: Base salary, plus bonus.
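For the model evaluation skills listed in this posting, a small illustrative sketch of computing the named metrics (precision, recall, F1-score, AUC-ROC, RMSE) with scikit-learn; the labels and scores below are made up.

```python
# Illustrative sketch of the evaluation metrics named above; all arrays are made-up examples.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, mean_squared_error

y_true = np.array([1, 0, 1, 1, 0, 1])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8])   # predicted probabilities
y_pred = (y_prob >= 0.5).astype(int)                 # thresholded class labels

print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
print("auc-roc  :", roc_auc_score(y_true, y_prob))

# For regression-style problems, RMSE:
y_reg_true = np.array([3.1, 2.4, 5.0])
y_reg_pred = np.array([2.9, 2.8, 4.6])
print("rmse     :", np.sqrt(mean_squared_error(y_reg_true, y_reg_pred)))
```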
Posted 6 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose: To evaluate and validate the quality, accuracy, and relevance of the data and analytics outputs, and ensure they adhere to the standards and best practices. To stay updated with the latest trends and developments in the data and analytics field, and leverage them to improve the team's capabilities and performance. To lead and mentor a team of data analysts and engineers, and foster a culture of learning, innovation, and collaboration within the team and across the organization.
Key Accountabilities
Tag Management • Own channel-wise mapping and click-to-visit sanity • Implement best practices in Tag Management
Adobe Analytics Implementation • Define and own dimensions and metrics • Define channel rules and custom implementation as per business objectives • Own GA/AA versus DB variance in transactions and revenue • Installing, configuring, testing, and troubleshooting Adobe software products such as Adobe Experience Manager, Adobe Analytics, Adobe Target, and Adobe Campaign. • Providing technical guidance and best practices for Adobe software implementation and integration. • Creating and maintaining documentation and reports on Adobe software performance, issues, and solutions.
A/B Testing & Personalization • Drive the overall conversion optimization and landing page optimization agenda • Co-own traffic-to-lead conversion with Product
Attribution and MIS Automation • Manage the complexity of multiple truths (AdWords, DBM, Facebook, Google Analytics, internal CRM) and converge on a single truth • Automation of critical dashboards for decision making and business insights
Channel mix modelling & Data driven attribution • Answer the ultimate questions: How much to spend in which channel? • Who to market, when to spend and which channel to use? • How to successfully move from campaign to audience marketing?
Other Competencies • Apply advanced statistical and analytical techniques, such as machine learning, predictive modeling, and optimization, to generate insights and solutions for complex business problems and opportunities
Technical Competencies (Preferred domain knowledge) SQL: Competency to write and execute complex queries, join multiple tables, create views and functions, and optimize the performance of your database. Python: Ability to use Python for data manipulation, processing, and modeling, as well as for creating web applications and APIs. You should also be proficient in using libraries such as pandas, numpy, scipy, sklearn, matplotlib, seaborn, and flask. R: Use R for statistical analysis, data visualization, and machine learning. You should also be familiar with popular packages such as tidyverse, ggplot2, dplyr, tidyr, caret, and shiny. Tableau: Ability to create interactive dashboards and reports using Tableau, as well as connect to various data sources and perform data blending and aggregation. Power BI: Competency to use Power BI for data visualization and business intelligence, as well as create and share reports and dashboards using Power BI Desktop and Power BI Service. Familiar with web services and APIs such as REST, SOAP, and OAuth.
Skills/Qualities Required Strong analytical and critical thinking skills, with proficiency in data analysis tools and techniques Excellent communication skills, capable of translating complex data into clear, actionable insights for non-technical stakeholders Detail-oriented with strong organizational skills, able to manage multiple projects and meet deadlines Keen interest in staying updated with the latest trends in data analytics and e-commerce
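To make the "GA/AA versus DB variance" and "multiple truths" reconciliation in this posting concrete, a small hypothetical pandas sketch comparing analytics-reported revenue with the internal database; all channel names and figures are invented.

```python
# Hypothetical sketch: reconcile analytics-reported revenue with the internal DB
# to quantify the GA/AA-versus-DB variance mentioned above. All data is invented.
import pandas as pd

ga = pd.DataFrame({"channel": ["paid_search", "social", "email"],
                   "revenue": [120000, 45000, 30000]})
db = pd.DataFrame({"channel": ["paid_search", "social", "email"],
                   "revenue": [131000, 44000, 28500]})

merged = ga.merge(db, on="channel", suffixes=("_ga", "_db"))
merged["variance_pct"] = 100 * (merged["revenue_ga"] - merged["revenue_db"]) / merged["revenue_db"]
print(merged.sort_values("variance_pct", key=abs, ascending=False))
```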
Posted 6 days ago
8.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Responsibilities: Knowledge of the TPP industry – MF, PMS/AIF, Unlisted, LI, Bonds, etc. Data reporting of TPP – MF, PMS/AIF, Unlisted, LI, Bonds, etc. Income reconciliation and AUM validation. Raising invoices for PMS/AIF. Following up with AMCs for brokerage, transaction and AUM related concerns. KRA incentive validation. Automate processes and publish dashboards using Python, Excel reports, and VBA macros. Collaborate with business units to gather reporting requirements and translate them into technical solutions. Maintain documentation for all developed solutions and support end-user training. Ensure data integrity, security, and compliance with internal and external regulations.
Education & Experience: Postgraduate with overall experience of 8-10 years. Experience in TPP, BI, and data reporting.
Technical Skills: Advanced Excel, VBA macros, proficiency in Python (NumPy, Pandas), and SQL. Familiarity with Power BI.
Soft Skills: Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Ability to work independently and manage multiple tasks with tight deadlines.
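As a rough, hypothetical illustration of the "automate processes and publish dashboards using Python" responsibility above, a minimal pandas-to-Excel sketch; the file, sheet, and column names are invented.

```python
# Hypothetical sketch of automating a TPP-style Excel report with pandas.
# File, sheet, and column names are assumptions for illustration only.
import pandas as pd

txns = pd.read_excel("amc_transactions.xlsx", sheet_name="raw")

# AUM and brokerage rolled up per AMC and product (MF, PMS/AIF, etc.).
summary = (txns.groupby(["amc", "product"], as_index=False)
               .agg(aum=("aum", "sum"), brokerage=("brokerage", "sum")))

with pd.ExcelWriter("tpp_dashboard.xlsx") as writer:   # requires openpyxl
    summary.to_excel(writer, sheet_name="summary", index=False)
```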
Posted 6 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position: Persistent is scaling up its global Digital Trust practice. Digital Trust encompasses the domains of Data Privacy, Responsible AI (RAI), GRC (Governance, Risk & Compliance), and other related areas. This is a rapidly evolving domain globally that is at the intersection of technology, law, ethics, and compliance. Team members of this practice get an opportunity to work on innovative and cutting-edge solutions. We are looking for a highly motivated and technically skilled Responsible AI Testing Analyst with 1–3 years of experience to join our Digital Trust team. In this role, you will be responsible for conducting technical testing and validation of AI systems or agents against regulatory and ethical standards, such as the EU AI Act, AI Verify (Singapore), NIST AI RMF, and ISO 42001. This is a technical position requiring knowledge of AI/ML models, testing frameworks, fairness auditing, explainability techniques, and regulatory understanding of Responsible AI. Role: AI Testing Analyst Location: All PSL Location Experience: 1-3 years Job Type: Full Time Employment What You’ll Do: Perform technical testing of AI systems and agents using pre-defined test cases aligned with regulatory and ethical standards. Conduct model testing for risks such as bias, robustness, explainability, and data drift using AI assurance tools or libraries. Support the execution of AI impact assessments and document the test results for internal and regulatory audits. Collaborate with stakeholders to define assurance metrics and ensure adherence to RAI principles. Assist in setting up automated pipelines for continuous testing and monitoring of AI/ML models. Prepare compliance-aligned reports and dashboards showcasing test results and conformance to RAI principles. Expertise You’ll Bring : 1 to 3 years of hands-on experience in AI/ML model testing, validation, or AI assurance roles. Experience with testing AI principles such as fairness, bias detection, robustness, accuracy, explainability, and human oversight. Practical experience with tools like AI Fairness 360, SHAP, LIME, What-If Tool, or commercial RAI platforms Ability to run basic model tests using Python libraries (e.g., scikit-learn, pandas, numpy, tensorflow/keras, PyTorch). Understanding of regulatory implications of high-risk AI systems and how to test for compliance. Strong documentation skills to communicate test findings in an auditable and regulatory-compliant manner. Preferred Certifications (any one or more): AI Verify testing framework training (preferred) IBM AI Fairness 360 Toolkit Certification AI Certification (Google Cloud) – Vertex AI + SHAP/LIME ModelOps/MLOps Monitoring with Bias Detection – AWS SageMaker / Azure ML / GCP Vertex AI TensorFlow Developer / Python for Data Science and AI / Applied Machine Learning in Python Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. 
We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
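To make the bias-testing work described in this posting concrete, a minimal, tool-agnostic sketch of a demographic parity / disparate impact check computed directly with pandas; the groups and predictions are invented, and in practice toolkits such as AI Fairness 360 wrap checks like these.

```python
# Illustrative bias check: demographic parity difference and disparate impact
# computed by hand with pandas. The data below is invented for demonstration.
import pandas as pd

results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   0,   1,   0,   0,   1,   0],   # model's positive/negative decisions
})

rates = results.groupby("group")["prediction"].mean()   # selection rate per group
parity_difference = rates.max() - rates.min()
disparate_impact = rates.min() / rates.max()

print(rates)
print("demographic parity difference:", round(parity_difference, 3))
print("disparate impact ratio:", round(disparate_impact, 3))   # a common rule of thumb flags < 0.8
```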
Posted 6 days ago
0.0 - 3.0 years
6 - 8 Lacs
Jntu Kukat Pally, Hyderabad, Telangana
On-site
Python + Frontend Trainer (Full-Time ) Location: Manjeera Trinity, JNTU, Hyderabad Joining: Immediate Joiners Preferred Company: Greatcoder Trainings LLP About Us: Greatcoder Trainings LLP is a fast-growing tech training institute known for delivering real-time project-based training and offering 100% placement support. We are currently hiring a passionate and skilled Python + Frontend Trainer to join our expert team. Key Responsibilities: Conduct in-person training sessions on Python and Frontend technologies Prepare and deliver course materials, assignments, and real-time projects Resolve student queries and provide hands-on guidance Track student performance and offer constructive feedback Assist in mock interviews and assessments Required Skills: Core Python: Variables, Data Types, Loops, Functions, Modules Object-Oriented Programming in Python File Handling and Exception Handling Python Libraries: NumPy, Pandas, Matplotlib (preferred) Frameworks: Django or Flask (any one is a must) Basic understanding of REST APIs Frontend: HTML, CSS, JavaScript ReactJS: Practical and teaching experience Strong communication and presentation abilities Preferred Qualifications: Prior experience in teaching/training (offline or online) Bachelor’s degree in Computer Science or related field (preferred but not mandatory) Benefits: Flexible work modes (Full-time) Health insurance Real-time project exposure Friendly and growth-focused work culture Competitive salary Job Details: Job Type: Full-time Pay: ₹6,00,000 – ₹8,00,000 per year Schedule: Day shift Language: English (Preferred) Work Location: In person (Manjeera Trinity, JNTU, Hyderabad) Apply Now: Send your resume to bhupesh@thegreatcoder.com Contact: 9032190326 Job Type: Full-time Pay: ₹600,000.00 - ₹800,000.00 per year Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Jntu Kukat Pally, Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python: 3 years (Preferred) Language: Telugu (Preferred) English (Preferred) Work Location: In person
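For the "basic understanding of REST APIs" and Flask items in the syllabus above, a minimal classroom-style sketch; the route and data are made up for illustration.

```python
# Minimal Flask REST API example of the kind a trainer might demo in class.
# The /students route and its data are made up for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

STUDENTS = [{"id": 1, "name": "Asha"}, {"id": 2, "name": "Ravi"}]

@app.route("/students", methods=["GET"])
def list_students():
    return jsonify(STUDENTS)

if __name__ == "__main__":
    app.run(debug=True)   # serves http://127.0.0.1:5000/students
```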
Posted 6 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Roles & Responsibilities:
Professional Skills: Business Analysis, Analytical Thinking, Problem Solving, Decision Making, Leadership, Managerial, Time Management, Domain Knowledge. Work simplification - methods that maximize output while minimizing expenditure and cost. Analytics with data - interprets data and turns it into information which can offer ways to improve a business. Communication - good verbal communication and interpersonal skills are essential for collaborating with customers.
Technical Skills: Python/NumPy, Seaborn, Pandas, Selenium, Beautiful Soup (basic), Spotfire, ML libraries, RPA, R, IronPython, HTML, CSS, JavaScript, SQL, HQL, Git/GitLab, Spark, Scala, web services, Spotfire/Tableau, JIRA.
Tool Skills: Project management tools, documentation tools, modeling [wireframe] tools.
Database Skills: MS SQL, Postgres, MS Access, MongoDB.
Rigorous - the ability to analyse qualitative data quickly and rigorously. Adaptability - being able to adapt to changing environments and work processes.
Experience: Minimum 8 years of experience in Data Science, preferably in the Automobile Engineering domain.
We look for candidates with matching skills in the areas below. Key Skills: DAO - AI/ML, Automation, Python, BigQuery, Project Management, Agile, SDLC.
Location: Chennai - Mahindra World City Campus. Work Mode: Hybrid
Posted 6 days ago
0 years
0 Lacs
India
On-site
#DataScientist #DataAnalysis #Retrieval-Augmented Generation (RAG) #EDA #NumPy #scikit-learn #pandas #NLP #NER #FAISS #AWS #BERT #Python
Job Overview: • Build, train, and validate machine learning models for prediction, classification, and clustering to support Next Best Action (NBA) use cases. • Conduct exploratory data analysis (EDA) on both structured and unstructured data to extract actionable insights and identify behavioral drivers. • Design and deploy A/B testing frameworks and build pipelines for model evaluation and continuous monitoring. • Develop vectorization and embedding pipelines using models like Word2Vec and BERT to enable semantic understanding and similarity search. • Implement Retrieval-Augmented Generation (RAG) workflows to enrich recommendations by integrating internal and external knowledge bases. • Collaborate with cross-functional teams (engineering, product, marketing) to deliver data-driven Next Best Action strategies. • Present findings and recommendations clearly to technical and non-technical stakeholders.
Required Skills & Experience: • Strong programming skills in Python, including libraries like pandas, NumPy, and scikit-learn. • Practical experience with text vectorization and embedding generation (Word2Vec, BERT, SBERT, etc.). • Proficiency in prompt engineering and hands-on experience in building RAG pipelines using LangChain, Haystack, or custom frameworks. • Familiarity with vector databases (e.g., PostgreSQL with pgvector, FAISS, Pinecone, Weaviate). • Expertise in Natural Language Processing (NLP) tasks such as NER, text classification, and topic modeling. • Sound understanding of supervised learning, recommendation systems, and classification algorithms. • Exposure to cloud platforms (AWS, GCP, Azure) and containerization tools (Docker, Kubernetes) is a plus.
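As a hedged sketch of the embedding and similarity-search pipeline described above, here is a minimal example using sentence-transformers and FAISS; the model name and documents are placeholders, and both libraries are assumed to be installed.

```python
# Illustrative embedding + vector-search pipeline for semantic similarity.
# The model name and documents are placeholders; assumes sentence-transformers and faiss-cpu.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = ["refund policy for premium plan", "how to reset a password", "upgrade to the annual plan"]
model = SentenceTransformer("all-MiniLM-L6-v2")

emb = model.encode(docs, convert_to_numpy=True).astype("float32")
index = faiss.IndexFlatL2(emb.shape[1])      # exact L2 index
index.add(emb)

query = model.encode(["customer wants money back"], convert_to_numpy=True).astype("float32")
distances, ids = index.search(query, 2)      # top-2 nearest documents
print([docs[i] for i in ids[0]])
```

In a RAG workflow, the retrieved documents would then be passed as context to a generation model; this sketch stops at retrieval.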
Posted 6 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Lead Python Software Engineer to join its team of experts. Skill: Lead Python Software Engineer. Exp: 5+ years. NP: Immediate to 15 days. Location: Chennai/Madurai. Interested candidates can send their resume to annie@egrovesys.com. Required Skills: 5+ years of strong experience in Python and 2 years in the Django web framework. Experience or knowledge in implementing various design patterns. Good understanding of the MVC framework and object-oriented programming. Experience in PGSQL/MySQL and MongoDB. Good knowledge of different frameworks, packages and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc. Experience developing in a Linux environment, Git and Agile methodology. Good to have knowledge in any one of the JavaScript frameworks: jQuery, Angular, ReactJS. Good to have experience in implementing charts and graphs using various libraries. Good to have experience in multi-threading and REST API management. About Company: eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, start-ups, and government agencies. At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
Posted 6 days ago
0.6 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company: Omnipresent Robot Technologies Pvt. Ltd. Location: Noida Sector-80 | Type: Full-Time About Us: Omnipresent Robot Tech Pvt. Ltd. is an innovative startup pushing the boundaries of robotics, drones, and space tech. We recently contributed to ISRO’s Chandrayaan-3 mission by developing the perception and navigation module for the Pragyaan rover. Join our dynamic team to work on satellite-based defense projects and grow your career! Position Overview: We are looking for AI/ML Engineers for senior and junior roles to assist in the development of AI models and algorithms for our satellite-based defense project. You will work with a skilled team to train, test, and deploy ML models, gaining hands-on experience in cutting-edge AI applications. Key Responsibilities: • Assist in designing and developing AI models using ML/DL techniques. • Implement, test, and fine-tune ML models using popular frameworks (e.g., TensorFlow, PyTorch). • Load and deploy models on embedded platforms (like Jetson Orin NX). • Analyze datasets, preprocess data, and extract features for training. • Support code compatibility and optimization on embedded systems. • Monitor and evaluate model performance, suggesting improvements. • Collaborate with senior engineers to integrate AI models into production environments. • Stay updated on the latest AI trends and apply new techniques to projects. Qualifications: • B.Tech. in Computer Science, IT, or related field. • 0.6-5 years of experience in ML model development. • Proficiency in Python and familiarity with ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). • Understanding of data preprocessing, model training, and deployment. • Basic knowledge of GPU acceleration (CUDA) and embedded platforms (Jetson Orin NX). • Familiarity with data processing tools (e.g., NumPy, Pandas). • Strong problem-solving and analytical skills. • Effective communication and team collaboration abilities. Why Join Us? • Be part of high-impact satellite defense projects. • Learn from experts in AI and embedded systems. • Work in a start-up environment that fosters innovation and creativity.
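For the "implement, test, and fine-tune ML models" responsibility above, a compact PyTorch training-loop sketch on synthetic data; the network and data are illustrative only, not the project's actual models.

```python
# Compact PyTorch training-loop sketch on synthetic data; the network and data
# are illustrative only, not the satellite project's actual models.
import torch
from torch import nn

X = torch.randn(256, 16)                       # fake features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)    # fake binary labels

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```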
Posted 6 days ago
0 years
0 Lacs
India
On-site
We are hiring Software Engineers @ Hyderabad location | Experience: 4-5 Yrs | Education: CS or EE/CE degree | NP: 30 Days
Mandatory skills: ML, C++ (must). Hands-on experience in C/C++, Python, NumPy, OpenCL C++. Hands-on experience in ML frameworks like TensorFlow, PyTorch and ONNX. Hands-on experience in TVM.
REQUIRED KNOWLEDGE, SKILLS, AND ABILITIES: Hands-on experience in C, C++, Python, NumPy, ML frameworks like TensorFlow, PyTorch, ONNX and others. Good knowledge of linear algebra. Knowledge of network optimization, graph lowering and fine-tuning. Good analytical skills. Good understanding of algorithms, OOP concepts and software design patterns. Good debugging skills. Strong knowledge of the TVM framework. Experience or knowledge in hardware architecture is an added advantage. Experience in full-stack framework development like any of multimedia frameworks, GStreamer, OpenVX, OpenMAX, OpenGL, OpenGL ES, Vulkan, Mesa, etc. is a plus. Experience in driver development on the Linux platform. OpenCL C/assembly compute kernels.
JOB RESPONSIBILITIES: Bring up, test and debug neural networks using ML frameworks like TensorFlow, PyTorch, ONNX, etc. Bring up and enhance TVM features. Analyze and enhance the efficiency and stability of neural networks. Develop and maintain the Model Conversion Tool software stack. Network optimization, node fusion, graph lowering, adding custom operations, profiling and performance fine-tuning.
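One common step in this kind of framework bring-up is exporting a trained model to ONNX so it can be consumed by downstream compilers such as TVM; below is a minimal sketch with a toy model (the real networks and toolchain are project specific).

```python
# Minimal sketch: export a toy PyTorch model to ONNX for downstream toolchains
# (e.g., TVM). The model here is a placeholder, not a production network.
import torch
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU())
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)      # NCHW example shape
torch.onnx.export(
    model, dummy_input, "toy_model.onnx",
    input_names=["input"], output_names=["output"],
    opset_version=13,
)
print("exported toy_model.onnx")
```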
Posted 6 days ago
4.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Analyst – Python (4 to 5 Years Experience) Location: Hyderabad Employment Type: Full-time Job Summary: We are seeking an experienced Data Analyst with strong expertise in Python to join our analytics team. The ideal candidate will have 4–5 years of experience in data analysis, data wrangling, and reporting using Python and other relevant tools. You will work closely with cross-functional teams to deliver actionable insights and support data-driven decision-making. Key Responsibilities: Analyze large and complex datasets using Python (pandas, NumPy, etc.) Design and implement data pipelines for data cleaning, transformation, and analysis Create visualizations and dashboards using tools like Power BI, Tableau, or Python (Matplotlib, Seaborn, Plotly) Collaborate with business stakeholders to gather requirements and translate them into data solutions Develop and maintain automated reports and dashboards Perform root cause analysis and identify trends, patterns, and anomalies in data Support A/B testing, forecasting, and statistical modeling as needed Document data analysis processes and code for reproducibility and knowledge sharing Required Skills: Strong proficiency in Python for data analysis (pandas, NumPy, matplotlib/seaborn) Solid understanding of SQL for querying relational databases Experience with data visualization tools (Power BI, Tableau, or Python-based visualizations) Good understanding of data cleaning, manipulation, and wrangling techniques Familiarity with version control systems like Git Strong analytical thinking and problem-solving skills Ability to communicate insights clearly to both technical and non-technical stakeholders Preferred Skills: Experience with cloud platforms (AWS, GCP, Azure) and data tools like BigQuery or Snowflake Knowledge of machine learning basics and tools (scikit-learn, etc.) Experience working in an Agile environment Understanding of business KPIs, reporting metrics, and dashboards Educational Qualification: Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Data Science, or a related field Experience: 4 to 5 years of hands-on experience in data analysis and Python programming
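To ground the analysis responsibilities above, a small hypothetical EDA sketch with pandas and matplotlib; the CSV and column names are invented.

```python
# Hypothetical EDA sketch: load, clean, aggregate, and plot with pandas/matplotlib.
# The file name and column names are invented for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv", parse_dates=["order_date"])
df = df.dropna(subset=["revenue"])                        # basic cleaning

monthly = (df.set_index("order_date")
             .resample("M")["revenue"]
             .sum())

monthly.plot(kind="line", title="Monthly revenue")        # trend/anomaly inspection
plt.tight_layout()
plt.savefig("monthly_revenue.png")
```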
Posted 6 days ago
5.0 years
0 Lacs
India
Remote
About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.
Job Description: Job Title: LLM CUDA/C++ and Python Developer Location: Pan India Experience: 6+ yrs. Employment Type: Contract to hire Work Mode: Remote Notice Period: Immediate joiners
Role Overview: This role is part of a project supporting leading LLM companies. The primary objective is to help these foundational LLM companies improve their Large Language Models. We support companies in enhancing their models by offering high-quality proprietary data. This data can be used as a basis for fine-tuning models or as an evaluation set to benchmark performance. In an SFT data generation workflow, you might have to put together a prompt that contains code and questions, then elaborate model responses, and translate the provided CUDA/C++ code into equivalent Python code using PyTorch and NumPy to replicate the algorithm's behavior. For RLHF data generation, you may need to create a prompt or use one provided by the customer, ask the model questions, and evaluate the outputs generated by different versions of the LLM, comparing them and providing feedback, which is then used in fine-tuning processes. Please note that this role does not involve building or fine-tuning LLMs.
What does the day-to-day look like: ● Translate CUDA/C++ code into equivalent Python implementations using PyTorch and NumPy, ensuring logical and performance parity. ● Analyze CUDA kernels and GPU-accelerated code for structure, efficiency, and function before translation. ● Evaluate LLM-generated translations of CUDA/C++ code to Python, providing technical feedback and corrections. ● Collaborate with prompt engineers and researchers to develop test prompts that reflect real-world CUDA/PyTorch tasks. ● Participate in RLHF workflows, ranking LLM responses and justifying ranking decisions clearly. ● Debug and review translated Python code for correctness, readability, and consistency with industry standards. ● Maintain technical documentation to support reproducibility and code clarity. ● Propose enhancements to prompt structure or conversion approaches based on common LLM failure patterns.
Requirements: ● 5+ years of overall work experience, with at least 3 years of relevant experience in Python and 2+ years in CUDA/C++. ● Strong hands-on experience with Python, especially in scientific computing using PyTorch and NumPy. ● Solid understanding of CUDA programming concepts and C++ fundamentals. ● Demonstrated ability to analyze CUDA kernels and accurately reproduce them in Python. ● Familiarity with GPU computation, parallelism, and performance-aware coding practices. ● Strong debugging skills and attention to numerical consistency when porting logic across languages. ● Experience evaluating AI-generated code or participating in LLM tuning is a plus. ● Ability to communicate technical feedback clearly and constructively. ● Fluent in conversational and written English communication skills.
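As a hedged example of the translation work described (the kernel is a generic illustration, not customer code): an element-wise SAXPY-style CUDA kernel shown as a comment, followed by a logically equivalent vectorized PyTorch/NumPy version with a numerical parity check.

```python
# Illustrative translation task: a generic element-wise CUDA kernel and an
# equivalent vectorized Python version. The kernel is an example, not client code.
#
#   // CUDA/C++ (reference behaviour)
#   __global__ void saxpy(int n, float a, const float* x, float* y) {
#       int i = blockIdx.x * blockDim.x + threadIdx.x;
#       if (i < n) y[i] = a * x[i] + y[i];
#   }
import numpy as np
import torch

def saxpy_torch(a: float, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # The per-thread guarded loop collapses to one vectorized expression.
    return a * x + y

# Numerical parity check against a NumPy reference, as in an evaluation workflow.
x_np = np.random.rand(1024).astype(np.float32)
y_np = np.random.rand(1024).astype(np.float32)
expected = 2.0 * x_np + y_np
actual = saxpy_torch(2.0, torch.from_numpy(x_np), torch.from_numpy(y_np)).numpy()
print("max abs error:", np.abs(expected - actual).max())
```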
Posted 6 days ago
0.0 - 5.0 years
0 - 0 Lacs
Hyderabad, Telangana
On-site
Job Title: Python + AIML Developer Location: Hyderabad (On-Site) Job Type: Full-Time Experience: 4 to 7 Years Notice Period: Immediate to 15 Days Job Summary We are looking for a talented and motivated Python Developer with strong experience in building APIs using FastAPI and Flask. The ideal candidate will possess excellent problem-solving and communication skills and a passion for delivering high-quality, scalable backend solutions. You will play a key role in developing robust backend services, integrating APIs, and collaborating with frontend and QA teams to deliver production-ready software. Key Responsibilities Design, develop, and maintain backend services using FastAPI and Flask. Write clean, reusable, and efficient Python code following best practices. Work with Large Language Models (LLMs) and contribute to building advanced AI-driven solutions. Collaborate with cross-functional teams to gather requirements and translate them into technical implementations. Optimize applications for maximum speed, scalability, and reliability. Implement secure API solutions and ensure compliance with data protection standards. Develop and maintain unit tests, integration tests, and documentation for code, APIs, and system architecture. Participate in code reviews and contribute to continuous improvement of development processes. Required Skills & Qualifications Strong programming skills in Python with hands-on experience in backend development. Proficiency in developing RESTful APIs using FastAPI and Flask frameworks. Solid understanding of REST principles and asynchronous programming in Python. Good communication skills and the ability to troubleshoot and solve complex problems effectively. Experience with version control tools like Git. Eagerness to learn and work with LLMs, Vector Databases, and other modern AI technologies. Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience. Nice to Have Experience with LLMs, Prompt Engineering, and Vector Databases. Understanding of Transformer architecture, Embeddings, and Retrieval-Augmented Generation (RAG). Familiarity with data processing libraries like NumPy and Pandas. Knowledge of Docker for containerized application development and deployment. Skills Python, FastAPI, Flask, REST APIs, Asynchronous Programming, Git, API Security, Data Protection, LLMs, Vector DBs, Transformers, RAG, NumPy, Pandas, Docker. If you are passionate about backend development and eager to work on innovative AI solutions, we would love to hear from you! Job Type: Full-time Pay: ₹10,764.55 - ₹65,865.68 per month Benefits: Flexible schedule Health insurance Paid time off Provident Fund Schedule: Day shift Monday to Friday Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Preferred) Experience: Coding: 5 years (Preferred) Flask Api: 7 years (Required) Rest API: 7 years (Required) Git: 5 years (Required) Pandas/Numpy/Dockers: 7 years (Required) Python developer: 7 years (Required) AIML: 5 years (Required) Machine learning: 5 years (Required) Fast API: 5 years (Required) Generative AI: 5 years (Required) NLP: 5 years (Required) LLM: 5 years (Required) Work Location: In person
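A minimal FastAPI sketch of the kind of backend endpoint this posting describes; the route, schema, and logic are placeholders rather than the company's actual services.

```python
# Minimal FastAPI sketch with an async route and a Pydantic request model.
# The /summarize route and its logic are placeholders for illustration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    text: str
    max_words: int = 50

@app.post("/summarize")
async def summarize(req: SummarizeRequest):
    # A real implementation would call an LLM or other service here; this just truncates.
    words = req.text.split()[: req.max_words]
    return {"summary": " ".join(words)}

# Run with: uvicorn main:app --reload   (assuming this file is saved as main.py)
```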
Posted 6 days ago
0.0 - 4.0 years
0 Lacs
Karnataka
On-site
Bengaluru, Karnataka, India Sub Business Unit Engineer Job posted on Jul 28, 2025 Employee Type Permanent Experience range (Years) 2 years - 4 years Core Qualifications Experience: 2-4 years of professional experience in the full software development lifecycle. Python Proficiency: Solid, hands-on coding experience in Python. Computer Science Fundamentals: A deep and practical understanding of Data Structures, Algorithms, and Software Design Patterns. Version Control: Proficiency with Git or other distributed version control systems. Analytical Mindset: Excellent analytical, debugging, and problem-solving abilities. Java Knowledge: Basic understanding of Java concepts and syntax. Testing : Familiarity with testing tools and a commitment to Test-Driven Development (TDD) principles. Education: Bachelor's degree in Computer Science, Computer Engineering, or a related technical field. Good to have Generative AI : Experience with frameworks like LangChain, Google ADK/Autogen, or direct experience with APIs from OpenAI, Google (Gemini), or other LLM providers. Web Stacks : Strong knowledge of Python web frameworks like Flask or FastAPI, Celery etc. Data Engineering : Hands-on experience with NumPy, Pandas/Polars, data pipeline tools (e.g., Apache Spark, Kafka), and visualization. Databases : Proficiency with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Elasticsearch, Redis) databases. DevOps & Cloud : Experience with AWS (including EC2, Lambda, EKS/ECS), Docker, and CI/CD best practices and pipelines (e.g., GitLab CI). Operating Systems : Good working knowledge of programming in a UNIX/Linux environment. FinTech Domain : Prior experience or interest in the financial technology sector is a plus. Reporting to Technical Lead
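For the testing and TDD expectation in this listing, a tiny illustrative sketch of a function with pytest-style tests; the function and test cases are made up.

```python
# Tiny TDD-style sketch: a function plus pytest tests written against its contract.
# The function and cases are illustrative only.
def moving_average(values: list[float], window: int) -> list[float]:
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def test_moving_average_basic():
    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]

def test_moving_average_rejects_bad_window():
    import pytest
    with pytest.raises(ValueError):
        moving_average([1, 2], 0)
```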
Posted 6 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking an experienced and driven Senior AI/ML Engineer with 5-8 years of experience in AI/ML – Predictive ML, GenAI and Agentic AI. The ideal candidate should have a strong background in developing and deploying machine learning models, as well as a passion for innovation and problem-solving. As an AI/ML Solution Engineer, build AI/ML, Gen AI, Agentic AI empowered practical in-depth solutions for solving customer’s business problems. As an AI/ML Solution Engineer, design, develop, and deploy machine learning models and algorithms. Conduct research and stay up-to-date with the latest advancements in AI/ML, GenAI, and Agentic AI. Lead a team of junior AI engineers, providing direction and support. Requirements Bachelor’s or Master’s degree in computer science / AIML / Data Science. 5 to 8 years of overall experience and hands-on experience with the design and implementation of Machine Learning models, Deep Learning models, and LLM models for solving business problems. Proven experience working with Generative AI technologies, including prompt engineering, fine-tuning large language models (LLMs), embeddings, vector databases (e.g., FAISS, Pinecone), and Retrieval-Augmented Generation (RAG) systems. Expertise in Python (NumPy, Scikit-learn, Pandas), TensorFlow, PyTorch, transformers (e.g., Hugging Face), or MLlib. Experience in working on Agentic AI frameworks like Autogen, Langgraph, CrewAI, Agentforce etc Expertise in cloud-based data and AI solution design and implementation using GCP / AWS / Azure, including the use of their Gen AI services. Good experience in building complex and scalable ML and Gen AI solutions and deploying them into production environments. Experience with scripting in SQL, extracting large datasets, and designing ETL flows. Excellent problem-solving and analytical skills with the ability to translate business requirements into data science and Gen AI solutions. Effective communication skills, with the ability to convey complex technical concepts to both technical and non-technical stakeholders.
Posted 1 week ago
5.0 years
7 - 9 Lacs
Bengaluru
On-site
About us At ExxonMobil, our vision is to lead in energy innovations that advance modern living and a net-zero future. As one of the world’s largest publicly traded energy and chemical companies, we are powered by a unique and diverse workforce fueled by the pride in what we do and what we stand for. The success of our Upstream, Product Solutions and Low Carbon Solutions businesses is the result of the talent, curiosity and drive of our people. They bring solutions every day to optimize our strategy in energy, chemicals, lubricants and lower-emissions technologies. We invite you to bring your ideas to ExxonMobil to help create sustainable solutions that improve quality of life and meet society’s evolving needs. Learn more about our What and our Why and how we can work together . What role you will play in our team We are seeking candidates to tackle and lead challenging commercial problems in pricing, marketing, sales, transportation, storage, and distribution of hydrocarbons. The ideal candidate will understand both the commercial/economic and technical aspects of the oil and gas sector and will be able to formulate and solve oil and gas industry problems using machine learning, time series forecasting, statistical analysis, and programming skills. What you will do Collaborate with data scientists, data analysts, software developers, and business representatives from chemical, lubricants, and fuels value chains globally to develop, deliver, and apply data-driven tools, models, or software to support our businesses. Utilize machine learning, time series analysis, pattern recognition, statistical analysis, design of experiments, and data visualizations, along with domain knowledge, to solve commercial problems and provide business insights. Design, build, and execute studies using proprietary or commercial tools to provide insights, including calibrating models to pricing/sales/demand data and providing optimized recommendations. About You Skills and Qualifications Master’s, or PhD degree from a recognized university in Data Science, Computer Science, IT, Applied Mathematics, Statistics, Engineering, or related disciplines with a minimum GPA of 7.0 (out of 10.0). At least 5 years of experience in developing, applying, and validating data-driven tools to model complex systems. In-depth knowledge and practical experience in statistical analysis techniques (e.g., classification, regression, time-series, Bayesian techniques) and machine learning techniques (e.g., decision trees, ensemble methods, deep learning, neural networks, validation methods). Practical experience in the full machine learning lifecycle, from problem formulation and data acquisition to model building and deployment at enterprise scale. Deep conceptual and mathematical understanding of algorithms, models, model assessment techniques, solution development and explainable AI. Extensive knowledge and experience in code design, testing, and ML Ops practices. Specialization in at least one sub-domain, such as time series forecasting, Bayesian skills or NLP-GenAI. Proficiency in Python, including packages such as NumPy, pandas, scikit-learn, Keras, TensorFlow, and PyTorch. Experience with software engineering practices, agile methodologies, DevOps, and version control. Experience working with Azure Databricks or other data science frameworks. Familiarity with software testing and development practices (Agile). Experience with data visualization tools (e.g., Tableau, Power BI). 
Preferred Qualifications / Experience Commercial experience encompassing pricing, demand forecasting, and end-to-end value chain optimization are preferred. Prior experience in commercial software development or working in commercial software teams. Ability to identify and scope data science opportunities based on business needs. Strong communication and interpersonal skills, with the ability to work collaboratively in a global team environment. Excellent problem-solving skills and attention to detail. Your benefits An ExxonMobil career is one designed to last. Our commitment to you runs deep our employees grow personally and professionally, with benefits built on our core categories of health, security, finance and life. We offer you: Competitive compensation Medical plans, maternity leave and benefits, life, accidental death and dismemberment benefits Retirement benefits Global networking & cross-functional opportunities Annual vacations & holidays Day care assistance program Training and development program Tuition assistance program Workplace flexibility policy Relocation program Transportation facility Please note benefits may change from time to time without notice, subject to applicable laws. The benefits programs are based on the Company’s eligibility guidelines. Stay connected with us Learn more about ExxonMobil in India, visit ExxonMobil India and Energy Factor India . Follow us on LinkedIn and ExxonMobil (@exxonmobil) • Instagram photos and videos Like us on Facebook Subscribe our channel at YouTube EEO Statement ExxonMobil is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, national origin or disability status. Business solicitation and recruiting scams ExxonMobil does not use recruiting or placement agencies that charge candidates an advance fee of any kind (e.g., placement fees, immigration processing fees, etc.). Follow the LINK to understand more about recruitment scams in the name of ExxonMobil. Nothing herein is intended to override the corporate separateness of local entities. Working relationships discussed herein do not necessarily represent a reporting connection, but may reflect a functional guidance, stewardship, or service relationship. Exxon Mobil Corporation has numerous affiliates, many with names that include ExxonMobil, Exxon, Esso and Mobil. For convenience and simplicity, those terms and terms like corporation, company, our, we and its are sometimes used as abbreviated references to specific affiliates or affiliate groups. Abbreviated references describing global or regional operational organizations and global or regional business lines are also sometimes used for convenience and simplicity. Similarly, ExxonMobil has business relationships with thousands of customers, suppliers, governments, and others. For convenience and simplicity, words like venture, joint venture, partnership, co-venturer, and partner are used to indicate business relationships involving common activities and interests, and those words may not indicate precise legal relationships.
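To make the time-series forecasting skill named in this posting concrete, a small hedged sketch using lag features and scikit-learn on synthetic demand data; this is a generic example, not ExxonMobil data or tooling.

```python
# Illustrative time-series sketch: lag features + linear regression on synthetic
# demand data. This is a generic example, not proprietary tooling or data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
demand = 100 + np.cumsum(rng.normal(0, 2, 200))          # synthetic demand series
s = pd.Series(demand)

df = pd.DataFrame({"lag1": s.shift(1), "lag7": s.shift(7), "y": s}).dropna()
train, test = df.iloc[:-30], df.iloc[-30:]               # hold out the last 30 points

model = LinearRegression().fit(train[["lag1", "lag7"]], train["y"])
pred = model.predict(test[["lag1", "lag7"]])
rmse = float(np.sqrt(np.mean((pred - test["y"].to_numpy()) ** 2)))
print("holdout RMSE:", round(rmse, 2))
```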
Posted 1 week ago
4.0 years
40 - 50 Lacs
Madurai, Tamil Nadu, India
Remote
Experience : 4.00 + years Salary : INR 4000000-5000000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch Crop.Photo is Looking for: Our engineers don’t just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges — this role is for you. You’ll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale. What You’ll Do Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, out-painting and GenAI workflows for background fills Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines. Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch. Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision. Integrate your models into our production web app (AWS based Python/Java backend) and optimize them for latency, memory, and throughput Frame problems when specs are vague — you’ll help define what “good” looks like, and then build it Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain What You’ll Need 2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions) Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques. 1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, Gemini and how to compose multi-modal pipelines Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing. Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement. Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks. Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc. Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda). Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing. 
A customer-centric approach — you think about how your work affects end users and product experience, not just model performance A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket Who You Are You’ve built systems — not just prototypes You care about both ML results and the system’s behavior in production You’re comfortable taking a rough business goal and shaping the technical path to get there You’re energized by product-focused AI work — things that users feel and rely on You’ve worked in or want to work in a startup-grade environment: messy, fast, and impactful How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
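As a generic illustration of the pixel-level transformations this role describes (not Crop.Photo's actual pipeline; the input file name is a placeholder), a small OpenCV/NumPy smart-crop sketch using thresholding, contours, and a padded bounding box.

```python
# Generic smart-crop sketch: threshold, find the largest contour, and crop to its
# bounding box with OpenCV/NumPy. The input path is a placeholder, not product code.
import cv2
import numpy as np

img = cv2.imread("product.jpg")                          # BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)             # assume the main subject
x, y, w, h = cv2.boundingRect(largest)

pad = 20                                                  # keep some margin around the subject
crop = img[max(0, y - pad): y + h + pad, max(0, x - pad): x + w + pad]
cv2.imwrite("product_cropped.jpg", crop)
```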
Posted 1 week ago
0 years
0 Lacs
Bengaluru
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. Cognitive Service and IBM AI Developer Required Skills on Cognitive Services, IBM and Python Natural Language Processing (NLP) Intent recognition and entity extraction NLP libraries: spaCy, Rasa NLU, NLTK, Transformers (Hugging Face) Language model fine-tuning (optional for advanced bots) Conversational Design Dialogue flow design Context management Multi-turn conversation handling Backend Development Programming languages: Python, Node.js Frameworks: IBM Watson/ Rasa/Botpress/Microsoft Bot Framework (self-hosted) API development and integration (RESTful APIs) Frontend & UI Integration Web chat UIs (React, Angular, Vue) Messaging platform integration (e.g., custom web chat, WhatsApp via Twilio, etc.) Voice interface (optional): integration with speech-to-text and text-to-speech engines Watson Assistant: Designing, training, and deploying conversational agents (chatbots and virtual assistants). Watson Discovery: Implementing intelligent document search and insights extraction. Watson Natural Language Understanding (NLU): Sentiment analysis, emotion detection, and entity extraction. Watson Speech Services: Speech-to-text and text-to-speech integration. Watson Knowledge Studio: Building custom machine learning models for domain-specific NLP. Watson OpenScale: Monitoring AI model performance, fairness, and explainability. Core Python: Data structures, OOP, exception handling. API Integration: Consuming REST APIs (e.g., Azure SDKs). Data Processing: Using pandas, NumPy, and json for data manipulation. AI/ML Libraries: Familiarity with scikit-learn, transformers, or OpenAI SDKs. Required Skills on Other Skills (Front End) Component-Based Architecture: Building reusable functional components. Hooks: useState, useEffect, useContext, and custom hooks. State Management: Context API, Redux Toolkit, or Zustand. API Integration: Fetching and displaying data from API services. Soft Skills Excellent Communication Skills Team Player Self-starter and highly motivated Ability to handle high pressure and fast paced situations Excellent presentation skills Ability to work with globally distributed teams Roles and Responsibilities: Understand existing application architecture and solution design Design individual components and develop the components Work with other architects, leads, team members in an agile scrum environment Hands on development Design and develop applications that can be hosted on Azure cloud Design and develop framework and core functionality Identify the gaps and come up with working solutions Understand enterprise application design framework and processes Lead or Mentor junior and/or mid-level developers Review code and establish best practices Look out for latest technologies and match up with EY use case and solve business problems efficiently Ability to look at the big picture Proven experience in designing highly secured and scalable web applications on Azure cloud Keep management up to date with the progress Work under Agile design, development framework Good hands on development experience required EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. 
Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
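For the intent recognition and entity extraction stack listed in this posting, a minimal spaCy sketch; it assumes the en_core_web_sm model has been downloaded, and the sample sentence is an example.

```python
# Minimal spaCy entity-extraction sketch; assumes `python -m spacy download en_core_web_sm`
# has been run. The sample sentence is illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Book a meeting with Acme Corp in Bengaluru next Tuesday at 3pm.")

for ent in doc.ents:
    print(ent.text, "->", ent.label_)    # e.g., ORG, GPE, DATE, TIME
```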
Posted 1 week ago
4.0 years
40 - 50 Lacs
Chennai, Tamil Nadu, India
Remote
Experience: 4.00+ years
Salary: INR 4000000-5000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients, Crop.Photo.)
What do you need for this opportunity?
Must-have skills: Customer-Centric Approach, NumPy, OpenCV, PIL, PyTorch
Crop.Photo is looking for:
Our engineers don’t just write code. They frame product logic, shape UX behavior, and ship features. No PMs handing down tickets. No design handoffs. If you think like an owner and love combining deep ML logic with hard product edges — this role is for you. You’ll be working on systems focused on the transformation and generation of millions of visual assets for small-to-large enterprises at scale.
What You’ll Do:
Build and own AI-backed features end to end, from ideation to production — including layout logic, smart cropping, visual enhancement, out-painting, and GenAI workflows for background fills
Design scalable APIs that wrap vision models like BiRefNet, YOLOv8, Grounding DINO, SAM, CLIP, ControlNet, etc., into batch and real-time pipelines
Write production-grade Python code to manipulate and transform image data using NumPy, OpenCV (cv2), PIL, and PyTorch
Handle pixel-level transformations — from custom masks and color space conversions to geometric warps and contour ops — with speed and precision (an illustrative sketch of this kind of work appears just after this listing)
Integrate your models into our production web app (AWS-based Python/Java backend) and optimize them for latency, memory, and throughput
Frame problems when specs are vague — you’ll help define what “good” looks like, and then build it
Collaborate with product, UX, and other engineers without relying on formal handoffs — you own your domain
What You’ll Need:
2–3 years of hands-on experience with vision and image generation models such as YOLO, Grounding DINO, SAM, CLIP, Stable Diffusion, VITON, or TryOnGAN — including experience with inpainting and outpainting workflows using Stable Diffusion pipelines (e.g., Diffusers, InvokeAI, or custom-built solutions)
Strong hands-on knowledge of NumPy, OpenCV, PIL, PyTorch, and image visualization/debugging techniques
1–2 years of experience working with popular LLM APIs such as OpenAI, Anthropic, and Gemini, and how to compose multi-modal pipelines
Solid grasp of production model integration — model loading, GPU/CPU optimization, async inference, caching, and batch processing
Experience solving real-world visual problems like object detection, segmentation, composition, or enhancement
Ability to debug and diagnose visual output errors — e.g., weird segmentation artifacts, off-center crops, broken masks
Deep understanding of image processing in Python: array slicing, color formats, augmentation, geometric transforms, contour detection, etc.
Experience building and deploying FastAPI services and containerizing them with Docker for AWS-based infra (ECS, EC2/GPU, Lambda)
A customer-centric approach — you think about how your work affects end users and product experience, not just model performance
A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed
The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket
Who You Are:
You’ve built systems — not just prototypes
You care about both ML results and the system’s behavior in production
You’re comfortable taking a rough business goal and shaping the technical path to get there
You’re energized by product-focused AI work — things that users feel and rely on
You’ve worked in or want to work in a startup-grade environment: messy, fast, and impactful
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the Screening Form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
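The pixel-level image work described in this listing is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration (the file names, the HSV threshold, and the 512x512 output size are invented for the example, and OpenCV 4.x is assumed) of the kind of mask, color-space, crop, and contour operations the role mentions, using only standard NumPy, OpenCV, and PIL calls.

```python
import cv2
import numpy as np
from PIL import Image

# Load with PIL, convert to the BGR layout OpenCV expects.
# "input.jpg" is a placeholder path, not an asset referenced by the listing.
rgb = np.array(Image.open("input.jpg").convert("RGB"))
img = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)

# Color-space conversion and a simple threshold mask (illustrative HSV range).
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

# Contour ops: keep the largest external contour (OpenCV 4.x return signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))

    # Geometric transform: crop to the bounding box, then resize (a crude "smart crop").
    crop = cv2.resize(img[y:y + h, x:x + w], (512, 512), interpolation=cv2.INTER_AREA)
    crop_mask = cv2.resize(mask[y:y + h, x:x + w], (512, 512), interpolation=cv2.INTER_NEAREST)

    # Pixel-level masking: zero out everything outside the mask.
    foreground = cv2.bitwise_and(crop, crop, mask=crop_mask)

    # Back to RGB/PIL for saving or downstream PyTorch preprocessing.
    Image.fromarray(cv2.cvtColor(foreground, cv2.COLOR_BGR2RGB)).save("foreground.png")
```

In a production pipeline of the kind the listing describes, the same array-level operations would typically run behind a batch or real-time API rather than a standalone script, but the mechanics are the same.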
Posted 1 week ago
0 years
9 - 12 Lacs
Surat
On-site
Mandatory technologies with hands-on experience: Python, Django, FastAPI, PostgreSQL, Pandas, NumPy, Docker, Git (a brief FastAPI sketch follows this listing)
Should have worked with Django (minimum 3 years) and FastAPI (around 1 year)
Job timing is 4 PM to 1 AM (US time), with a 1-hour break
Working days are Monday to Friday
Job Type: Full-time
Pay: ₹80,000.00 - ₹100,000.00 per month
Work Location: In person
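Since this listing centers on Python, FastAPI, Pandas, and NumPy, here is a minimal, hypothetical FastAPI sketch (the /summary endpoint and its fields are invented for illustration, not taken from the employer) showing how that slice of the stack typically fits together.

```python
from typing import List

import numpy as np
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-stats-service")  # hypothetical service name


class Numbers(BaseModel):
    values: List[float]


@app.post("/summary")
def summarize(payload: Numbers) -> dict:
    # Tiny descriptive-statistics response built with Pandas/NumPy.
    s = pd.Series(payload.values, dtype="float64")
    if s.empty:
        return {"count": 0}
    return {
        "count": int(s.count()),
        "mean": float(s.mean()),
        "std": float(s.std(ddof=1)) if len(s) > 1 else 0.0,
        "p95": float(np.percentile(s, 95)),
    }
```

Assuming the file is named app.py, it can be run locally with `uvicorn app:app --reload`; packaging it into a Docker image based on a slim Python base is the usual next step for the Docker requirement.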
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Scientist / ML Scientist
About Auxia:
Auxia is an AI-powered Growth and Personalization Platform that is reinventing how companies activate, engage, retain, and monetize their customers. Auxia’s software delivers real-time personalization using ML that treats each customer as a unique individual, effectively creating a “cohort-of-one”. With Auxia, hundreds of personalized campaigns can be generated and deployed in minutes, doing away with static, tedious, rules-based segmentation and targeting. Auxia is easy to use, integrates with most major data and customer engagement tools, and frees up valuable internal resources. Our team started Auxia based on our collective expertise in advancing product growth at companies like Google, Meta, and Lyft. We saw a paradigm shift unfolding where the most successful teams drove significant revenue improvements and cost reductions by shifting from static, rules-based segmentation and targeting to real-time decision-making driven by machine learning.
About the role:
As a core member of the Auxia Data Science team, you will play a significant role in driving the data science vision and roadmap and in building and improving our machine learning models and agentic AI products. You will perform research at the intersection of recommender systems, causal inference, transformers, foundational models (LLMs), content understanding, and reinforcement learning, and develop production-grade models serving decisions to tens of millions of users per day. You will work at the intersection of artificial intelligence, process automation, and workflow optimization to create intelligent agents that can understand objectives, make decisions, and adapt to changing requirements in real time. You will also be responsible for performing in-depth analysis of customer data, deriving insights to better understand customer problems, and collaborating cross-functionally with engineering, product, and business teams.
Responsibilities:
Design, implement, and research machine learning algorithms to solve growth and personalization problems, and continuously fine-tune and improve them
Design and develop AI-driven autonomous agents to execute complex workflows across various applications and objectives
Implement and improve offline model evaluation methods (see the illustrative sketch after this listing)
Analyze large business datasets to extract knowledge and communicate insights on critical business questions by leveraging traditional statistical or machine learning methodologies
Present and communicate data-based insights and recommendations to product, growth, and data science teams at Auxia’s customers
Stay updated on ML research through literature reviews, conferences, and networking
Qualifications:
Master’s or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field
7+ years of experience in training, improving, and deploying ML models and developing production software systems such as data pipelines or dashboards, preferably at fast-growing technology companies
Proficiency in ML frameworks (TensorFlow/PyTorch/JAX) and the Python data science ecosystem (NumPy, SciPy, Pandas, etc.)
Experience with LLMs, RAG, information retrieval, etc., in building production-grade solutions
Experience in recommender systems, feed ranking, and optimization
Experience in causal inference techniques and experimental design
Experience working with cloud services for consuming and managing large-scale datasets
Thrives in ambiguity, has an ownership mindset, and is willing to ‘get your hands dirty’ outside the scope of your role (e.g., product vision, strategic roadmap, engineering, etc.)
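The offline model evaluation and recommender-systems items above lend themselves to a small example. The following is a generic sketch (the user and item IDs are invented; this is not Auxia's actual evaluation code) of precision@k and recall@k, two standard offline metrics for ranked recommendations.

```python
import numpy as np


def precision_recall_at_k(ranked_items, relevant_items, k=5):
    """Precision@k and recall@k for a single user's ranked recommendations."""
    top_k = list(ranked_items)[:k]
    hits = sum(1 for item in top_k if item in set(relevant_items))
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall


# Toy data: model-ranked item IDs vs. items each user actually engaged with.
users = {
    "u1": (["a", "b", "c", "d", "e"], {"b", "e"}),
    "u2": (["x", "y", "z", "q", "r"], {"q"}),
}

per_user = [precision_recall_at_k(ranked, relevant, k=3) for ranked, relevant in users.values()]
mean_precision, mean_recall = np.mean(per_user, axis=0)
print(f"precision@3={mean_precision:.3f} recall@3={mean_recall:.3f}")
# -> precision@3=0.167 recall@3=0.250
```

In practice these metrics would be computed over held-out interaction logs and compared across model variants before anything is promoted to online experiments.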
Posted 1 week ago