Jobs
Interviews

8735 PyTorch Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

India

Remote

The Computer Vision Consultant will play a pivotal role in developing and optimizing algorithms for image processing and machine learning applications. The role demands a strong background in programming, image processing techniques, and deep learning methodologies. The ideal candidate will have a demonstrated track record of using a range of tools and libraries to deliver computer vision projects.

Main Responsibilities:
- Develop and deploy computer vision algorithms using Python.
- Implement graphical user interfaces with PyQt.
- Use CUDA and GPUs for advanced algorithm training.
- Carry out image processing tasks, including edge detection and camera calibration.
- Apply deep learning frameworks for object detection and segmentation.

Key Requirements:
- 3 to 5 years of experience in Python programming.
- Proficiency in OOP concepts and threading.
- Hands-on experience with the NumPy, SciPy, and OpenCV libraries.
- Familiarity with 2D/3D LIDAR data analysis.
- Experience with ML libraries such as TensorFlow, PyTorch, and Keras.
- Knowledge of object detection methods (YOLO, SSD, Faster R-CNN, R-CNN).
- Well-versed in semantic segmentation algorithms.

Nice to Have:
- Experience with image transformation and stitching techniques.

Other Details: This position is remote and is expected to be a long-term computer vision consulting engagement spanning a diverse range of projects across industries.
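The edge-detection work this role mentions can be sketched with a plain Sobel filter. This is a minimal NumPy illustration, not production OpenCV code (which would typically call `cv2.Sobel` or `cv2.Canny`); the tiny step-edge image and values are made up for demonstration:

```python
import numpy as np

def sobel_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel kernels (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal gradient
            gy[i, j] = (patch * ky).sum()   # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge: left half dark (0), right half bright (1).
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
# The response peaks on the columns adjacent to the step and is
# zero in the flat dark region.
print(mag[2, 2], mag[2, 3], mag[2, 0])  # 4.0 4.0 0.0
```

A real pipeline would follow this with thresholding or non-maximum suppression, which is what Canny edge detection adds on top of these gradients.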

Posted 19 hours ago

Apply

6.0 - 8.0 years

22 - 23 Lacs

Pune, Maharashtra, India

On-site

Company Description
Optimum Data Analytics is a strategic technology partner delivering reliable, turnkey AI solutions. Our streamlined approach to development ensures high-quality results and client satisfaction. We bring experience and clarity to organizations, powering every human decision with analytics and AI. Our team consists of statisticians, computer science engineers, data scientists, and product managers. With expertise, flexibility, and cultural alignment, we understand the business, analytics, and data management imperatives of your organization. Our goal is to change how AI/ML is approached in the service sector and to deliver outcomes that matter. We provide best-in-class services that increase profit for businesses and deliver improved value for customers, helping businesses grow, transform, and achieve their objectives.

Job Details
Position: ML Engineer
Experience: 6-8 years
Location: Pune/Indore office
Work Mode: On-site
Notice Period: Immediate joiner to 15 days

Job Summary
We are looking for highly motivated and experienced Machine Learning Engineers to join our advanced analytics and AI team. The ideal candidates will have strong proficiency in building, training, and deploying machine learning models at scale using modern ML tools and frameworks. Experience with Large Language Models (LLMs) such as OpenAI's models and Hugging Face Transformers is highly desirable.

Key Responsibilities
- Design, develop, and deploy machine learning models for real-world applications.
- Implement and optimize end-to-end ML pipelines using PySpark and MLflow.
- Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries.
- Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch.
- Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc.
- Collaborate with cross-functional teams, including data engineers, product managers, and software developers.
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficiency with MLflow for experiment tracking and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience with cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Skills: MLflow, LLMs, Python, MLOps, PyTorch, Pandas, scikit-learn, TensorFlow, PySpark, NumPy, machine learning
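The "versioning and reproducibility of ML experiments" requirement can be illustrated in miniature. This toy tracker is a stand-in for MLflow (it is not the MLflow API): it derives a deterministic run ID from the hyperparameters so that reruns with the same configuration map to the same record. All names here are invented for illustration:

```python
import hashlib
import json

class RunTracker:
    """Toy experiment tracker: run IDs are a hash of the (sorted)
    hyperparameters, so identical configs always get identical IDs."""

    def __init__(self):
        self.runs = {}

    def log_run(self, params: dict, metrics: dict) -> str:
        run_id = hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest()[:12]
        self.runs[run_id] = {"params": params, "metrics": metrics}
        return run_id

tracker = RunTracker()
rid = tracker.log_run({"lr": 0.01, "epochs": 10}, {"accuracy": 0.93})
# Same params in a different key order -> same run ID (reproducible).
same = tracker.log_run({"epochs": 10, "lr": 0.01}, {"accuracy": 0.93})
print(rid == same)  # True
```

In real MLflow, `mlflow.log_params` and `mlflow.log_metrics` inside an `mlflow.start_run()` block play this role, with the tracking server persisting runs for comparison.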

Posted 19 hours ago

Apply

6.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills: Airflow, LLMs, MLOps, Generative AI, Python

About The Role
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products. We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
- You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics.
- Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
- You will solve complex, interesting challenges and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
- Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
- Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
- Apply MLOps best practices to deploy and monitor machine learning models in production.
- Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
- Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
- Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
- Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
- Drive innovation by integrating the latest AI/ML techniques into security products and services.
- Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience

AI/ML Expertise
- Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
- Experience with AI frameworks like TensorFlow, PyTorch, and scikit-learn.
- Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
- Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.

Data Engineering
- Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
- Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
- Proficiency with relational and non-relational databases, including ClickHouse and BigQuery.
- Familiarity with vector databases such as Pinecone and pgvector and their application in RAG systems.
- Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
- Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
- Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
- Proficiency in Python, Java, or Scala for data and ML solution development.
- Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
- Proven ability to lead cross-functional teams and mentor engineers.
- Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
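The retrieval half of the RAG systems this role describes can be sketched without any external service. The toy "vector store" below uses hand-written 3-dimensional vectors in place of learned embeddings, and plain cosine similarity in place of a database like Pinecone or pgvector; the documents and vectors are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector database": (embedding, document) pairs. A real system
# would use an embedding model and a dedicated vector store.
store = [
    ([1.0, 0.0, 0.2], "Reset a user password via the admin console."),
    ([0.1, 1.0, 0.0], "Anomalous login volume may indicate credential abuse."),
    ([0.0, 0.2, 1.0], "Rotate API keys every 90 days."),
]

def retrieve(query_vec, k=1):
    """The retrieval step of RAG: top-k documents by cosine similarity,
    to be prepended to an LLM prompt as context."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [doc for _, doc in ranked[:k]]

# A query vector "close to" the second document's embedding.
context = retrieve([0.2, 0.9, 0.1])
print(context[0])
```

The generation step, omitted here, would pass `context` plus the user's question to an LLM; production systems also handle chunking, re-ranking, and index updates.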

Posted 19 hours ago

Apply



2.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Title: Backend Developer

Summary: We are looking for a talented and experienced backend developer to join our team. The ideal candidate will have a strong understanding of Python and related technologies, experience with web frameworks such as Flask or Django, and a working knowledge of both SQL and NoSQL databases.

Responsibilities:
· Design, develop, and maintain backend systems using Python
· Work with front-end developers and other engineers to build and deploy scalable, reliable web applications
· Troubleshoot and debug applications
· Demonstrate an in-depth understanding of the Python software development stacks, ecosystems, frameworks, and tools (such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch) and be able to conceive and write basic algorithms
· Implement security and data protection measures
· Optimize application performance and scalability
· Stay up to date on the latest Python technologies and trends

Qualifications:
· Bachelor's degree in Computer Science or a related field
· 2+ years of experience in backend development using Python
· Experience with web frameworks such as Flask or Django
· Working knowledge of both SQL and NoSQL databases
· Experience with cloud platforms such as AWS or Azure
· Strong problem-solving and analytical skills
· Excellent communication and collaboration skills

Bonus Points: Experience with machine learning or artificial intelligence, experience with DevOps practices, and experience with open-source software.
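The REST-style routing that frameworks like Flask or Django provide can be shown framework-free. This sketch implements only the dispatch idea (method + path mapped to a handler via a decorator, as Flask's `@app.route` does); the routes and payloads are invented, and real HTTP handling is deliberately omitted:

```python
import json

# Registry mapping (HTTP method, path) -> handler function.
ROUTES = {}

def route(method, path):
    """Decorator that registers a handler, Flask-style."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/users")
def list_users():
    return 200, json.dumps(["alice", "bob"])

@route("POST", "/users")
def create_user():
    return 201, json.dumps({"created": True})

def handle(method, path):
    """Dispatch a request to its handler, or return 404."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return 404, json.dumps({"error": "not found"})
    return fn()

status, body = handle("GET", "/users")
print(status, body)  # 200 ["alice", "bob"]
```

In Flask the same shape is `@app.route("/users", methods=["GET"])` on a view function, with the framework supplying the HTTP server, request parsing, and response objects.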

Posted 20 hours ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Surat, Gujarat

On-site

About Us: Founded in 2008, Red & White is Gujarat's leading NSDC- and ISO-certified institute, focused on industry-relevant education and global employability.

Role Overview: We are hiring faculty to teach AI/ML and Data Science, guide projects, mentor students, and stay updated with technology trends.

Key Responsibilities:
- Conduct lectures on AI, Machine Learning, and Data Science.
- Create and update course content and projects.
- Guide students on practical work and research.
- Mentor students in academics and career planning.
- Stay updated with the latest trends in AI/ML and Data Science.
- Evaluate student performance and provide feedback.
- Contribute to curriculum development.

Skills & Tools:
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (must), Pandas, NumPy, Excel.
- ML & AI Tools: scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements:
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.

For further information, please contact us at 7862813693 or via email at career@rnwmultimedia.edu.in.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹40,000.00 per month
Benefits: Cell phone reimbursement, flexible schedule, leave encashment, paid sick time, paid time off
Schedule: Day shift, morning shift
Supplemental Pay: Performance bonus, yearly bonus
Application Question(s): Current salary?
Experience: Teaching/Mentoring: 1 year (required); AI: 1 year (required); ML: 1 year (required); Data Science: 1 year (required)
Location: Surat, Gujarat (required)
Work Location: In person

Posted 20 hours ago

Apply

2.0 - 5.0 years

0 Lacs

Mohali district, India

Remote

Job Description: SDE-II - Python Developer

Job Title: SDE-II - Python Developer
Department: Operations
Location: In-Office
Employment Type: Full-Time

Job Summary
We are looking for an experienced Python Developer to join our dynamic development team. The ideal candidate will have 2 to 5 years of experience building scalable backend applications and APIs with modern Python frameworks. This role requires a strong foundation in object-oriented programming, web technologies, and collaborative software development. You will work closely with the design, frontend, and DevOps teams to deliver robust, high-performance solutions.

Key Responsibilities
• Develop, test, and maintain backend applications using Django, Flask, or FastAPI.
• Build RESTful APIs and integrate third-party services to enhance platform capabilities.
• Use data handling libraries like Pandas and NumPy for efficient data processing.
• Write clean, maintainable, and well-documented code that adheres to industry best practices.
• Participate in code reviews and mentor junior developers.
• Collaborate in Agile teams using Scrum or Kanban workflows.
• Troubleshoot and debug production issues with a proactive, analytical approach.

Required Qualifications
• 2 to 5 years of experience in backend development with Python.
• Proficiency in core and advanced Python concepts, including OOP and asynchronous programming.
• Strong command of at least one Python framework (Django, Flask, or FastAPI).
• Experience with data libraries like Pandas and NumPy.
• Understanding of authentication/authorization mechanisms, middleware, and dependency injection.
• Familiarity with version control systems like Git.
• Comfort working in Linux environments.

Must-Have Skills
• Expertise in backend Python development and web frameworks.
• Strong debugging, problem-solving, and optimization skills.
• Experience with API development and microservices architecture.
• Deep understanding of software design principles and security best practices.

Good-to-Have Skills
• Experience with Generative AI frameworks (e.g., LangChain, Transformers, OpenAI APIs).
• Exposure to machine learning libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Knowledge of containerization tools (Docker, Kubernetes).
• Familiarity with web servers (e.g., Apache, Nginx) and deployment architectures.
• Understanding of asynchronous programming and task queues (e.g., Celery, asyncio).
• Familiarity with Agile practices and tools like Jira or Trello.
• Exposure to CI/CD pipelines and cloud platforms (AWS, GCP, Azure).

Company Overview
We specialize in delivering cutting-edge solutions in custom software, web, and AI development. Our work culture is a unique blend of in-office and remote collaboration, prioritizing our employees above everything else. At our company, you'll find an environment where continuous learning, leadership opportunities, and mutual respect thrive. We are proud to foster a culture where individuals are valued, encouraged to evolve, and supported in achieving their fullest potential.

Benefits and Perks
• Competitive Salary: Earn up to ₹6-10 LPA based on skills and experience.
• Generous Time Off: Benefit from 18 annual holidays to maintain a healthy work-life balance.
• Continuous Learning: Access extensive learning opportunities while working on cutting-edge projects.
• Client Exposure: Gain valuable experience in client-facing roles to enhance your professional growth.
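The asynchronous programming this listing asks about comes down to overlapping I/O waits. This small `asyncio` sketch (the `fetch` coroutine and its delays are invented stand-ins for real DB or HTTP calls) shows the core pattern of running awaitables concurrently with `asyncio.gather`:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (DB query, HTTP request).
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main():
    # gather() runs both coroutines concurrently, so total wall time
    # is roughly the slowest task, not the sum of the delays.
    return await asyncio.gather(
        fetch("invoice", 0.02),
        fetch("user", 0.01),
    )

results = asyncio.run(main())
print(results)  # ['invoice:done', 'user:done']
```

Task queues like Celery address the complementary case, offloading CPU-heavy or long-running work to separate worker processes rather than interleaving waits in one event loop.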

Posted 21 hours ago

Apply

4.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Role & Responsibilities
- 4+ years of experience applying AI to practical uses.
- Develop and train computer vision models for tasks such as: object detection and tracking (YOLO, Faster R-CNN, etc.); image classification, segmentation, and OCR (e.g., PaddleOCR, Tesseract); face recognition/blurring, anomaly detection, etc.
- Optimize models for performance on edge devices (e.g., NVIDIA Jetson, OpenVINO, TensorRT).
- Process and annotate image/video datasets; apply data augmentation techniques.
- Implement and optimize machine learning pipelines and workflows for seamless integration into production systems.
- Design, develop, and deploy ML models for: OCR-based text extraction from scanned documents (PDFs, images); table and line-item detection in invoices, receipts, and forms; named entity recognition (NER) and information classification.
- Evaluate and integrate third-party OCR tools (e.g., Tesseract, Google Vision API, AWS Textract, Azure OCR, PaddleOCR, EasyOCR).
- Develop pre-processing and post-processing pipelines for noisy image/text data.
- Engage with multiple teams and contribute to key decisions; provide solutions to problems that apply across multiple teams.
- Lead the implementation of large language models in AI applications.
- Research and apply cutting-edge AI techniques to enhance system performance.
- Contribute to the development and deployment of AI solutions across various domains.

Requirements
- Proficiency with Large Language Models.
- Strong understanding of statistical analysis and machine learning algorithms; hands-on experience implementing algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Understanding of image processing concepts (thresholding, contour detection, transformations, etc.).
- Experience in model optimization, quantization, or deploying to edge devices (Jetson Nano/Xavier, Coral, etc.).
- Strong programming skills in Python (or C++), with expertise in OpenCV, NumPy, and PyTorch/TensorFlow.
- Experience with computer vision models such as YOLOv5/v8, Mask R-CNN, and DeepSORT.
- Hands-on experience with at least one real-time CV application (e.g., surveillance, retail analytics, industrial inspection, AR/VR).
- Familiarity with video analytics platforms (e.g., DeepStream, Streamlit-based dashboards).
- Experience with MLOps tools (MLflow, ONNX, Triton Inference Server).
- Background in academic CV research or published papers.
- Knowledge of GPU acceleration, CUDA, or hardware integration (cameras, sensors).
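The OCR post-processing work mentioned above (line-item detection in invoices) can be sketched with a regular expression over recognized text. The invoice text, pattern, and field names below are invented for illustration; a real pipeline would feed Tesseract or a cloud OCR API's output into a step like this, plus error correction for misrecognized characters:

```python
import re

# Stand-in for text recognized from a scanned invoice.
ocr_text = """
INVOICE  #1043
Widget A      12.50
Gasket set     3.99
TOTAL         16.49
"""

# A line item is "<description> <amount>", skipping the TOTAL row.
LINE_ITEM = re.compile(
    r"^(?!TOTAL)([A-Za-z][\w ]*?)\s+(\d+\.\d{2})$",
    re.MULTILINE,
)

items = [
    (desc.strip(), float(amount))
    for desc, amount in LINE_ITEM.findall(ocr_text)
]
print(items)  # [('Widget A', 12.5), ('Gasket set', 3.99)]
```

Production extractors typically combine layout models (table detection) with rules like this, then validate the parsed amounts against the document total as a consistency check.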

Posted 21 hours ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: Gurgaon/Bangalore

We're looking for a seasoned AI & CX Lead with 8-10 years of experience designing and delivering AI-driven solutions across industries. This role requires a hands-on expert who can guide teams, drive strategy, and architect scalable systems using advanced AI/ML and GenAI models.

Key Skills & Experience:
- 8-10 years of experience in AI/ML solution architecture
- Strong in Python, TensorFlow, PyTorch, scikit-learn
- Deep knowledge of NLP, Computer Vision, Generative AI, and LLMs
- Experience with cloud platforms (AWS, Azure, GCP) and MLOps pipelines
- Familiarity with Prompt Engineering, RAG, and Agentic AI frameworks
- Ability to lead technical teams and collaborate with stakeholders
- Strong communication, documentation, and solutioning skills

Preferred:
- Experience deploying AI models at scale
- Prior consulting/freelance experience
- Exposure to AI compliance, ethics, and responsible AI practices
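The prompt engineering and RAG familiarity this role asks for often reduces to disciplined prompt assembly: injecting retrieved context ahead of the user's question. The template wording below is illustrative, not any vendor's required format, and the refund document is invented:

```python
# RAG-style prompt assembly: retrieved chunks become the "Context"
# section the model is instructed to answer from.
PROMPT_TEMPLATE = (
    "Answer using only the context below.\n"
    "Context:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)

def build_prompt(chunks: list[str], question: str) -> str:
    """Format retrieved chunks and the user question into one prompt."""
    context = "\n".join(f"- {c}" for c in chunks)
    return PROMPT_TEMPLATE.format(context=context, question=question)

prompt = build_prompt(
    ["Refunds are processed within 5 business days."],
    "How long do refunds take?",
)
print(prompt)
```

The "answer only from context" instruction is a common grounding technique to reduce hallucination; agentic frameworks layer tool selection and multi-step planning on top of this same prompt-construction core.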

Posted 21 hours ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Role Overview: We are seeking a highly motivated Senior Developer with 7+ years of experience in Full Stack Development to join our team and help build intelligent, scalable web applications. This role blends software engineering with cutting-edge AI/ML integration, offering an exciting opportunity to work on impactful projects in a fast-paced environment. Key Responsibilities: Design, develop, and maintain robust web applications using Python, Django, and Flask. Integrate AI/ML models into production environments, ensuring performance and scalability. Write high-quality, reusable code and implement best practices for development and deployment. Collaborate with data scientists and engineers to translate business requirements into technical solutions. Debug, test, and optimize applications for improved performance and reliability. Contribute to system architecture discussions and propose enhancements. Mentor junior developers (if applicable) and actively participate in code reviews. Stay abreast of advancements in AI, web frameworks, and related technologies. Required Qualifications: 7+ years of professional experience in Software Development. (with focus in Python preferred) Proven expertise in building applications with Django and Flask frameworks. Hands-on experience with AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn) and model deployment. Strong understanding of RESTful APIs, databases (e.g., PostgreSQL, MongoDB), and ORM tools. Proficiency with version control systems like Git. Ability to work independently and manage tasks with minimal supervision. Excellent problem-solving skills and attention to detail. Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience). Preferred Qualifications: Experience with containerization tools like Docker or Kubernetes. Familiarity with cloud platforms (e.g., AWS, GCP, Azure) for deploying applications. 
Knowledge of front-end technologies (e.g., HTML, CSS, JavaScript) is a plus. Contributions to open-source projects or a strong GitHub portfolio.

Posted 21 hours ago

Apply

0.0 - 1.0 years

0 - 0 Lacs

Surat, Gujarat

On-site

We are looking for a highly skilled Sr. Python Developer with strong expertise in Python frameworks and experience in AI/ML model development. The ideal candidate should be passionate about building scalable applications, working with data-driven systems, and implementing intelligent solutions. Design, develop, and maintain scalable Python applications. Work with Python frameworks (e.g., Django, Flask, FastAPI). Develop and integrate AI/ML models into production systems. Write clean, efficient, and reusable code following best practices. Collaborate with cross-functional teams (Data Scientists, DevOps, Frontend, etc.). Debug, optimize, and ensure high performance of applications. Stay updated with emerging technologies in the AI/ML and Python ecosystem. Required Skills & Qualifications 3 – 5 years of professional experience in Python development. Strong knowledge of Python frameworks (Django, Flask, FastAPI). Experience with AI/ML algorithms and libraries (TensorFlow, PyTorch, Scikit-learn, etc.). Hands-on experience with REST APIs and SQL/NoSQL databases. Familiarity with cloud platforms (AWS, GCP, or Azure) is a plus. Good understanding of software development best practices (Agile, Git, CI/CD). Strong problem-solving skills and ability to work independently. Note: This is a full-time, work-from-office role; the job location is Silver Trade Center, Utran, Surat. Job Type: Full-time Pay: ₹25,000.00 - ₹75,000.00 per month Location Type: In-person Schedule: Day shift Experience: Python: 3 years (Required) AI/ML: 1 year (Required) Location: Surat, Gujarat (Required) Work Location: In person

Posted 22 hours ago

Apply

2.0 - 4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: AI/ML Ops Engineer Location: Ahmedabad - Onsite Duration: 2-4 years experience (Candidates below 2 years - PLEASE DO NOT APPLY) About the Role We are seeking an experienced AI/ML Ops Engineer to join our team and drive the development, deployment, and operationalization of machine learning and large language model (LLM) systems. You will be responsible for building scalable ML pipelines, enabling intelligent retrieval-augmented generation (RAG) capabilities, and deploying services that power intelligent enterprise applications. Key Responsibilities Develop and maintain machine learning models to forecast user behavior using structured time-series data. Build and optimize end-to-end regression pipelines using advanced libraries such as CatBoost, XGBoost, and LightGBM. Design and implement RAG (Retrieval-Augmented Generation) pipelines for enterprise chatbot systems utilizing tools like LangChain, LLM Router, or custom-built orchestrators. Work with vector databases for semantic document retrieval and reranking. Integrate external APIs into LLM workflows to enable tool/function calling capabilities. Package and deploy ML services using tools such as Docker, FastAPI, or Flask. Collaborate with cross-functional teams to ensure reliable CI/CD deployment and version control practices. Core Technologies & Tools Languages: Python (primary), Bash, SQL ML Libraries: scikit-learn, CatBoost, XGBoost, LightGBM, PyTorch, TensorFlow LLM & RAG Tools: LangChain, Hugging Face Transformers, LlamaIndex, LLM Router Vector Stores: FAISS, Weaviate, Chroma, Pinecone Deployment & APIs: Docker, FastAPI, Flask, Postman Infrastructure & Version Control: Git, GitHub, CI/CD pipelines Preferred Qualifications Proven experience in ML Ops, AI infrastructure, or productionizing ML models. Strong understanding of large-scale ML system design and deployment strategies. Experience working with vector databases and LLM-based applications in production.
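The retrieval step of a RAG pipeline like the one described reduces to nearest-neighbour search over embeddings; in production that is what FAISS, Chroma, Weaviate, or Pinecone provide. A toy sketch using bag-of-words vectors in place of real embeddings (the corpus and scoring scheme are illustrative assumptions, not a production design):

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoice total and line items",
    "employee onboarding handbook",
    "quarterly revenue forecast",
]
print(retrieve("extract invoice line items", docs))
```

A real pipeline swaps `embed` for a learned embedding model and `retrieve` for an approximate nearest-neighbour index, then feeds the retrieved passages into the LLM prompt.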

Posted 22 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Agent Developer Location: All EXL Locations Department: Artificial Intelligence & Data Science Reports To: Director of AI Engineering / Head of Intelligent Automation Position Summary: We are seeking an experienced and innovative AI/ML Agent Developer to design, develop, and deploy intelligent agents within a multi-agent orchestration framework. This role involves building autonomous agents that leverage LLMs, reinforcement learning, prompt engineering, and decision-making strategies to perform complex data and workflow tasks. You’ll work closely with cross-functional teams to operationalize AI across diverse use cases such as annotation, data quality, knowledge graph construction, and enterprise automation. Key Responsibilities: Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel. Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection). Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops. Contribute to multi-agent orchestration logic including task delegation, tool selection, message passing, and memory/state management. Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments. Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability. Monitor and refine agent performance using logging, observability tools, and feedback signals. Required Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field. 3+ years of experience in developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation. Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn). 
Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers). Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems. Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems. Preferred Skills: Familiarity with multi-agent frameworks and agent orchestration design patterns. Experience in building autonomous AI applications for data governance, annotation, or knowledge extraction. Background in human-in-the-loop systems, active learning, or interactive AI workflows. Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search. Why Join Us: Work at the forefront of AI orchestration and intelligent agents. Collaborate with a high-performing team driving innovation in enterprise AI platforms. Opportunity to shape the future of AI-based automation in real-world domains like healthcare, finance, and unstructured data.
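The orchestration pattern this role centres on (an agent choosing a tool, invoking it, and folding the result back into its state) reduces to a dispatch loop over a tool registry. A framework-free sketch (the registry and the fixed plan are assumptions; LangChain, AutoGen, and Semantic Kernel provide the production equivalents with schemas, routing, and memory):

```python
# Tool registry: name -> callable. Real frameworks add argument schemas and validation.
TOOLS = {
    "upper": lambda text: text.upper(),
    "word_count": lambda text: str(len(text.split())),
}

def run_agent(plan, payload):
    """Execute a fixed plan of tool calls, threading state between steps."""
    state = payload
    for tool_name in plan:
        state = TOOLS[tool_name](state)  # delegate this step to the chosen tool
    return state

print(run_agent(["upper"], "hello agent"))             # HELLO AGENT
print(run_agent(["word_count"], "count these words"))  # 3
```

In an LLM-driven agent, the `plan` is not fixed: the model proposes the next tool call from the tool descriptions and the accumulated state at each iteration.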

Posted 22 hours ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role We’re looking for a Senior Engineering Manager to lead our Data / AI Platform and MLOps teams at slice. In this role, you’ll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions such as legal, CX, and product in a secure manner. This is a hands-on leadership role perfect for someone who enjoys solving deep technical problems while growing people and teams. What You Will Do Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, Business reporting, Data products – enabling product experience through data) Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions Own the technical roadmap for our data platform including infra modernization, performance, scalability, and cost efficiency Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores Build and scale ML infrastructure with MLOps best practices including automated pipelines, model monitoring, and real-time inference systems Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization Implement enterprise AI governance including model security, access controls, and compliance frameworks for internal AI applications Collaborate with engineering leaders across backend, ML, and
security to align on long-term data architecture Establish and enforce best practices around data governance, access controls, and data quality Ensure regulatory compliance with GDPR, PCI-DSS, SOX through automated compliance monitoring and secure data pipelines Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails Coach engineers and team leads through regular 1:1s, feedback, and performance conversations What You Will Need 10+ years of engineering experience, including 2+ years managing data or infra teams with proven hands-on technical leadership Strong stakeholder management skills with experience translating business requirements into data solutions and identifying product enhancement opportunities Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray plus modern stack: dbt, Databricks, Snowflake, Terraform Hands on experience building AI/ML platforms including MLOps tools and experience with LLM hosting, model serving, and secure AI application development Proven experience improving performance, cost, and observability in large-scale data systems Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis) Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns Comfort working in fast-paced, product-led environments with ability to balance innovation and regulatory constraints Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations Life at slice Life so good, you’d think we’re kidding: Competitive salaries. Period. 
An extensive medical insurance that looks out for our employees & their dependents. We’ll love you and take care of you, our promise. Flexible working hours. Just don’t call us at 3AM, we like our sleep schedule. Tailored vacation & leave policies so that you enjoy every important moment in your life. A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here. Learning and upskilling opportunities. Seriously, not kidding. Good food, games, and a cool office to make you feel like home. An environment so good, you’ll forget the term “colleagues can’t be your friends”.

Posted 23 hours ago

Apply

0.0 - 3.0 years

0 - 0 Lacs

Mohali, Punjab

On-site

Job Description: Nogiz is hiring a passionate and skilled Python Developer (AI/ML) with 3+ years of experience to join our on-site team. If you're looking to work on impactful machine learning projects, collaborate with a motivated team, and grow in a technology-first environment, we’d love to hear from you. Responsibilities & Skills: Develop and deploy AI/ML models using Python and modern frameworks. Handle data preprocessing, feature engineering, and algorithm tuning. Work closely with cross-functional teams to integrate models into live systems. Optimize model performance and scalability. Write clean, maintainable code and clear documentation. Strong understanding of Python, OOP concepts, and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn, Pandas, NumPy). Experience in model evaluation and statistical analysis. Good communication skills and team collaboration. Exposure to Agile methodologies is a plus. Job Types: Full-time, Permanent Pay: ₹50,000.00 - ₹75,000.00 per month Schedule: Day shift, Morning shift Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred) Experience: Python development: 3 years (Preferred) AI/ML: 3 years (Preferred) Location: Mohali, Punjab (Preferred) Work Location: In person

Posted 1 day ago

Apply

1.5 years

0 Lacs

India

Remote

Urgent Opening: Web Scraping – Data Crawling, AI/ML Location: Permanent Work From Home Job Type: Full-Time | Permanent Experience: 1.5+ Years (Preferred) About the Role We are looking for a skilled and experienced Python Developer with strong expertise in data crawling, web scraping, AI/ML, and CAPTCHA solving techniques. The ideal candidate is passionate about automation, data pipelines, and problem-solving with a deep understanding of the web ecosystem. This is a permanent remote opportunity, ideal for professionals looking to work in a flexible and innovative environment while delivering high-quality solutions in data acquisition and intelligent automation. Key Responsibilities Design and implement scalable data crawling/scraping solutions using Python. Develop tools to bypass or solve CAPTCHAs (e.g., reCAPTCHA, hCaptcha) using AI/ML or third-party APIs. Write efficient and robust data extraction and parsing logic for large-scale web data. Build and maintain AI/ML models for tasks such as image recognition, pattern detection, and anomaly detection. Optimize crawling infrastructure for speed, reliability, and anti-blocking strategies (rotating proxies, headless browsers, etc.). Integrate with APIs and databases to store, manage, and process scraped data. Monitor and troubleshoot scraping systems and adapt to changes in target websites. Collaborate with the team to define requirements, plan deliverables, and implement best practices. Required Skills & Qualifications 1.5+ years of hands-on experience with Python in web scraping/data crawling. Strong experience with Scrapy and Selenium. Deep understanding of CAPTCHA types and proven experience in solving or bypassing them. Proficient in AI/ML frameworks: TensorFlow, PyTorch, scikit-learn, or OpenCV. Experience with OCR tools (Tesseract, EasyOCR) and image pre-processing techniques. Familiarity with anti-bot techniques, headless browsers, and proxy rotation. 
Solid understanding of HTML, CSS, JavaScript, HTTP protocols, and website structure. Strong problem-solving skills and attention to detail. Perks & Benefits Permanent Work from Home Flexible work hours Competitive salary based on experience Opportunities for skill development and upskilling Performance-based incentives How to Apply Interested candidates can email their updated resume and portfolio (if any) to jyoti@transformez.in with the subject line: Python Developer – Data Crawling & AI.
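The extraction-and-parsing half of the scraping work described above starts with pulling structured data out of raw HTML. A dependency-free sketch using the standard library's `html.parser` (real crawlers typically use Scrapy selectors or lxml; the sample markup here is an assumption):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags in document order."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the opening tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

html = '<ul><li><a href="/page1">One</a></li><li><a href="/page2">Two</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/page1', '/page2']
```

At crawl scale, the same extraction logic sits behind request scheduling, proxy rotation, and retry handling rather than a hard-coded string.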

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Data Science Intern (Paid) Company: WebBoost Solutions by UM Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with a Certificate of Internship About WebBoost Solutions by UM WebBoost Solutions by UM provides aspiring professionals with hands-on experience in data science, offering real-world projects to develop and refine their analytical and machine learning skills for a successful career. Responsibilities ✅ Collect, preprocess, and analyze large datasets. ✅ Develop predictive models and machine learning algorithms. ✅ Perform exploratory data analysis (EDA) to extract meaningful insights. ✅ Create data visualizations and dashboards for effective communication of findings. ✅ Collaborate with cross-functional teams to deliver data-driven solutions. Requirements 🎓 Enrolled in or graduate of a program in Data Science, Computer Science, Statistics, or a related field. 🐍 Proficiency in Python for data analysis and modeling. 🧠 Knowledge of machine learning libraries such as scikit-learn, TensorFlow, or PyTorch (preferred). 📊 Familiarity with data visualization tools (Tableau, Power BI, or Matplotlib). 🧐 Strong analytical and problem-solving skills. 🗣 Excellent communication and teamwork abilities. Stipend & Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based). ✔ Hands-on experience in data science projects. ✔ Certificate of Internship & Letter of Recommendation. ✔ Opportunity to build a strong portfolio of data science models and applications. ✔ Potential for full-time employment based on performance. How to Apply 📩 Submit your resume and a cover letter with the subject line "Data Science Intern Application." 📅 Deadline: 02nd August 2025 Equal Opportunity WebBoost Solutions by UM is committed to fostering an inclusive and diverse environment and encourages applications from all backgrounds.

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote

Job Title: Machine Learning Intern Company: Onetrueweb Software Solution Pvt Ltd. Location: Remote Duration: 3 months Opportunity: Full-time based on performance, with Certificate of Internship About Onetrueweb Software Solution Pvt Ltd. Onetrueweb Software Solution Pvt Ltd. provides students and graduates with hands-on learning opportunities and career growth in Machine Learning and Data Science. Role Overview As a Machine Learning Intern, you will work on real-world projects, enhancing your practical skills in data analysis and model development. Responsibilities ✅ Design, test, and optimize machine learning models. ✅ Analyze and preprocess datasets. ✅ Develop algorithms and predictive models. ✅ Use tools like TensorFlow, PyTorch, and Scikit-learn. ✅ Document findings and create reports. Requirements 🎓 Enrolled in or a graduate of a relevant program (Computer Science, AI, Data Science, or related field). 🧠 Knowledge of machine learning concepts and algorithms. 💻 Proficiency in Python or R (preferred). 🤝 Strong analytical and teamwork skills. Benefits 💰 Stipend: ₹7,500 - ₹15,000 (Performance-Based) (Paid). ✔ Hands-on machine learning experience. ✔ Internship Certificate & Letter of Recommendation. ✔ Real-world project contributions for your portfolio. How to Apply 📩 Submit your application with "Machine Learning Intern Application" as the subject. 📅 Deadline: 23rd July 2025 Note: Onetrueweb Software Solution Pvt Ltd. is an equal opportunity employer, welcoming diverse applicants.

Posted 1 day ago

Apply

0.0 - 3.0 years

30 - 35 Lacs

Hyderabad, Telangana

On-site

Job Title: Data Scientist / Machine Learning Specialist Location: Hyderabad (Hybrid Model) Experience: 3 to 5 Years Compensation: Up to ₹30 LPA Joining: Immediate or Short Notice Preferred About the Role: We are looking for a highly skilled and motivated Machine Learning Specialist / Data Scientist with a strong foundation in data science and a deep understanding of clinical supply chain or supply chain operations. This individual will play a critical role in developing predictive models, optimizing logistics, and enabling data-driven decision-making within our clinical trial supply chain ecosystem. Key Responsibilities: * Design, develop, and deploy machine learning models for demand forecasting, inventory optimization, and supply chain efficiency * Analyze clinical trial and logistics data to uncover insights and enable proactive planning * Collaborate with cross-functional teams including clinical operations, IT, and supply chain to integrate ML solutions into workflows * Build interactive dashboards and tools for real-time analytics and scenario modeling * Ensure models are scalable, maintainable, and compliant with regulatory frameworks (e.g., GxP, 21 CFR Part 11) * Stay up to date with the latest advancements in ML/AI and bring innovative solutions to complex clinical supply challenges Required Qualifications: * Master’s or Ph.D. 
in Computer Science, Data Science, Engineering, or a related field * 3–5 years of hands-on experience in machine learning, data science, or AI (preferably in healthcare or life sciences) * Proven experience with clinical or supply chain operations such as demand forecasting, IRT systems, and logistics planning * Proficiency in Python, R, SQL, and ML frameworks like scikit-learn, TensorFlow, or PyTorch * Solid knowledge of statistical modeling, time series forecasting, and optimization techniques * Strong analytical mindset and excellent communication skills * Ability to thrive in a fast-paced, cross-functional environment Preferred Qualifications: * Experience working with clinical trial systems and data (e.g., EDC, CTMS, IRT) * Understanding of regulatory requirements in clinical research * Familiarity with cloud platforms such as AWS, Azure, or GCP * Exposure to MLOps practices for model deployment and monitoring Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,500,000.00 per year Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Data science: 3 years (Required) Machine learning: 3 years (Preferred) Python: 3 years (Required) PyTorch: 3 years (Required) Work Location: In person
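Demand forecasting of the kind this role covers usually begins with a naive baseline before reaching for scikit-learn or deep models; a trailing moving average is the classic one. A minimal sketch (the window size and demand series are illustrative assumptions):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

# Monthly demand for a trial supply item (toy numbers).
demand = [120, 130, 125, 140, 135]
print(moving_average_forecast(demand))  # mean of the last three points: 125, 140, 135
```

Any ML model proposed for the role should at minimum beat this baseline on held-out data; otherwise the added complexity is not earning its keep.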

Posted 1 day ago

Apply

8.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Role: AI/ML Engineer Location: Remote Experience: 8 to 12 years Notice: Immediate only Interested candidates can share their resume to sunilkumar@xpetize.com Job description: Seeking a highly experienced and technically adept AI/ML Engineer to spearhead a strategic initiative focused on analyzing annual changes in IRS-published TRCs and identifying their downstream impact on codebases. The role demands deep expertise in machine learning, knowledge graph construction, and software engineering processes. The ideal candidate will have a proven track record of delivering production-grade AI solutions in complex enterprise environments. Key Responsibilities: Design and develop an AI/ML-based system to detect and analyze differences in IRS TRC publications year-over-year. Implement knowledge graphs to model relationships between TRC changes and impacted code modules. Collaborate with tax domain experts, software engineers, and DevOps teams to ensure seamless integration of the solution into existing workflows. Define and enforce engineering best practices, including CI/CD, version control, testing, and model governance. Drive the end-to-end lifecycle of the solution, from data ingestion and model training to deployment and monitoring. Ensure scalability, performance, and reliability of the deployed system in a production environment. Mentor junior engineers and contribute to a culture of technical excellence and innovation. Required Skills & Experience: 8+ years of experience in software engineering, with at least 5 years in AI/ML solution delivery. Strong understanding of tax-related data structures, especially IRS TRCs, is a plus. Expertise in building and deploying machine learning models using Python, TensorFlow/PyTorch, and MLOps frameworks. Hands-on experience with knowledge graph technologies (e.g., Neo4j, RDF, SPARQL, GraphQL). Deep familiarity with software architecture, microservices, and API design.
Experience with NLP techniques for document comparison and semantic analysis. Proven ability to lead cross-functional teams and deliver complex projects on time. Strong communication and stakeholder management skills.

Posted 1 day ago

Apply

2.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team as a Senior Data Scientist, reporting directly to the Lead Data Scientist in India. You will play a crucial role in building, optimizing, and maintaining AI-ready data infrastructure for advanced Generative AI applications. Your focus will be on hands-on implementation of cutting-edge data extraction, curation, and metadata enhancement techniques for both text and numerical data. You will be a key contributor to the development of innovative solutions, ensuring rapid iteration and deployment, and supporting the Lead in achieving the team's strategic goals. What Will You Be Doing AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications. GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working closely with the Lead's guidance. 
LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Models (LLMs) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Mentorship: Act as a technical mentor and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. Skills And Experience Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field. Programming and Frameworks: 2+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 1+ years Practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases. 

5.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote

Argus is where smart people belong and where they can grow. We answer the challenge of illuminating markets and shaping new futures. What We’re Looking For Join our Generative AI team to lead a new group in India, focused on creating and maintaining AI-ready data. As the point of contact in Mumbai, you will guide the local team and ensure seamless collaboration with our global counterparts. Your contributions will directly impact the development of innovative solutions used by industry leaders worldwide, supporting text and numerical data extraction, curation, and metadata enhancements to accelerate development and ensure rapid response times. You will play a pivotal role in transforming how our data are seamlessly integrated with AI systems, paving the way for the next generation of customer interactions. What Will You Be Doing Lead and Develop the Team: Oversee a team of data scientists in Mumbai, mentoring and guiding junior team members and fostering their professional growth and development. Strategic Planning: Develop and implement strategic plans for data science projects, ensuring alignment with the company's goals and objectives. AI-Ready Data Development: Design, develop, and maintain high-quality AI-ready datasets, ensuring data integrity, usability, and scalability to support advanced Generative AI models. Advanced Data Processing: Drive hands-on efforts in complex data extraction, cleansing, and curation for diverse text and numerical datasets. Implement sophisticated metadata enrichment strategies to enhance data utility and accessibility for AI systems. Algorithm Implementation & Optimization: Implement and optimize state-of-the-art algorithms and pipelines for efficient data processing, feature engineering, and data transformation tailored for LLM and GenAI applications. 
GenAI Application Development: Apply and integrate frameworks like LangChain and Hugging Face Transformers to build modular, scalable, and robust Generative AI data pipelines and applications. Prompt Engineering Application: Apply advanced prompt engineering techniques to optimize LLM performance for specific data extraction, summarization, and generation tasks, working under the Lead's guidance. LLM Evaluation Support: Contribute to the systematic evaluation of Large Language Model (LLM) outputs, analysing quality, relevance, and accuracy, and supporting the implementation of LLM-as-a-judge frameworks. Retrieval-Augmented Generation (RAG) Contribution: Actively contribute to the implementation and optimization of RAG systems, including working with embedding models, vector databases, and, where applicable, knowledge graphs, to enhance data retrieval for GenAI. Technical Leadership: Act as a technical leader and subject matter expert for junior data scientists, providing guidance on best practices in coding and PR reviews, data handling, and GenAI methodologies. Cross-Functional Collaboration: Collaborate effectively with global data science teams, engineering, and product stakeholders to integrate data solutions and ensure alignment with broader company objectives. Operational Excellence: Troubleshoot and resolve data-related issues promptly to minimize potential disruptions, ensuring high operational efficiency and responsiveness. Documentation & Code Quality: Produce clean, well-documented, production-grade code, adhering to best practices for version control and software engineering. Skills And Experience Leadership Experience: Proven track record in leading and mentoring data science teams, with a focus on strategic planning and operational excellence. Academic Background: Advanced degree in AI, statistics, mathematics, computer science, or a related field. 
Programming and Frameworks: 5+ years of hands-on experience with Python, TensorFlow or PyTorch, and NLP libraries such as spaCy and Hugging Face. GenAI Tools: 2+ years of practical experience with LangChain, Hugging Face Transformers, and embedding models for building GenAI applications. Prompt Engineering: Deep expertise in prompt engineering, including prompt tuning, chaining, and optimization techniques. LLM Evaluation: Experience evaluating LLM outputs, including using LLM-as-a-judge methodologies to assess quality and alignment. RAG and Knowledge Graphs: Practical understanding and experience using vector databases. In addition, familiarity with graph-based RAG architectures and the use of knowledge graphs to enhance retrieval and reasoning would be a strong plus. Cloud: 2+ years of experience with Gemini/OpenAI models and cloud platforms such as AWS, Google Cloud, or Azure. Proficient with Docker for containerization. Data Engineering: Strong understanding of data extraction, curation, metadata enrichment, and AI-ready dataset creation. Collaboration and Communication: Excellent communication skills and a collaborative mindset, with experience working across global teams. What’s In It For You Our rapidly growing, award-winning business offers a dynamic environment for talented, entrepreneurial professionals to achieve results and grow their careers. Argus recognizes and rewards successful performance and as an Investor in People, we promote professional development and retain a high-performing team committed to building our success. Competitive salary Hybrid Working Policy (3 days in Mumbai office/ 2 days WFH once fully inducted) Group healthcare scheme 18 days annual leave 8 days of casual leave Extensive internal and external training Hours This is a full-time position operating under a hybrid model, with three days in the office and up to two days working remotely. 
The team supports Argus’ key business processes every day; as such, you will be required to work on a shift-based rota with other members of the team supporting the business until 8pm. Typically support hours run from 11am to 8pm, with each member of the team participating two to three times a week. Argus is the leading independent provider of market intelligence to the global energy and commodity markets. We offer essential price assessments, news, analytics, consulting services, data science tools and industry conferences to illuminate complex and opaque commodity markets. Headquartered in London with 1,500 staff, Argus is an independent media organisation with 30 offices in the world’s principal commodity trading hubs. Companies, trading firms and governments in 160 countries around the world trust Argus data to make decisions, analyse situations, manage risk, facilitate trading and for long-term planning. Argus prices are used as trusted benchmarks around the world for pricing transportation, commodities and energy. Founded in 1970, Argus remains a privately held UK-registered company owned by employee shareholders and global growth equity firm General Atlantic.
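To make the RAG contribution described in this listing concrete, here is a minimal, library-free sketch of the retrieve-then-prompt loop. It uses a toy bag-of-words "embedding" purely for illustration; a real pipeline would call an actual embedding model and a vector database, as the listing assumes, and all function names here are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production pipeline would call
    # a real embedding model (e.g. via Hugging Face) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, as a vector DB would.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the LLM prompt in the retrieved context (the "RAG" step).
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The grounded prompt returned by `build_prompt` would then be sent to an LLM; swapping `embed` for a real model and `retrieve` for a vector-database query is the main production change.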

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Caprae Capital Partners is an innovative private equity firm led by the principal Kevin Hong, a serial tech entrepreneur who grew two startups to $31M ARR and $7M in revenue. The fund originated with two additional tech entrepreneur friends of Kevin who have had ~8 figure and ~9 figure exits to Twitter and Square, respectively. Additional partners include an ex-NASA software engineer and an ex-Chief of Staff from Google. Caprae Capital, in conjunction with its portfolio company, launched AI-RaaS (AI Readiness as a Service) and is looking for teammates to join for the long haul. If you have a passion for disrupting the finance industry and happen to be a mission-driven person, this is a great fit for you. Additionally, given the recent expansion of this particular firm, you will have the opportunity to work from the ground level and take on a leadership role for the internship program, which would result in a paid role. Lastly, this is also a great role for those who are looking into strategy and consulting roles in the future, as it will give you the exposure and experience necessary to develop strong business acumen. Role Overview We are looking for a Lead Full Stack Developer to architect and lead the development of new features for SaaSquatchLeads.com, an AI-driven lead generation and sales intelligence platform. You will own technical direction, guide other engineers, and ensure our stack is scalable, maintainable, and optimized for AI-powered workloads. Key Responsibilities Lead architectural design and technical strategy for SaaSquatchLeads.com. Develop, deploy, and maintain end-to-end features spanning frontend, backend, and AI integrations. Implement and optimize AI-driven services for lead scoring, personalization, and predictive analytics. Build and maintain data pipelines for ingesting, processing, and analyzing large datasets. Mentor and guide a distributed engineering team, setting best coding practices. 
Collaborate with product, design, and data science teams to align technical execution with business goals. Ensure security, performance, and scalability of the platform. Required Skills & Technologies Frontend: React, JavaScript (ES6+), TypeScript, Redux/Zustand, HTML, CSS, TailwindCSS. Backend: Python (Flask, FastAPI, Django), Node.js (bonus). AI & Data Science: Python, PyTorch, Hugging Face, OpenAI APIs, LangChain, Pandas, NumPy. Databases: PostgreSQL, MySQL, MongoDB, Redis. DevOps & Infrastructure: Docker, Kubernetes, AWS (Lambda, S3, RDS, EC2), CI/CD pipelines. Data Processing: ETL tools, message queues (Kafka, RabbitMQ). Search & Indexing: Elasticsearch, Meilisearch (for fast lead lookups).
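As a toy illustration of the lead-scoring service this role mentions, the sketch below scores a lead from boolean engagement signals. The feature names and weights are invented for illustration only; a production system would learn them with PyTorch or a scikit-learn model, as the stack above suggests.

```python
def score_lead(lead: dict) -> float:
    """Score a lead between 0 and 1 from boolean engagement signals."""
    # Hypothetical hand-set weights; a real system would learn these
    # from historical conversion data rather than hard-coding them.
    weights = {
        "opened_email": 0.2,
        "visited_pricing_page": 0.4,
        "requested_demo": 0.8,
    }
    raw = sum(w for feature, w in weights.items() if lead.get(feature))
    return min(raw, 1.0)  # clamp so stacked signals stay in [0, 1]
```

For example, a lead that only requested a demo scores 0.8, while one showing all three signals is clamped to 1.0; the clamp keeps downstream ranking and thresholding simple.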

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Senior Applied Scientist Bangalore, Karnataka, India Date posted Aug 01, 2025 Job number 1854651 Work site Up to 50% work from home Travel 0-25% Role type Individual Contributor Profession Research, Applied, & Data Sciences Discipline Applied Sciences Employment type Full-Time Overview Do you want to be part of a team which delivers innovative products and machine learning solutions across Microsoft to hundreds of millions of users every month? The Microsoft Turing team is an innovative engineering and applied research team working on state-of-the-art deep learning models, large language models, and pioneering conversational search experiences. The team spearheads the platform and innovation for conversational search and the core copilot experiences across Microsoft’s ecosystem, including BizChat, Office, and Windows. As a Senior Applied Scientist in the Turing team, you will do hands-on, deadline-driven data science work, including training models, creating evaluation sets, building infrastructure for training and evaluation, and more. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond. 
Qualifications Required Qualifications: Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 5+ years related experience (e.g., statistics, predictive analytics, research) OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 3+ years related experience (e.g., statistics, predictive analytics, research) OR Doctorate in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or related field AND 1+ year(s) related experience (e.g., statistics, predictive analytics, research) OR equivalent experience. 3+ years of industrial experience coding in C++, C#, C, Java or Python. Prior experience with data analysis and understanding, examining data from large-scale systems to identify patterns or create evaluation datasets. Familiarity with common machine learning and deep learning frameworks and concepts, including the use of LLMs and prompting. Experience in PyTorch or TensorFlow is a bonus. Ability to communicate technical details clearly across organizational boundaries. Other Requirements: Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Preferred Qualifications: Solid ability and effectiveness working end-to-end in a challenging technical problem domain (plan, design, execution, continuous release, and service operation). Some prior experience in applying deep learning techniques and driving end-to-end AI product development (Search, Recommendation, NLP, Document Understanding, etc.). Prior experience with Azure or any other cloud pipelines or execution graphs. 
Self-driven, results-oriented, high integrity, ability to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes. Customer/End-result/Metrics driven in design and development. Keen ability and motivation to learn, enter new domains, and manage through ambiguity. A solid publication track record at top conferences like ACL, EMNLP, SIGKDD, AAAI, WSDM, COLING, WWW, NIPS, ICASSP, etc. #M365Core Responsibilities As an Applied Scientist on our team, you'll be responsible for and will engage in: Driving projects from design through implementation, experimentation, and finally shipping to our users. This requires deep dives into data to identify gaps, coming up with heuristics and possible solutions, using LLMs to create the right model or evaluation prompts, and setting up the engineering pipeline or infrastructure to run them. Coming up with evaluation techniques, datasets, criteria, and metrics for model evaluations; these are often state-of-the-art (SOTA) models, metrics, or datasets. Hands-on ownership of fine-tuning and use of language models, including dataset creation, filtering, review, and continuous iteration. This requires working in a diverse, geographically distributed team environment where collaboration and innovation are valued. You will have an opportunity for direct impact on design, functionality, security, performance, scalability, manageability, and supportability of Microsoft products that use our deep learning technology. Benefits/perks may vary depending on the nature of your employment with Microsoft and the country where you work: industry-leading healthcare, educational resources, discounts on products and services, savings and investments, maternity and paternity leave, generous time away, giving programs, and opportunities to network and connect. Microsoft is an equal opportunity employer. 
All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
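The evaluation responsibilities in this listing (building eval sets, defining criteria and metrics) can be illustrated with the simplest such metric, exact-match accuracy over an evaluation set. This is a generic sketch, not Microsoft's internal tooling; function and variable names are invented, and real LLM evals would add richer normalization and LLM-judged scoring on top.

```python
def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of model outputs matching the gold answer after light normalization."""
    if len(predictions) != len(references):
        raise ValueError("prediction/reference counts must match")
    hits = sum(
        p.strip().lower() == r.strip().lower()  # case/whitespace-insensitive match
        for p, r in zip(predictions, references)
    )
    return hits / len(references)
```

In practice a metric like this runs over a curated eval set after each fine-tuning or prompt change, so regressions surface as a single comparable number.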

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies