3.0 years
0 Lacs
Mohali
Remote
AI/ML Engineer
Required experience: 3–5+ years | Location: Mohali, Punjab | Domain: AI/ML | Mode: WFO / WFH | Immediate joiners
ChicMic Studios is looking for a passionate AI/ML Engineer with hands-on experience in Python, Django/Flask, AWS, and modern machine learning frameworks such as PyTorch, TensorFlow, and transformers. If building scalable AI applications and deploying real-world ML solutions excites you, we'd love to talk!
Key Responsibilities:
- Build scalable web apps using Django and Flask
- Design APIs using Django Rest Framework (DRF)
- Deploy apps on AWS (EC2, S3, Lambda, RDS, etc.)
- Integrate and serve AI/ML models (NLP, vision, transformers)
- Use TensorFlow, PyTorch, and scikit-learn for model development
- Work with transformer models such as BERT/GPT for NLP tasks
- Optimize models (quantization, pruning, hyperparameter tuning)
- Deploy using TorchServe, TensorFlow Serving, or AWS SageMaker
- Collaborate across teams and ensure app and model scalability
Required Skills:
- 3+ years of professional Python development experience
- Strong knowledge of Django, Flask, and DRF
- Experience with AWS services for deployment
- Deep understanding of Agentic AI and ML concepts, with hands-on PyTorch/TensorFlow experience
- Experience with SQL/NoSQL databases (PostgreSQL, MongoDB)
- Exposure to MLOps practices is a big plus
- Bonus: familiarity with front-end basics (HTML, CSS, JS)
- Strong team player with problem-solving and communication skills
Why Join Us:
- 5-day work week
- Great learning exposure
- EPF benefit
- Earned leaves
- High employee retention
- Work across 16+ technologies
Let's Connect!
Email: disha.mehta755@chicmicstudios.in | Contact: +91 98759 52834 | Visit us: https://www.chicmicstudios.in/
Come be part of a team that's redefining innovation through AI. Whether you're an expert or on your way there, we support growth, ownership, and real impact.
Job Type: Full-time
Pay: ₹600,000.00 - ₹2,000,000.00 per month
Benefits: Leave encashment, Provident Fund, Work from home
Application Question(s): Must have exposure to working with Agentic AI
Education: Bachelor's (Required)
Experience: Python: 3 years (Required); AI/ML: 2 years (Required)
Language: English (Required)
Work Location: In person
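The serving side of a role like this usually comes down to wrapping a trained model in a small HTTP endpoint. Below is a minimal sketch of that idea using Flask and a Hugging Face pipeline (Flask is used here for brevity; a Django REST Framework view would follow the same request-in, prediction-out pattern). The route name and default model are illustrative assumptions, not part of the posting.

```python
# Minimal sketch: serve a Hugging Face model behind a Flask endpoint.
# The /predict route and the default sentiment model are illustrative choices.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)
classifier = pipeline("sentiment-analysis")  # downloads a small default model on first run

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json(force=True).get("text", "")
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=5000)  # in production this would sit behind gunicorn/uWSGI on AWS
```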
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – Core Algorithm and Model Expert
1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, audio processing, predictive analytics, and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource-constrained environments.
2. Key Responsibilities:
2.1. Design and train models using Naval-specific data and deliver them in the form of end products.
2.2. Fine-tune open-source LLMs (e.g., LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks.
2.3. Preprocess, label, and augment datasets.
2.4. Implement quantization, pruning, and compression for deployment-ready AI applications.
2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters).
2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like back-translation, domain adaptation, and knowledge distillation is essential.
2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules.
2.8. Collaborate with the Backend Engineer for integration via standard formats (ONNX, TorchScript).
2.9. Generate reusable inference modules that can be plugged into microservices or edge devices.
2.10. Maintain reproducible pipelines (e.g., with MLflow, DVC, Weights & Biases).
3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, AI/ML, Data Science, Statistics, or a related field with an exceptional academic record.
3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines.
Desired Specialized Certifications:
3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA.
3.4. Deep Learning Specialization.
3.5. Computer Vision or NLP specialization certificates.
3.6. TensorFlow/PyTorch Professional Certification.
4. Core Skills & Tools:
4.1. Languages: Python (must), C++/Rust.
4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers.
4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning, LLM fine-tuning, SFT, RLHF, LoRA, QLoRA, and PEFT.
4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript.
4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV.
4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking.
5. Core AI/ML Competencies:
5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models.
5.2. Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection.
5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development.
5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, quantization, RAG systems, multimodal AI, stable diffusion models.
5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning.
6. Programming & Frameworks:
6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization.
6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy.
6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly.
6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning.
7. Model Development & Optimization:
7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases.
7.2. Model compression techniques (quantization, pruning, distillation).
7.3. ONNX model conversion and optimization.
8. Generative AI & NLP Applications:
8.1. Intelligence report analysis and summarization.
8.2. Multilingual radio communication translation.
8.3. Voice command systems for naval equipment.
8.4. Automated documentation and report generation.
8.5. Synthetic data generation for training simulations.
8.6. Scenario generation for naval training exercises.
8.7. Maritime intelligence synthesis and briefing generation.
9. Experience Requirements:
9.1. Hands-on experience with at least 2 major AI domains.
9.2. Experience deploying models in production environments.
9.3. Contribution to open-source AI projects.
9.4. Led development of multiple end-to-end AI products.
9.5. Experience scaling AI solutions for large user bases.
9.6. Track record of optimizing models for real-time applications.
9.7. Experience mentoring technical teams.
10. Product Development Skills:
10.1. End-to-end ML pipeline development (data ingestion to model serving).
10.2. User feedback integration for model improvement.
10.3. Cross-platform model deployment (cloud, edge, mobile).
10.4. API design for ML model integration.
11. Cross-Compatibility Requirements:
11.1. Define model interfaces (input/output schema) for frontend/backend use.
11.2. Build CLI- and REST-compatible inference tools.
11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call.
11.4. Joint debugging and model-in-the-loop testing with UI and backend teams.
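Several of the responsibilities above (2.2, 2.5, 4.3) revolve around parameter-efficient fine-tuning of open-source LLMs. As a rough illustration of what that looks like in code, here is a minimal LoRA setup with Hugging Face Transformers and PEFT; the base checkpoint and target modules are placeholder assumptions, and the actual training loop (Trainer/SFTTrainer, datasets, hyperparameters) is omitted.

```python
# Minimal LoRA fine-tuning skeleton (sketch only; checkpoint and modules are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder open-source checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # depends on the base architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable

# From here a real pipeline would tokenize domain-specific instruction data and
# train with transformers.Trainer (or TRL's SFTTrainer), then merge or export
# the adapters for quantized deployment.
```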
Posted 1 week ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Overview
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Skills & Qualifications:
- Bachelor's or Master's in Computer Science, AI, Machine Learning, or a related field.
- 4+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT.
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
  - LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
  - NLP: SpaCy, NLTK
  - Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
  - MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
Preferred Qualifications:
- Experience building multimodal systems (text + image, etc.).
- Practical experience with agent frameworks for autonomous or goal-directed AI.
- Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.
(ref:hirist.tech)
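The RAG and vector-database requirements above boil down to a retrieve-then-generate loop. A minimal sketch of the retrieval half, assuming sentence-transformers for embeddings and FAISS as the vector index (the model name and toy documents are illustrative, not from the posting):

```python
# Sketch of the retrieval step in a RAG pipeline (toy documents, illustrative model).
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "LoRA adds low-rank adapters so only a small fraction of weights is trained.",
    "FAISS indexes dense vectors for fast nearest-neighbour search.",
    "RAG retrieves relevant chunks and passes them to the LLM as grounding context.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
emb = encoder.encode(docs, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on normalized vectors
index.add(emb)

query = encoder.encode(["How does retrieval-augmented generation work?"],
                       normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
context = "\n".join(docs[i] for i in ids[0])
print(context)  # this context would be prepended to the LLM prompt
```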
Posted 1 week ago
10.0 years
2 - 11 Lacs
Bengaluru
On-site
Join our Team
About this opportunity:
We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role; we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You'll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale.
What you will do:
- Design and build robust ML pipelines and services for training, validation, and model deployment.
- Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate any deviation from the target architecture.
- Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability.
- Build reusable infrastructure components using best practices in DevOps and MLOps.
- Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data.
- Network Security: Design an optimal network plan for the given cloud infrastructure under the Ericsson (E//) network security guidelines.
- Monitor model performance in production and implement drift detection and retraining pipelines.
- Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration).
- Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components.
The skills you bring:
- Strong programming skills in Python.
- Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost).
- Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML.
- Experience deploying models using Docker and Kubernetes.
- Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI).
- Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL).
- Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm).
- Experience with monitoring/logging (Prometheus, Grafana, ELK).
Good-to-Have Skills:
- Experience with feature stores (Feast, Tecton) and experiment tracking platforms.
- Knowledge of edge/embedded ML, model quantization, and optimization.
- Familiarity with model governance, security, and compliance in ML systems.
- Exposure to on-device ML or streaming ML use cases.
- Experience leading cross-functional initiatives or mentoring junior engineers.
Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.
Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: India (IN) || Bangalore
Req ID: 770160
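One of the responsibilities above is monitoring production models and implementing drift detection before triggering retraining. As a rough, framework-free illustration of the idea (not Ericsson's actual tooling), a per-feature two-sample Kolmogorov–Smirnov test can flag features whose live distribution has shifted away from the training data:

```python
# Toy data-drift check: compare live feature distributions against training data.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Return (feature_index, KS statistic) for features that appear drifted."""
    drifted = []
    for j in range(train.shape[1]):
        stat, p_value = ks_2samp(train[:, j], live[:, j])
        if p_value < alpha:
            drifted.append((j, round(float(stat), 3)))
    return drifted

rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 3))
live = np.column_stack([
    rng.normal(size=2000),
    rng.normal(loc=0.5, size=2000),  # feature 1 has shifted
    rng.normal(size=2000),
])
print(detect_drift(train, live))  # e.g. [(1, ...)] -> trigger the retraining pipeline
```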
Posted 1 week ago
4.0 years
4 - 7 Lacs
Bengaluru
On-site
Change the world. Love your job. Your career starts here!
This is an exciting opportunity to design and develop innovative software solutions that drive TI's revolutionary product lines. We change lives by working on the technologies that people use every day. Are you ready for the challenge? As a Software Engineer, you'll become a key contributor, where your skills and input make a big difference. In this role, you'll design embedded software and development tools that will be used to test products. You'll write code that tells chips how to operate in revolutionary new ways. And, you'll work closely with business partners and customers, as well as TI's marketing, systems and applications engineering teams, to collaborate and solve business problems.
QUALIFICATIONS
Minimum Requirements:
- 4-8 years of relevant experience
- Degree in Electrical Engineering, Computer Engineering, Computer Science, Electrical and Computer Engineering, or a related field
- Strong embedded firmware skills and experience
- Strong Assembly, C, and C++ programming skills
Preferred Qualifications:
- Knowledge of software engineering processes and the full software development lifecycle
- Demonstrated strong analytical and problem-solving skills
- Strong verbal and written communication skills
- Ability to work in teams and collaborate effectively with people in different functions
- Strong time management skills that enable on-time project delivery
- Demonstrated ability to build strong, influential relationships
- Ability to work effectively in a fast-paced and rapidly changing environment
- Ability to take the initiative and drive for results
- Great programmer: programming skills in C/C++ and Python, modular and object-oriented programming skills, familiarity with build systems (make, cmake), familiarity with Linux
- In-depth knowledge of embedded systems: VLIW and SIMD processor architecture, DMA, cache, memory architecture, inter-process communication
- Working experience in machine learning technologies such as CNNs, transformers, and quantization algorithms and approaches for camera-based applications on embedded systems
- Working experience with DSPs (preferably TI DSPs) and hardware development boards/EVMs for image/vision-based processing algorithms
- Good knowledge of machine learning frameworks (PyTorch), inference solutions, and exchange formats (ONNX, ONNX Runtime, protobufs)
- Basic knowledge of RTOS and Linux with exposure to debugging of embedded systems; familiarity with heterogeneous core architecture is an added advantage
- Well versed in the software development life cycle and efficient use of associated tools: Git, JIRA, Bitbucket, Jenkins, containers (Docker), CI/CD
- Strong communication, documentation, and writing skills
ABOUT US
Why TI?
- Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics.
- We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI
- Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us.
About Texas Instruments
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.
Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment. If you are interested in this position, please apply to this requisition.
TI does not make recruiting or hiring decisions based on citizenship, immigration status or national origin. However, if TI determines that information access or export control restrictions based upon applicable laws and regulations would prohibit you from working in this position without first obtaining an export license, TI expressly reserves the right not to seek such a license for you and either offer you a different position that does not require an export license or decline to move forward with your employment.
JOB INFO
Job Identification: 25001868
Job Category: Engineering - Product Dev
Posting Date: 07/20/2025, 04:49 AM
Degree Level: Bachelor's Degree
Locations: BAN4, 2nd, 3rd and 4th Floors, Bangalore, 560093, IN
ECL/GTC Required: Yes
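The preferred qualifications above mention exchange formats (ONNX, ONNX Runtime) for deploying vision models on embedded targets. As a small illustrative sketch (the tiny model below is a stand-in, not a TI workload), exporting a PyTorch module to ONNX and running it with ONNX Runtime looks roughly like this:

```python
# Sketch: export a toy PyTorch model to ONNX and run it with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )
    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy = torch.randn(1, 3, 64, 64)
torch.onnx.export(model, dummy, "tiny.onnx",
                  input_names=["input"], output_names=["logits"], opset_version=13)

sess = ort.InferenceSession("tiny.onnx", providers=["CPUExecutionProvider"])
logits = sess.run(None, {"input": dummy.numpy().astype(np.float32)})[0]
print(logits.shape)  # (1, 2); on-device runtimes consume the same .onnx artifact
```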
Posted 1 week ago
14.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Gen AI Technical Lead/Architect || Agentic AI || Immediate Joiners Only from Delhi NCR
Experience: 8–14 Years
Location: Noida/Faridabad
Work Mode: Full-time, Hybrid
About the Role
We are seeking an exceptional AI Architect with deep expertise in Agentic AI and Generative AI to lead the design, development, and deployment of next-generation autonomous AI systems. This role is pivotal in building LLM-powered agents, integrating memory, tool use, planning, and reasoning capabilities to create intelligent, goal-driven systems. As a technical leader, you will drive end-to-end AI initiatives, from cutting-edge research and architecture design to scalable deployment in cloud environments.
Key Responsibilities
Agentic AI Design & Implementation
- Architect and build LLM-based agents capable of autonomous task execution, memory management, tool usage, and multi-step reasoning.
- Develop modular, goal-oriented agentic systems using tools like LangChain, Auto-GPT, CrewAI, SuperAGI, and OpenAI Function Calling.
- Design multi-agent ecosystems with collaboration, negotiation, and task delegation capabilities.
- Integrate long-term and short-term memory (e.g., vector databases, episodic memory) into agents.
Generative AI & Model Optimization
- Develop, fine-tune, and optimize foundation models (LLMs, diffusion models) using TensorFlow, PyTorch, or JAX.
- Apply model compression, quantization, pruning, and distillation techniques for deployment efficiency.
- Leverage cloud AI services such as AWS SageMaker, Azure ML, and Google Vertex AI for scalable model training and serving.
AI Research & Innovation
- Lead research in Agentic AI, LLM orchestration, and advanced planning strategies (e.g., ReAct, Tree of Thought, Reflexion, Autoformalism).
- Stay current with SOTA research; contribute to whitepapers, blogs, or top-tier conferences (e.g., NeurIPS, ICML, ICLR).
- Evaluate new architectures such as BDI models, cognitive architectures (e.g., ACT-R, Soar), or neuro-symbolic approaches.
Programming & Systems Engineering
- Strong coding proficiency in Python, CUDA, and TensorRT for model acceleration.
- Experience with distributed computing frameworks (e.g., Ray, Dask, Apache Spark) for training large-scale models.
- Design and implement robust MLOps pipelines with Docker, Kubernetes, MLflow, and CI/CD systems.
Required Skills & Qualifications
- 8–14 years of experience in AI/ML, with at least 2+ years hands-on with Agentic AI systems.
- Proven experience building, scaling, and deploying agent-based architectures.
- Strong theoretical foundation in machine learning, deep learning, NLP, and reinforcement learning.
- Familiarity with cognitive architectures, decision-making, and planning systems.
- Hands-on with LLM integration and fine-tuning (e.g., OpenAI GPT-4, Claude, LLaMA, Mistral, Gemini).
- Deep understanding of prompt engineering, function/tool calling, retrieval-augmented generation (RAG), and memory management in agentic systems.
Preferred (Nice to Have)
- Publications or open-source contributions in Agentic AI or Generative AI.
- Experience with simulation environments for training/testing agents (e.g., OpenAI Gym, Unity ML-Agents).
- Knowledge of safety, ethics, and alignment in autonomous AI systems.
About Damco: We are a global technology company with more than two decades of core IT experience. Our differentiators are technological prowess with unwavering back-end support on a wide range of technologies and industry-leading platforms. At Damco, we take pride in building innovative, efficient, and robust IT solutions for our clients. We match the client's business goals with our technology expertise and immaculate execution capabilities to solve issues that matter to the end-user. Damco has developed hundreds of products and applications, redefined countless processes, built numerous technology teams and systems, and delivered significant financial results to customers from diverse verticals. We believe in empowering our people to perform and grow by offering opportunities, learning, and inspiration, to 'act and accomplish'. If you are a self-starter looking for an open and collaborative work culture to excel in your career, we are the place for you. Here is what you can expect from our work culture.
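To ground what "autonomous task execution, memory management, tool usage, and multi-step reasoning" means in practice, here is a deliberately tiny, framework-free sketch of an agent loop with a tool registry. The planning step is a stub that a real system would replace with an LLM call (e.g., OpenAI function calling or a LangChain/CrewAI executor); the tool names and logic are illustrative only.

```python
# Toy, framework-free sketch of an agent loop with a tool registry.
# `llm_decide` is a stub standing in for a real LLM planning call.
from typing import Callable, Dict, List

def search_docs(query: str) -> str:
    """Placeholder tool: pretend to query an internal knowledge base."""
    return f"[doc snippet about '{query}']"

TOOLS: Dict[str, Callable[[str], str]] = {"search_docs": search_docs}

def llm_decide(goal: str, history: List[dict]) -> dict:
    """Stub planner. A real agent would send the goal plus history to an LLM
    and parse a structured action (tool name + arguments) from its reply."""
    if not history:
        return {"action": "search_docs", "input": goal}
    return {"action": "finish", "input": f"Answer based on {history[-1]['observation']}"}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history: List[dict] = []
    for _ in range(max_steps):
        step = llm_decide(goal, history)
        if step["action"] == "finish":
            return step["input"]
        observation = TOOLS[step["action"]](step["input"])
        history.append({"action": step["action"], "observation": observation})
    return "Stopped after reaching the step budget."

print(run_agent("compare vector databases for RAG"))
```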
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Work On
1. Deep Learning & Computer Vision
- Train models for image classification: binary/multi-class using CNNs, EfficientNet, or custom backbones.
- Implement object detection using YOLOv5, Faster R-CNN, SSD; tune NMS and anchor boxes for medical contexts.
- Work with semantic segmentation models (UNet, DeepLabV3+) for region-level diagnostics (e.g., cell, lesion, or nucleus boundaries).
- Apply instance segmentation (e.g., Mask R-CNN) for microscopy image cell separation.
- Use super-resolution and denoising networks (SRCNN, Real-ESRGAN) to enhance low-quality inputs.
- Develop temporal comparison pipelines for changes across image sequences (e.g., disease progression).
- Leverage data augmentation libraries (Albumentations, imgaug) for low-data domains.
2. Vision-Language Models (VLMs)
- Fine-tune CLIP, BLIP, LLaVA, GPT-4V to generate explanations, labels, or descriptions from images.
- Build image captioning models (Show-Attend-Tell, Transformer-based) using paired datasets.
- Train or use VQA pipelines for image-question-answer triples.
- Align text and image embeddings with contrastive loss (InfoNCE), cosine similarity, or projection heads.
- Design prompt-based pipelines for zero-shot visual understanding.
- Evaluate using metrics like BLEU, CIDEr, SPICE, Recall@K, etc.
3. Model Training, Evaluation & Interpretation
- Use PyTorch (core), with support from HuggingFace, torchvision, timm, Lightning.
- Track model performance with TensorBoard, Weights & Biases, MLflow.
- Implement cross-validation, early stopping, LR schedulers, warm restarts.
- Visualize model internals using GradCAM, SHAP, attention rollout, etc.
- Evaluate metrics:
  - Classification: Accuracy, ROC-AUC, F1
  - Segmentation: IoU, Dice coefficient
  - Detection: mAP
  - Captioning/VQA: BLEU, METEOR
4. Optimization & Deployment
- Convert models to ONNX, TorchScript, or TFLite for portable inference.
- Apply quantization-aware training, post-training quantization, and pruning.
- Optimize for low-power inference using TensorRT or OpenVINO.
- Build multi-threaded or asynchronous pipelines for batched inference.
5. Edge & Real-Time Systems
- Deploy models on Jetson Nano/Xavier, Coral TPU.
- Handle real-time camera inputs using OpenCV, GStreamer and apply streaming inference.
- Handle multiple camera/image feeds for simultaneous diagnostics.
6. Regulatory-Ready AI Development
- Maintain model lineage, performance logs, and validation trails for 21 CFR Part 11 and ISO 13485 readiness.
- Contribute to validation reports, IQ/OQ/PQ, and reproducibility documentation.
- Write SOPs and datasheets to support clinical validation of AI components.
7. DevOps, CI/CD & MLOps
- Use Azure Boards + DevOps Pipelines (YAML) to track sprints, assign tasks, maintain epics & user stories, and trigger auto-validation pipelines (lint, unit tests, inference validation) on code push.
- Integrate MLflow or custom logs for model lifecycle tracking.
- Use GitHub Actions for cross-platform model validation across environments.
8. Bonus Skills (Preferred but Not Mandatory)
- Experience in microscopy or pathology data (TIFF, NDPI, DICOM formats).
- Knowledge of OCR + CV hybrid pipelines for slide/dataset annotation.
- Experience with streamlit, Gradio, or Flask for AI UX prototyping.
- Understanding of active learning or semi-supervised learning in low-label settings.
- Exposure to research publishing, IP filing, or open-source contributions.
9. Required Background
- 4–6 years in applied deep learning (post academia)
- Strong foundation in: Python + PyTorch; CV workflows (classification, detection, segmentation); transformer architectures & attention; VLMs or multimodal learning
- Bachelor's or Master's degree in CS, AI, EE, Biomedical Engg, or related field
10. How to Apply
Send the following to info@sciverse.co.in
Subject: Application – AI Research Engineer (4–8 Yrs, CV + VLM)
Include:
- Your updated CV
- GitHub / Portfolio
- Short write-up on a model or pipeline you built and why you're proud of it
Or apply directly via LinkedIn, but email applications get faster visibility.
Let's build AI that sees, understands, and impacts lives.
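Section 3 above lists IoU and the Dice coefficient as the evaluation metrics for segmentation work. For reference, a minimal PyTorch implementation of both for binary masks might look like this (the 0.5 threshold and epsilon are illustrative defaults):

```python
# Minimal binary-mask segmentation metrics: Dice coefficient and IoU.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for probability maps or binary masks."""
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    intersection = (pred * target).sum()
    return float((2 * intersection + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for probability maps or binary masks."""
    pred = (pred > 0.5).float()
    target = (target > 0.5).float()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return float((intersection + eps) / (union + eps))

pred = torch.rand(1, 256, 256)                    # toy predicted mask probabilities
target = (torch.rand(1, 256, 256) > 0.5).float()  # toy ground-truth mask
print(dice_coefficient(pred, target), iou(pred, target))
```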
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
As a Python Machine Learning & AI Developer at ChicMic Studios, you will be an integral part of our dynamic team, bringing your expertise and experience to develop cutting-edge web applications using the Django and Flask frameworks. Your primary responsibilities will include designing and implementing RESTful APIs with Django Rest Framework (DRF), deploying and optimizing applications on AWS services, and integrating AI/ML models into existing systems.
You will be expected to create scalable machine learning models using PyTorch, TensorFlow, and scikit-learn; implement transformer architectures such as BERT and GPT for NLP and advanced AI use cases; and optimize models through techniques such as hyperparameter tuning, pruning, and quantization. Additionally, you will deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker, ensuring the scalability, performance, and reliability of both applications and models. Collaborating with cross-functional teams to analyze requirements and deliver technical solutions, and staying up to date with the latest industry trends in AI/ML, will also be key aspects of your role. Your ability to write clean, efficient code following best practices, conduct code reviews, and provide constructive feedback to peers will contribute to the success of our projects.
To be successful in this role, you should possess a Bachelor's degree in Computer Science, Engineering, or a related field, with at least 3 years of professional experience as a Python developer. Proficiency in Python, Django, Flask, and AWS services is required, along with expertise in machine learning frameworks, transformer architectures, and database technologies. Familiarity with MLOps practices, front-end technologies, and strong problem-solving skills are also desirable qualities for this position.
If you are passionate about leveraging your Python development skills and AI expertise to drive innovation and deliver impactful solutions, we encourage you to apply and become part of our innovative team at ChicMic Studios.
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – NLP Focus
Location: Noida / Gurugram
Type: Full-time (hiring for our client)
About The Role
We are looking for a talented and experienced AI/NLP Engineer to join our client's innovative and fast-paced team. The ideal candidate is passionate about cutting-edge AI applications and has a proven track record in Natural Language Processing (NLP), Machine Learning (ML), and Generative AI. You'll play a critical role in designing, fine-tuning, deploying, and optimizing AI models for real-world applications. If you enjoy solving complex business problems using scalable AI solutions and collaborating with cross-functional teams, we'd love to hear from you.
Key Responsibilities
- Model Development & Fine-tuning: Customize, fine-tune, and optimize state-of-the-art NLP models (e.g., Transformer-based, Conformer-based, VITS). Train models from scratch for business-specific challenges.
- Generative AI & Prompt Engineering: Design effective prompts tailored to user and business needs. Build similarity models (e.g., BERT, XLNet, cross-encoders) for LLM-based Retrieval-Augmented Generation (RAG).
- Model Deployment & Optimization: Deploy models at scale using Docker, Kubernetes, and AWS. Optimize performance using vLLM (PagedAttention), quantization, and adapter-based tuning (LoRA, QLoRA).
- Performance Monitoring & Pipeline Development: Evaluate and enhance model performance (accuracy, latency, scalability). Build robust ML pipelines using tools like Airflow and AWS Batch.
- Cross-functional Collaboration: Work closely with data scientists, engineers, and product teams to integrate NLP models into production systems.
Required Skills & Experience
- Experience: 6+ years in NLP/ML with deep expertise in fine-tuning and transfer learning.
- NLP Specialization: ASR, TTS, or custom LLM architectures (e.g., auto-regressive transformers, RWKV).
- Generative AI: Skilled in designing prompts and leveraging LLMs for business use cases.
- Programming: Strong in Python; experience in model implementation and pipeline building.
- Deployment: Hands-on with Docker, Kubernetes, and cloud platforms (AWS preferred).
- Frameworks & Tools: TensorFlow, PyTorch, Hugging Face, NLTK, SpaCy, Gensim.
Bonus Skills (Nice To Have)
- Cloud & MLOps: AWS Lambda, Step Functions, Batch, GCP, Azure, or NeMo.
- Scalability: Experience in managing and scaling large AI deployments in production.
Preferred Qualifications
- Bachelor's or Master's in Computer Science, Data Science, Engineering, or a related field.
- Experience working with large datasets and building robust preprocessing pipelines.
- Strong grasp of neural networks and advanced deep learning architectures for NLP.
(ref:hirist.tech)
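The "similarity models (e.g., BERT, XLNet, cross-encoders) for LLM-based RAG" item above usually shows up as a reranking step after first-stage retrieval. Here is a minimal sketch with the sentence-transformers CrossEncoder; the query, passages, and model choice are illustrative, not from the posting.

```python
# Sketch: rerank retrieved passages with a cross-encoder before handing them to the LLM.
from sentence_transformers import CrossEncoder

query = "How do I reset a customer's password?"
candidates = [
    "Passwords can be reset from the admin console under Users > Security.",
    "Our quarterly report covers revenue growth across regions.",
    "Use the self-service portal to trigger a password-reset email.",
]

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, passage) for passage in candidates])

for passage, score in sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True):
    print(f"{score:.3f}  {passage}")  # highest-scoring passages go into the prompt
```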
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Description Quantanite is a business process outsourcing (BPO) and customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We’re an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams and are constantly looking for new colleagues to join us who share our values, passion, and appreciation for diversity Job Description About the Role We are seeking a highly skilled Senior AI Engineer with deep expertise in Agentic frameworks, Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, MLOps/LLMOps, and end-to-end GenAI application development. In this role, you will design, develop, fine-tune, deploy, and optimize state-of-the-art AI solutions across diverse enterprise use cases including AI Copilots, Summarization, Enterprise Search, and Intelligent Tool Orchestration. Key Responsibilities Develop and Fine-Tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques. Build RAG Pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant. Implement and Orchestrate Agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications. Deploy Models at Scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and Quantization methods. Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers. Ensure Security and Compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools. Optimize and Monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation. Work with AWS Services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions. Contribute to Use Cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems. Cross-functional Collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products. Qualifications 5+ years of experience in AI/ML roles, focusing on LLM agent development, data, science workflows, and system deployment. A Bachelor's degree in Computer Science, Software Engineering, or a related field Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models. Proficiency in designing scalable solutions using LangChain and vector databases. Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini). Strong expertise in Prompt Engineering, Chain-of-Thought reasoning, and Fine-Tuning methods. 
Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant). Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks. Model deployment skills using HuggingFace, vLLM, Ollama, and handling LoRA/QLoRA, PEFT, GGUF models. Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch. Strong coding ability in Python or similar programming languages. Experience with MLOps/LLMOps for monitoring, evaluation, and cost management. Familiarity with security standards: guardrails, PII protection, secure API interactions. Use Case Delivery Experience: Proven record of delivering AI Copilots, Summarization engines, or Enterprise GenAI applications. Additional Information Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. Future development: At Quantanite, you’ll have a personal development plan to help you improve in the areas you’re looking to develop over the coming years. Your manager will dedicate time and resources to supporting you in getting you to the next level. You’ll also have the opportunity to progress internally. As a fast-growing organization, our teams are growing, and you’ll have the chance to take on more responsibility over time. So, if you’re looking for a career full of purpose and potential, we’d love to hear from you!
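To illustrate the RAG-pipeline work this posting describes, here is a minimal, hedged sketch of chunk embedding and vector search using sentence-transformers with FAISS. The embedding model, index type, and sample chunks are assumptions for demonstration; a production system would use one of the managed vector stores named above.

```python
# Minimal sketch: embed document chunks and retrieve them with FAISS for a RAG pipeline.
# Assumes: pip install sentence-transformers faiss-cpu numpy
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

# Small general-purpose embedding model (assumed choice for the example).
embedder = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "Invoices are processed within two business days of receipt.",
    "Support tickets can be escalated to tier 2 after 24 hours.",
    "The onboarding checklist covers account setup and security training.",
]

# Encode chunks and build an exact inner-product index over normalized vectors
# (cosine similarity); approximate indexes would replace IndexFlatIP at scale.
embeddings = embedder.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = "How long does invoice processing take?"
query_vec = embedder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), k=2)

# The retrieved chunks would then be passed to the LLM prompt as grounding context.
for score, idx in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[idx]}")
```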
Posted 2 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business. Job Description Machine Learning Engineer – RAG & Fine-Tuning This role requires working from our local Hyderabad office 2-3x a week. Location: Hyderabad, Telangana, India About Abc Fitness ABC Fitness (ABC) is the global market leader in providing technology solutions to the fitness industry. Built on a 40+ year reputation of excellence, ABC helps fitness providers of all sizes and backgrounds to turn their visions into seamless reality. Founded in 1981, ABC serves 40 million+ members globally, processing over $11B+ in payments annually for 31,000 clubs across 92+ countries. Our integrated suite includes best-of-breed platforms: Evo, Glofox, Ignite, and Trainerize. As a Thoma Bravo portfolio company, ABC is backed by the leading private equity firm focused on enterprise software. Learn more at abcfitness.com. About The Team The AI Platform Engineering team at ABC builds scalable, high-performance AI systems that power next-generation fitness technology. We specialize in retrieval-augmented generation (RAG) architectures and fine-tuning methodologies to deliver context-aware, cost-efficient AI solutions. As our Machine Learning Engineer, you will be responsible for all retrieval and intelligence behind the LLM, delivering performant, low-cost, high-context AI features. At ABC, we love entrepreneurs because we are entrepreneurs. We roll our sleeves up, we act fast, and we learn together. What You’ll Do Handle embeddings and chunking strategies to optimize document and data retrieval for GenAI-powered features. Manage vector stores and retrieval workflows using leading vector databases (Pinecone, FAISS, Weaviate, Azure AI Search) to ensure efficient, scalable access to unstructured and structured data. Fine-tune small and large language models using frameworks such as HuggingFace and OpenAI APIs, tailoring models to domain-specific requirements and improving performance on targeted tasks. Optimize cost and reduce latency by implementing best practices for token management, model evaluation, and cloud resource utilization. Collaborate with engineering, product, and data teams to integrate RAG pipelines into production systems, ensuring reliability, scalability, and security. Stay up-to-date with the latest advancements in retrieval-augmented generation, vector search, and LLM fine-tuning, applying new techniques to improve system performance and user experience. What You’ll Need 4–7 years of experience in machine learning or AI engineering, with a proven track record in RAG, vector search, and LLM fine-tuning. Deep expertise with vector databases such as Pinecone, FAISS, Weaviate, or Azure AI Search, including experience designing retrieval workflows and managing embeddings. Familiarity with HuggingFace and OpenAI fine-tuning APIs, and strong understanding of chunking strategies for optimizing retrieval. Proficiency in Python and experience with ML frameworks (PyTorch, TensorFlow) and cloud platforms (AWS, Azure). Understanding of token management, evaluation tuning, and cost optimization for large-scale AI deployments. Strong problem-solving skills, a collaborative mindset, and the ability to communicate complex technical concepts to both technical and non-technical stakeholders. AND IT’S GREAT TO HAVE Experience with NLP, NLU, and NLG techniques for conversational AI or information retrieval. 
Exposure to MLOps tools for model monitoring, evaluation, and deployment (MLflow, Weights & Biases). Experience with model compression, quantization, or other efficiency techniques. Certifications in AWS Machine Learning Specialty or Microsoft AI Engineer. WHAT’S IN IT FOR YOU: Purpose-led company with a Values-focused culture – Best Life, One Team, Growth Mindset Time Off – competitive PTO plans with 15 Earned accrued leave, 12 days Sick leave, and 12 days Casual leave per year 11 Holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, and including EAP counseling Life Insurance and Personal Accident Insurance Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16 Support for working women with financial aid towards crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers. We’re committed to diversity and passion, and encourage you to apply, even if you don’t demonstrate all the listed skillsets! ABC’S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person’s diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
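To make the chunking-strategy responsibility in this role concrete, here is a small illustrative sketch of fixed-size chunking with overlap before embedding. The window and overlap sizes are arbitrary assumptions that would normally be tuned against retrieval quality and token budgets.

```python
# Minimal sketch: split a document into overlapping word-window chunks before embedding.
# Window/overlap sizes are illustrative; in practice they are tuned per corpus and model.
from typing import List


def chunk_text(text: str, window: int = 200, overlap: int = 40) -> List[str]:
    """Return overlapping chunks of roughly `window` words each.

    Overlap preserves context that would otherwise be cut at chunk boundaries,
    at the cost of some duplicated tokens in the index.
    """
    words = text.split()
    if window <= overlap:
        raise ValueError("window must be larger than overlap")
    chunks = []
    step = window - overlap
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + window])
        if chunk:
            chunks.append(chunk)
        if start + window >= len(words):
            break
    return chunks


if __name__ == "__main__":
    doc = "word " * 1000  # stand-in for a real document
    pieces = chunk_text(doc, window=200, overlap=40)
    print(len(pieces), "chunks; first chunk has", len(pieces[0].split()), "words")
```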
Posted 2 weeks ago
4.0 years
0 Lacs
India
Remote
Location: India Remote Employment Type: Full-time Experience Level: Mid to Senior (4-5+ years) Date of Joining : Required Immediate Joiners About UltraSafeAI UltraSafeAI is a US-based technology company at the forefront of developing secure, reliable, and explainable AI systems. We specialize in proprietary AI technologies including advanced LLMs, CNNs, VLLMs, intelligent agents, computer vision systems, and cutting-edge ML algorithms. Our focus is on B2B AI adoption, providing end-to-end integration using our proprietary technology stack to automate entire business processes. We create intelligent solutions that prioritize safety, transparency, and human alignment across various industries including healthcare, finance, legal, and enterprise services. Our mission is to enable seamless AI adoption while maintaining the highest standards of safety and ethical considerations. Position Overview We're seeking an experienced AI Research Engineer specializing in model training, fine-tuning, and optimization of large language models (LLMs), vision language models (VLMs), and convolutional neural networks (CNNs). The ideal candidate will have deep expertise in training and fine-tuning foundation models using advanced techniques and frameworks, with particular emphasis on reinforcement learning approaches for alignment. As an AI Research Engineer at UltraSafeAI, you'll work on developing and enhancing our proprietary models, creating domain-specific adaptations, and optimizing inference performance. You'll collaborate with our engineering team to build the core AI capabilities that power our enterprise solutions and set new standards for AI performance and safety in business applications. Key Responsibilities ● Train and fine-tune large language models (LLMs) and vision language models (VLMs) using state-of-the-art techniques ● Implement and improve reinforcement learning methods for model alignment, including DPO, PPO, and GRPO ● Develop and optimize CNNs for specialized computer vision tasks in enterprise contexts ● Work with distributed training frameworks for efficient model development ● Optimize models for inference using libraries like VLLM, SGLANG, and Triton Inference Server ● Implement techniques for reducing model size while maintaining performance (quantization, distillation, pruning) ● Create domain-specific adaptations of foundation models for vertical industry applications ● Design and execute experiments to evaluate model performance and alignment ● Develop benchmarks and metrics to measure improvements in model capabilities ● Collaborate with data science team on dataset curation and preparation for training ● Document model architectures, training procedures, and experimental results ● Stay current with the latest research in model training and alignment techniques Required Qualifications ● 4-5+ years of professional experience in AI/ML engineering with focus on model training ● Strong expertise in training and fine-tuning large language models ● Experience with major deep learning frameworks (PyTorch, TensorFlow, JAX) ● Hands-on experience with model training libraries and frameworks (Transformers, NeMo, Megatron-LM, TRL) ● Practical implementation of reinforcement learning techniques for model alignment (DPO, PPO, GRPO) ● Experience optimizing models for efficient inference ● Strong understanding of distributed training techniques for large models ● Background in computer vision model development (CNNs, vision transformers) ● Excellent programming skills in Python and related 
data science tools ● Experience with cloud infrastructure for ML workloads (AWS, GCP, or Azure) ● Strong problem-solving skills and scientific mindset ● Excellent documentation and communication abilities Highly Desirable ● Experience with multimodal models combining text, vision, and other modalities ● Knowledge of model quantization techniques (QLoRA, GPTQ, AWQ) ● Experience with VLLM, SGLANG, or Triton Inference Server for optimized inference ● Background in prompt engineering and instruction tuning ● Familiarity with RLHF (Reinforcement Learning from Human Feedback) ● Experience with model evaluation and red-teaming ● Knowledge of AI safety and alignment research ● Experience with domain-specific model adaptation for industries like healthcare, finance, or legal ● Background in research with publications or contributions to open-source ML projects ● Experience with MLOps tools and practices for model lifecycle management Why Join UltraSafeAI? ● Create Proprietary AI Models: Develop the core AI technologies that power our enterprise solutions ● 100% Remote Work: Work remotely with our US-based company with flexible hours ● B2B Impact: Help shape AI models that solve real business problems across industries ● Cutting-Edge Research: Work on the frontier of AI model development and alignment ● Continuous Learning: Regular knowledge sharing sessions and education stipend ● Collaborative Team: Work with talented researchers and engineers focused on innovation ● Work-Life Balance: Flexible PTO policy and respect for personal time ● Career Growth: Clear paths for advancement in a rapidly growing company ● Industry-Leading Infrastructure: Access to high-performance computing resources for model training
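As a small illustration of the parameter-efficient fine-tuning work listed in this role, here is a hedged sketch of attaching LoRA adapters to a causal LM with Hugging Face PEFT. The base model name, rank, and target modules are assumptions chosen for a small public model and would differ for a proprietary or larger model.

```python
# Minimal sketch: wrap a causal LM with LoRA adapters using Hugging Face PEFT.
# Assumes: pip install transformers peft torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"  # small public model used purely for illustration

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA injects low-rank trainable matrices into selected projection layers,
# so only a tiny fraction of parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=8,                         # rank of the low-rank update
    lora_alpha=16,               # scaling factor
    lora_dropout=0.05,
    target_modules=["c_attn"],   # GPT-2's fused attention projection (model-specific assumption)
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here the wrapped model plugs into a standard Trainer or TRL training loop,
# and the adapter weights can be merged or shipped separately after training.
```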
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Objectives of this role Design and develop efficient computer vision applications for the security and surveillance domain. Develop computer vision applications and algorithms for deploying on low-power embedded devices. Collaborate with firmware engineers, front-end engineers, QA engineers and architects on production systems and applications. Identify differences in data distribution that could potentially affect model performance in real-world applications. Ensure algorithms generate accurate predictions. Stay up to date with developments in the machine learning industry. Perform data versioning as well as model versioning of the collected data and developed models. Responsibilities Skills and qualifications Extensive math and computer skills, with a deep understanding of probability, statistics, and algorithms. Familiarity with deploying deep learning models on low-power embedded devices. Good knowledge of programming with C and C++ is a must. Proven record of working with AI accelerators, NPUs and quantization frameworks like OpenVINO or Neural Magic. In-depth knowledge of TensorFlow or PyTorch. Familiarity with ArmNN, Kendryte NNcase, Maix Sipeed or RKNN toolkits. Good knowledge of version control systems like Git, Azure Repos. Familiarity with data structures, data modeling, and software architecture. Impeccable analytical and problem-solving skills. Qualifications Preferred qualifications Proven experience as a machine learning engineer or similar role. Bachelor’s degree (or equivalent) in computer science, mathematics, or related field. About Us Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments – powered by our Honeywell Forge software – that help make the world smarter, safer and more sustainable.
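To illustrate the kind of model-slimming step implied by the low-power deployment requirements above, here is a hedged sketch of post-training dynamic quantization in PyTorch followed by ONNX export. The toy model and opset are assumptions; a real pipeline for this role would hand off to the vendor toolkits named in the posting (e.g., OpenVINO or RKNN).

```python
# Minimal sketch: post-training dynamic quantization of a small PyTorch model,
# then ONNX export as a handoff format for embedded/accelerator toolchains.
# Assumes: pip install torch
import torch
import torch.nn as nn

# Toy classifier standing in for a real vision backbone.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Dynamic quantization converts Linear weights to int8 and quantizes activations
# on the fly, shrinking the model and speeding up CPU inference with no retraining.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

dummy = torch.randn(1, 3, 32, 32)
print("fp32 output:", model(dummy)[0, :3])
print("int8 output:", quantized(dummy)[0, :3])

# Export the float model to ONNX; downstream tools typically consume ONNX and apply
# their own hardware-specific quantization and compilation.
torch.onnx.export(model, dummy, "toy_classifier.onnx", opset_version=13)
```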
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview: We are seeking an Embedded AI Software Engineer with deep expertise in writing software for resource-constrained edge hardware. This role is critical to building optimized pipelines that leverage media encoders/decoders, hardware accelerators, and AI inference runtimes on platforms like NVIDIA Jetson, Hailo, and other edge AI SoCs. You will be responsible for developing highly efficient, low-latency modules that run on embedded devices, involving deep integration with NVIDIA SDKs (Jetson Multimedia, DeepStream, TensorRT) and broader GStreamer pipelines. Key Responsibilities: Media Pipeline & AI Model Integration Implement hardware-accelerated video processing pipelines using GStreamer, V4L2, and custom media backends. Integrate AI inference engines using NVIDIA TensorRT, DeepStream SDK, or similar frameworks (ONNX Runtime, OpenVINO, etc.). Profile and optimize model loading, preprocessing, postprocessing, and buffer management for edge runtime. System-Level Optimization Design software within strict memory, compute, and power budgets specific to edge hardware. Utilize multimedia capabilities (ISP, NVENC/NVDEC) and leverage DMA, zero-copy mechanisms where applicable. Implement fallback logic and error handling for edge cases in live deployment conditions. Platform & Driver-Level Work Work closely with kernel modules, device drivers, and board support packages to tune performance. Collaborate with hardware and firmware teams to validate system integration. Contribute to device provisioning, model updates, and boot-up behavior for AI edge endpoints. Required Skills & Qualifications: Educational Background: Bachelor’s or Master’s degree in Computer Engineering, Electronics, Embedded Systems, or related fields. Professional Experience: 2–4 years of hands-on development for edge/embedded systems using C++ (mandatory). Demonstrated experience with NVIDIA Jetson or equivalent edge AI hardware platforms. Technical Proficiency: Proficient in C++11/14/17 and multi-threaded programming. Strong understanding of video codecs, media IO pipelines, and encoder/decoder frameworks. Experience with GStreamer, V4L2, and multimedia buffer handling. Familiarity with TensorRT, DeepStream, CUDA, and NVIDIA’s multimedia APIs. Exposure to other runtimes like HailoRT, OpenVINO, or Coral Edge TPU SDK is a plus. Bonus Points Familiarity with build systems (CMake, Bazel), cross-compilation, and Yocto. Understanding of AI model quantization, batching, and layer fusion for performance. Prior experience working with camera bring-up, video streaming, and inference on live feeds. Contact Information: To apply, please send your resume and portfolio details to hire@condor-ai.com with “Application: Embedded AI Software Engineer” in the subject line. About Condor AI: Condor is an AI engineering company where we use artificial intelligence models to deploy solutions in the real world. Our core strength lies in Edge AI, combining custom hardware with optimized software for fast, reliable, on device intelligence. We work across smart cities, industrial automation, logistics, and security, with a team that brings over a decade of experience in AI, embedded systems, and enterprise grade solutions. We operate lean, think globally, and build for production from system design to scaled deployment.
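Although this role is primarily C++, inference steps are often prototyped in Python before being ported; below is a minimal, assumed sketch of latency-checking an exported ONNX model with ONNX Runtime. The model path is a placeholder, and the shape handling assumes dynamic dimensions can be set to 1 for a smoke test.

```python
# Minimal sketch: latency-checking an ONNX model with ONNX Runtime in Python before
# porting the pipeline to C++ (ONNX Runtime exposes an equivalent C++ API).
# Assumes: pip install onnxruntime numpy; "model.onnx" is a placeholder path.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape, input_meta.type)

# Fabricate an input matching the model's shape (dynamic dims assumed = 1 here).
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.random.rand(*shape).astype(np.float32)

# Warm up, then time a batch of runs to get a rough per-frame latency figure.
for _ in range(3):
    session.run(None, {input_meta.name: dummy})

start = time.perf_counter()
runs = 50
for _ in range(runs):
    outputs = session.run(None, {input_meta.name: dummy})
elapsed = time.perf_counter() - start

print(f"avg latency: {1000 * elapsed / runs:.2f} ms, output shape: {outputs[0].shape}")
```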
Posted 2 weeks ago
0 years
7 - 8 Lacs
Hyderābād
Remote
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - ML/CV Ops Engineer! We are seeking a highly skilled ML CV Ops Engineer to join our AI Engineering team. This role is focused on operationalizing Computer Vision models—ensuring they are efficiently trained, deployed, monitored, and retrained across scalable infrastructure or edge environments. The ideal candidate has deep technical knowledge of ML infrastructure, DevOps practices, and hands-on experience with CV pipelines in production. You’ll work closely with data scientists, DevOps, and software engineers to ensure computer vision models are robust, secure, and production-ready always. Key Responsibilities: End-to-End Pipeline Automation: Build and maintain ML pipelines for computer vision tasks (data ingestion, preprocessing, model training, evaluation, inference). Use tools like MLflow, Kubeflow, DVC, and Airflow to automate workflows. Model Deployment & Serving: Package and deploy CV models using Docker and orchestration platforms like Kubernetes. Use model-serving frameworks (TensorFlow Serving, TorchServe, Triton Inference Server) to enable real-time and batch inference. Monitoring & Observability: Set up model monitoring to detect drift, latency spikes, and performance degradation. Integrate custom metrics and dashboards using Prometheus, Grafana, and similar tools. Model Optimization: Convert and optimize models using ONNX, TensorRT, or OpenVINO for performance and edge deployment. Implement quantization, pruning, and benchmarking pipelines. Edge AI Enablement (Optional but Valuable): Deploy models on edge devices (e.g., NVIDIA Jetson, Coral, Raspberry Pi) and manage updates and logs remotely. Collaboration & Support: Partner with Data Scientists to productionize experiments and guide model selection based on deployment constraints. Work with DevOps to integrate ML models into CI/CD pipelines and cloud-native architecture. Qualifications we seek in you! Minimum Qualifications Bachelor’s or Master’s in Computer Science, Engineering, or a related field. Sound experience in ML engineering, with significant work in computer vision and model operations. Strong coding skills in Python and familiarity with scripting for automation. Hands-on experience with PyTorch, TensorFlow, OpenCV, and model lifecycle tools like MLflow, DVC, or SageMaker. 
Solid understanding of containerization and orchestration (Docker, Kubernetes). Experience with cloud services (AWS/GCP/Azure) for model deployment and storage. Preferred Qualifications: Experience with real-time video analytics or image-based inference systems. Knowledge of MLOps best practices (model registries, lineage, versioning). Familiarity with edge AI deployment and acceleration toolkits (e.g., TensorRT, DeepStream). Exposure to CI/CD pipelines and modern DevOps tooling (Jenkins, GitLab CI, ArgoCD). Contributions to open-source ML/CV tooling or experience with labeling workflows (CVAT, Label Studio). Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Lead Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 16, 2025, 3:14:00 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
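As a small, assumed sketch of the experiment-tracking side of the pipeline work described above, here is a minimal MLflow logging example; the experiment name, parameters, and metric values are invented for illustration only.

```python
# Minimal sketch: track a (mock) computer-vision training run with MLflow so that
# parameters, metrics, and artifacts are queryable from the tracking UI later.
# Assumes: pip install mlflow
import random
import mlflow

mlflow.set_experiment("cv-detector-demo")  # experiment name is an assumption

with mlflow.start_run(run_name="baseline"):
    # Hyperparameters for this run.
    mlflow.log_param("backbone", "resnet50")
    mlflow.log_param("learning_rate", 1e-4)
    mlflow.log_param("epochs", 5)

    # Stand-in training loop: log a metric per epoch so drift or regressions are visible.
    for epoch in range(5):
        val_map = 0.5 + 0.08 * epoch + random.uniform(-0.01, 0.01)  # mock mAP
        mlflow.log_metric("val_mAP", val_map, step=epoch)

    # Any file (confusion matrix, config, exported ONNX) can be attached as an artifact.
    with open("run_notes.txt", "w") as f:
        f.write("baseline run, mock metrics only\n")
    mlflow.log_artifact("run_notes.txt")
```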
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: TrueFan is at the forefront of AI-driven content generation, leveraging cutting-edge generative models to build next-generation products. Our mission is to redefine the content generation space through advanced AI technologies, including deep generative models, text-to-video, image-to-video, and lipsync generation. We are looking for a Senior Machine Learning Engineer with deep expertise in generative AI, including diffusion models, 3D VAEs and GANs, to drive our research and development in AI-generated content and real-time media synthesis. Job Description: As a Senior Machine Learning Engineer, you will be responsible for designing, developing, and deploying cutting-edge models for end-to-end content generation, including AI-driven image/video generation, lipsyncing, and multimodal AI systems. You will work on the latest advancements in deep generative modeling to create highly realistic and controllable AI-generated media. Responsibilities: Research & Develop : Design and implement state-of-the-art generative models, including Diffusion Models, 3D VAEs and GANs for AI-powered media synthesis. End-to-End Content Generation : Build and optimize AI pipelines for high-fidelity image/video generation and lipsyncing using diffusion and autoencoder models. Speech & Video Synchronization : Develop advanced lipsyncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Real-Time AI Systems : Implement and optimize models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques. Scaling & Production Deployment : Work closely with software engineers to deploy models efficiently on cloud-based architectures (AWS, GCP, or Azure). Collaboration & Research : Stay ahead of the latest trends in deep generative models, diffusion models, and transformer-based vision systems to enhance AI-generated content quality. Experimentation & Validation : Design and conduct experiments to evaluate model performance, improve fidelity, realism, and computational efficiency, and refine model architectures. Code Quality & Best Practices : Participate in code reviews, improve model efficiency, and document research findings to enhance team knowledge-sharing and product development. Qualifications: Bachelor's or Master’s degree in Computer Science, Machine Learning, or a related field. 3+ years of experience working with deep generative models, including Diffusion Models, 3D VAEs, GANs and autoregressive models. Strong proficiency in Python and deep learning frameworks such as PyTorch. Expertise in multi-modal AI, text-to-image and image-to-video generation, and audio-to-lipsync generation. Strong understanding of machine learning principles and statistical methods. Good to have experience in real-time inference optimization, cloud deployment, and distributed training. Strong problem-solving abilities and a research-oriented mindset to stay updated with the latest AI advancements. Familiarity with generative adversarial techniques, reinforcement learning for generative models, and large-scale AI model training. Preferred Qualifications: Experience with transformers and vision-language models (e.g., CLIP, BLIP, GPT-4V). Background in text-to-video generation, lipsync generation and real-time synthetic media applications. Experience in cloud-based AI pipelines (AWS, Google Cloud, or Azure) and model compression techniques (quantization, pruning, distillation). 
Contributions to open-source projects or published research in AI-generated content, speech synthesis, or video synthesis. How to Apply: Interested candidates should submit their resume and a cover letter detailing their experience with generative models and their contributions to relevant projects at meghna.sidda@true-fan.in
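For flavour, here is a hedged sketch of the kind of diffusion-based generation loop referenced in this role, using Hugging Face diffusers; the checkpoint name, prompt, and inference settings are assumptions, and a production system would add the optimization, video, and lipsync stages the posting describes.

```python
# Minimal sketch: text-to-image generation with a pretrained diffusion pipeline.
# Assumes: pip install diffusers transformers accelerate torch, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # assumed public checkpoint for illustration

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a studio portrait of a golden retriever wearing sunglasses, soft lighting"

# Fewer denoising steps trade quality for latency; guidance_scale controls prompt adherence.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("portrait.png")
```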
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About The Role As an ideal candidate, you must be a problem solver with solid experience and knowledge in AI/ML development. You must be passionate about understanding the business context for features built to drive better customer experience and adoption. Requirements 3-4 years of professional experience in AI/ML development, with a strong focus on Computer Vision techniques such as CNNs, object detection, segmentation, and image processing. Proficient in Python and deep learning frameworks like TensorFlow and PyTorch. Solid understanding of classical and modern machine learning algorithms (supervised, unsupervised, reinforcement learning). Experience with data preprocessing, annotation, and augmentation techniques for image datasets. Familiarity with model evaluation metrics, hyperparameter tuning, and optimization strategies. Practical experience deploying ML models in cloud environments (AWS, GCP, Azure) or on edge devices. Knowledge of MLOps practices, including CI/CD pipelines, model versioning, and monitoring. Strong debugging and problem-solving skills, with the ability to work effectively in cross-functional teams. Understanding of software engineering best practices, including version control (Git), testing, and documentation. Experience with containerization technologies like Docker; familiarity with Kubernetes is a plus. Comfortable working with SQL/NoSQL databases for data storage and retrieval. Preferred Skills Experience with model compression and optimization techniques such as quantization and pruning for deployment efficiency. Exposure to other AI domains like NLP or time series analysis is a bonus. Strong communication skills to explain complex technical concepts to diverse audiences. (ref:hirist.tech)
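To make the compression techniques mentioned under Preferred Skills concrete, here is a minimal, assumed sketch of magnitude pruning on a convolutional layer with PyTorch's pruning utilities; the sparsity level is arbitrary and would be validated against accuracy on the target dataset before deployment.

```python
# Minimal sketch: L1 unstructured (magnitude) pruning of a conv layer in PyTorch.
# Assumes: pip install torch
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

# Zero out the 30% of weights with the smallest absolute value.
prune.l1_unstructured(conv, name="weight", amount=0.3)

sparsity = float((conv.weight == 0).sum()) / conv.weight.nelement()
print(f"weight sparsity after pruning: {sparsity:.1%}")

# Make the pruning permanent (removes the mask reparametrization); the layer can then
# be fine-tuned to recover accuracy and exported for deployment.
prune.remove(conv, "weight")

x = torch.randn(1, 3, 64, 64)
print("output shape:", conv(x).shape)
```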
Posted 2 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities Design and implement advanced solutions utilizing Large Language Models (LLMs). Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions. Conduct research and stay informed about the latest developments in generative AI and LLMs. Develop and maintain code libraries, tools, and frameworks to support generative AI development. Participate in code reviews and contribute to maintaining high code quality standards. Engage in the entire software development lifecycle, from design and testing to deployment and maintenance. Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility. Possess strong analytical and problem-solving skills. Demonstrate excellent communication skills and the ability to work effectively in a team environment. Primary Skills Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation. Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis. Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities. Familiarity with Open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization. Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred. Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git. Tech Skills (10+ Years Experience): Machine Learning (ML) & Deep Learning Solid understanding of supervised and unsupervised learning. Proficiency with deep learning architectures like Transformers, LSTMs, RNNs, etc. Generative AI: Hands-on experience with models such as OpenAI GPT-4, Anthropic Claude, LLaMA, etc. Knowledge of fine-tuning and optimizing large language models (LLMs) for specific tasks. Natural Language Processing (NLP): Expertise in NLP techniques, including text preprocessing, tokenization, embeddings, and sentiment analysis. Familiarity with NLP tasks such as text classification, summarization, translation, and question-answering. Retrieval-Augmented Generation (RAG): In-depth understanding of RAG pipelines, including knowledge retrieval techniques like dense/sparse retrieval. Experience integrating generative models with external knowledge bases or databases to augment responses. Data Engineering: Ability to build, manage, and optimize data pipelines for feeding large-scale data into AI models. Search and Retrieval Systems: Experience with building or integrating search and retrieval systems, leveraging knowledge of Elasticsearch, AI Search, ChromaDB, PGVector etc. Prompt Engineering: Expertise in crafting, fine-tuning, and optimizing prompts to improve model output quality and ensure desired results. Understanding how to guide large language models (LLMs) to achieve specific outcomes by using different prompt formats, strategies, and constraints. Knowledge of techniques like few-shot, zero-shot, and one-shot prompting, as well as using system and user prompts for enhanced model performance. 
Programming & Libraries: Proficiency in Python and libraries such as PyTorch, Hugging Face, etc. Knowledge of version control (Git), cloud platforms (AWS, GCP, Azure), and MLOps tools. Database Management: Experience working with SQL and NoSQL databases, as well as vector databases. APIs & Integration: Ability to work with RESTful APIs and integrate generative models into applications. Evaluation & Benchmarking: Strong understanding of metrics and evaluation techniques for generative models. (ref:hirist.tech)
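As a lightweight illustration of the prompt-engineering and RAG-grounding skills listed above, here is a sketch of assembling a few-shot, retrieval-grounded prompt in plain Python; the instructions, examples, and retrieved chunks are invented placeholders for whatever a real retriever would return.

```python
# Minimal sketch: assemble a few-shot, retrieval-grounded prompt for an LLM call.
# The retrieved chunks would come from a vector store; here they are placeholders.
FEW_SHOT_EXAMPLES = [
    {
        "question": "What is the notice period for cancelling a subscription?",
        "answer": "According to the policy document, cancellations require 30 days' notice.",
    },
    {
        "question": "Is weekend support available?",
        "answer": "The context does not mention weekend support, so I cannot confirm it.",
    },
]


def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Combine system instructions, few-shot examples, and retrieved context."""
    lines = [
        "You are a support assistant. Answer ONLY from the provided context.",
        "If the context does not contain the answer, say you do not know.",
        "",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        lines += [f"Q: {ex['question']}", f"A: {ex['answer']}", ""]
    lines.append("Context:")
    lines += [f"- {chunk}" for chunk in retrieved_chunks]
    lines += ["", f"Q: {question}", "A:"]
    return "\n".join(lines)


if __name__ == "__main__":
    chunks = ["Refunds are processed within 5-7 business days of approval."]
    print(build_prompt("How long do refunds take?", chunks))
```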
Posted 2 weeks ago
50.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us At Digilytics, we build and deliver easy-to-use AI products to the secured lending and consumer industry sectors. In an ever-crowded world of clever technology solutions looking for a problem to solve, our solutions start with a keen understanding of what creates and what destroys value in our clients' business. Founded by Arindom Basu, the leadership of Digilytics is deeply rooted in leveraging disruptive technology to drive profitable business growth. With over 50 years of combined experience in technology-enabled change, the Digilytics leadership is focused on building a values-first firm that will stand the test of time. We are currently focused on developing a product, Digilytics RevEL, to revolutionise loan origination for secured lending covering mortgages, motor and business lending. The product leverages the latest AI techniques to process loan applications and loan documents to deliver improved customer and colleague experience, while improving productivity and throughput and reducing processing costs. About The Role Digilytics is pioneering the development of intelligent mortgage solutions in International and Indian markets. We are looking for a Data Scientist with strong NLP and computer vision expertise. We are looking for experienced data scientists, who have the aspirations and appetite for working in a start-up environment, and with relevant industry experience to make a significant contribution to our Digilytics™ platform and solutions. The primary focus would be to apply machine learning techniques for data extraction from documents in a variety of formats, including scans and handwritten documents. Responsibilities Develop a learning model for high-accuracy extraction and validation of documents, e.g., in the mortgage industry Work with state-of-the-art language modelling approaches such as transformer-based architectures while integrating capabilities across NLP, computer vision, and machine learning to build robust multi-modal AI solutions Understand the Digilytics™ vision and help in creating and maintaining a development roadmap Interact with clients and other team members to understand client-specific requirements of the platform Contribute to the platform development team and deliver platform releases in a timely manner Liaise with multiple stakeholders and coordinate with our onshore and offshore entities Evaluate and compile the required training datasets from internal and public sources and contribute to the data pre-processing phase. Expected And Desired Skills Either of the following deep learning frameworks: PyTorch (preferred) or TensorFlow Good understanding of designing, developing, and optimizing Large Language Models (LLMs), with hands-on experience in leveraging cutting-edge advancements in NLP and generative AI Skilled in customizing LLMs for domain-specific applications through advanced fine-tuning, prompt engineering, and optimization strategies such as LoRA, quantization, and distillation. Knowledge of model versioning, serving, and monitoring using tools like MLflow, FastAPI, Docker, vLLM. 
Python used for analytics applications including data pre-processing, EDA, statistical analysis, machine learning model performance evaluation and benchmarking Good scripting and programming skills to integrate with other external applications Good interpersonal skills and the ability to communicate and explain models Ability to work in unfamiliar business areas and to use your skills to create solutions Ability to both work in and lead a team and to deliver and accept peer review Flexible approach to working environment and hours Experience Between 4-6 years of relevant experience Hands-on experience with Python and/or R Machine Learning Deep Learning (desirable) End-to-end development of a deep learning-based model covering model selection, data preparation, training, hyper-parameter optimization, evaluation, and performance reporting. Proven experience working in both smaller and larger organisations with multicultural exposure Domain and industry experience by serving customers in one or more of these industries - Financial Services, Professional Services, other Consumer Industries Education Background A Bachelor's degree in a field of study such as Computer Science, Mathematics, Statistics, or Data Science with strong programming content from a leading institute. An advanced degree such as a Master's or PhD is an advantage (ref:hirist.tech)
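As an illustration of the model-serving tooling mentioned above (FastAPI, Docker, MLflow, vLLM), here is a minimal, assumed sketch of exposing a document-field-extraction model behind a FastAPI endpoint; the extractor, field names, and module name are stubs standing in for a real model, not the Digilytics implementation.

```python
# Minimal sketch: serve a document-extraction model behind a FastAPI endpoint.
# Assumes: pip install fastapi uvicorn pydantic; the extractor is a stub, not a real model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="doc-extraction-demo")


class ExtractionRequest(BaseModel):
    document_text: str


class ExtractionResponse(BaseModel):
    fields: dict
    model_version: str


def extract_fields(text: str) -> dict:
    """Stub extractor: a real implementation would call the trained NLP/vision model."""
    return {"applicant_name": None, "loan_amount": None, "num_characters": len(text)}


@app.post("/extract", response_model=ExtractionResponse)
def extract(req: ExtractionRequest) -> ExtractionResponse:
    return ExtractionResponse(
        fields=extract_fields(req.document_text),
        model_version="0.1.0-demo",
    )


# Run locally with:  uvicorn serve_extractor:app --reload
# (the module name "serve_extractor" is an assumed filename)
```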
Posted 2 weeks ago