4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a potential candidate for this position, you will be responsible for contributing to cutting-edge AI/ML solutions at Goldman Sachs. The qualifications and attributes you should possess are:
- A Bachelor's, Master's, or PhD degree in Computer Science, Machine Learning, Mathematics, or a related field.
- Preferably 7+ years of AI/ML industry experience for Bachelor's/Master's holders, or 4+ years for PhD holders, with a focus on Language Models.
- A strong foundation in machine learning algorithms, including deep learning architectures such as transformers, RNNs, and CNNs.
- Proficiency in Python and relevant libraries/frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, and scikit-learn.
- Demonstrated expertise in GenAI techniques, including but not limited to Retrieval-Augmented Generation (RAG), model fine-tuning, prompt engineering, AI agents, and evaluation techniques.
- Experience working with embedding models and vector databases.
- Experience with MLOps practices, including model deployment, containerization (Docker, Kubernetes), CI/CD, and model monitoring.
- Strong verbal and written communication skills.
- Curiosity, ownership, and willingness to work in a collaborative environment.
- Proven ability to mentor and guide junior engineers.

Desirable experience that can set you apart from other candidates includes:
- Experience with agentic frameworks (e.g., LangChain, AutoGen) and their application to real-world problems.
- Understanding of scalability and performance optimization techniques for real-time inference, such as quantization, pruning, and knowledge distillation.
- Experience with model interpretability techniques.
- Prior experience in code reviews and architecture design for distributed systems.
- Experience with data governance and data quality principles.
- Familiarity with financial regulations and compliance requirements.

About Goldman Sachs: At Goldman Sachs, the commitment is to help clients, shareholders, and communities grow by leveraging people, capital, and ideas. Established in 1869, Goldman Sachs is a prominent global investment banking, securities, and investment management firm headquartered in New York with offices worldwide. Goldman Sachs is dedicated to fostering diversity and inclusion by providing numerous opportunities for personal and professional growth, including training, development, networks, benefits, wellness, personal finance offerings, and mindfulness programs. To learn more about the culture, benefits, and people at Goldman Sachs, visit GS.com/careers. Goldman Sachs is dedicated to providing reasonable accommodations for candidates with special needs or disabilities during the recruiting process. To learn more about accommodations, visit: https://www.goldmansachs.com/careers/footer/disability-statement.html
Copyright The Goldman Sachs Group, Inc. 2023. All rights reserved.
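The Retrieval-Augmented Generation requirement above is often probed in interviews. As a rough, hedged illustration (not Goldman Sachs code), here is a minimal retrieval step using sentence-transformers embeddings and cosine similarity; the model name, documents, and omitted LLM call are placeholders for whatever stack a team actually uses.

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only).
# Assumes the sentence-transformers package; "all-MiniLM-L6-v2" is just an example model.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Q2 earnings rose 12% on higher trading revenue.",
    "The firm expanded its transaction banking platform in 2022.",
    "Headquarters are located in New York City.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)  # unit-norm vectors

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q                  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("Where is the company headquartered?")
prompt = "Answer using only this context:\n" + "\n".join(context)
# The assembled prompt would then be sent to an LLM; that call is omitted here.
print(prompt)
```

In a production system, the in-memory array would typically be replaced by a vector database and the prompt sent to a hosted or fine-tuned LLM.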
Posted 23 hours ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
As an Artificial Intelligence/Machine Learning Expert, you will be responsible for developing and maintaining web applications using the Django and Flask frameworks. You will design and implement RESTful APIs using Django Rest Framework (DRF) and deploy, manage, and optimize applications on AWS services such as EC2, S3, RDS, Lambda, and CloudFormation. Your role will involve building and integrating APIs for AI/ML models into existing systems and creating scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.

You will implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases and optimize machine learning models through techniques like hyperparameter tuning, pruning, and quantization. Additionally, you will be responsible for deploying and managing machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. It will be crucial for you to ensure the scalability, performance, and reliability of applications and deployed models.

Collaboration with cross-functional teams to analyze requirements and deliver effective technical solutions will be an essential part of your responsibilities. You will also be expected to write clean, maintainable, and efficient code following best practices, conduct code reviews, and provide constructive feedback to peers. Staying up-to-date with the latest industry trends and technologies, particularly in AI/ML, will be necessary to excel in this role.
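Since the posting asks for REST APIs that expose ML models, here is a hedged, minimal sketch of that pattern using Flask and scikit-learn; the toy iris model, the `/predict` route, and the payload shape are illustrative assumptions, not the employer's actual service.

```python
# Minimal sketch: exposing a scikit-learn model through a Flask JSON endpoint.
# The model, route name, and payload shape are illustrative assumptions.
from flask import Flask, jsonify, request
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

app = Flask(__name__)

# Train a toy model at startup; a real service would load a serialized artifact instead.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()           # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    features = [payload["features"]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": int(prediction)})

if __name__ == "__main__":
    app.run(port=5000)
```

The same idea carries over to DRF views or a TorchServe/SageMaker endpoint; only the serving layer changes.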
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Vadodara, Gujarat
On-site
Dharmakit Networks is a premium global IT solutions partner dedicated to innovation and success worldwide. Specializing in website development, SaaS, digital marketing, AI solutions, and more, we help brands turn their ideas into high-impact digital products. Known for blending global standards with deep Indian insight, we are now stepping into our most exciting chapter yet.

Project Ax1 is our next-generation Large Language Model (LLM), a powerful AI initiative designed to make intelligence accessible and impactful for Bharat and the world. Built by a team of AI experts, Dharmakit Networks is committed to developing cost-effective, high-performance AI tailored for India and beyond, enabling enterprises to unlock new opportunities and drive deeper connections. Join us in reshaping the future of AI, starting from India.

As a GPU Infrastructure Engineer, you will be at the core of building, optimizing, and scaling the GPU and AI compute infrastructure that powers Project Ax1, working closely with AI, ML, backend, and full-stack teams to ensure seamless model delivery.

**Key Responsibilities:**
- Design, deploy, and optimize GPU infrastructure for large-scale AI workloads.
- Manage GPU clusters across cloud (AWS, Azure, GCP) and on-prem setups.
- Set up and maintain model CI/CD pipelines for efficient training and deployment.
- Optimize LLM inference using TensorRT, ONNX, Nvidia NVCF, etc. (a brief export sketch follows this listing).
- Manage offline/edge deployments of AI models (e.g., CUDA, Lambda, containerized AI).
- Build and tune data pipelines to support real-time and batch processing.
- Monitor model and infrastructure performance for availability, latency, and cost efficiency.
- Implement logging, monitoring, and alerting using Prometheus, Grafana, ELK, CloudWatch.
- Work closely with AI, ML, backend, and full-stack teams to ensure seamless model delivery.

**Must-Have Skills And Qualifications:**
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Hands-on experience with Nvidia GPUs, CUDA, and deep learning model deployment.
- Strong experience with AWS, Azure, or GCP GPU instance setup and scaling.
- Proficiency in model CI/CD and automated ML workflows.
- Experience with Terraform, Kubernetes, and Docker.
- Familiarity with offline/edge AI, including quantization and optimization.
- Logging and monitoring using tools like Prometheus, Grafana, CloudWatch.
- Experience with backend APIs, data processing workflows, and ML pipelines.
- Experience with Git and collaboration in agile, cross-functional teams.
- Strong analytical and debugging skills.
- Excellent communication, teamwork, and problem-solving abilities.

**Good To Have:**
- Experience with Nvidia NVCF, DeepSpeed, vLLM, Hugging Face Triton.
- Knowledge of FP16/INT8 quantization, pruning, and other optimization tricks.
- Exposure to serverless AI inference (Lambda, SageMaker, Azure ML).
- Contributions to open-source AI infrastructure projects or a strong GitHub portfolio showcasing ML model deployment expertise.
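One common first step toward the inference-optimization work named above (TensorRT/ONNX) is exporting a PyTorch model to ONNX and validating it with ONNX Runtime before any TensorRT-specific tuning. Below is a hedged sketch; the ResNet-18 model and file name are stand-ins, not this team's actual pipeline.

```python
# Sketch: export a PyTorch model to ONNX and check it with ONNX Runtime.
# torchvision's resnet18 stands in for whatever model is actually served.
import numpy as np
import torch
import onnxruntime as ort
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Export with a dynamic batch dimension so the serving layer can batch requests.
torch.onnx.export(
    model, dummy, "resnet18.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# Run the exported graph on CPU; TensorRT or the CUDA provider would replace this in production.
session = ort.InferenceSession("resnet18.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 1000)

# Sanity-check that ONNX Runtime matches the original PyTorch outputs.
with torch.no_grad():
    torch_out = model(dummy).numpy()
np.testing.assert_allclose(outputs[0], torch_out, rtol=1e-3, atol=1e-4)
```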
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You should have expertise in ML/DL, model lifecycle management, and MLOps tools such as MLflow and Kubeflow. Proficiency in Python, TensorFlow, PyTorch, scikit-learn, and Hugging Face models is essential. You must possess strong experience in NLP, fine-tuning transformer models, and dataset preparation. Hands-on experience with cloud platforms like AWS, GCP, and Azure, and with scalable ML deployment tools like SageMaker and Vertex AI, is required. Knowledge of containerization using Docker and Kubernetes, as well as CI/CD pipelines, is expected. Familiarity with distributed computing tools like Spark and Ray, vector databases such as FAISS and Milvus, and model optimization techniques like quantization and pruning is necessary. Additionally, you should have experience in model evaluation, hyperparameter tuning, and model monitoring for drift detection.

As part of your roles and responsibilities, you will be required to design and implement end-to-end ML pipelines from data ingestion to production. Developing, fine-tuning, and optimizing ML models to ensure high performance and scalability is a key aspect of the role. You will be expected to compare and evaluate models using key metrics such as F1-score, AUC-ROC, and BLEU. Automation of model retraining, monitoring, and drift detection will be part of your responsibilities. Collaborating with engineering teams for seamless ML integration, mentoring junior team members, and enforcing best practices are also important aspects of the role.

This is a full-time position with a day shift schedule from Monday to Friday. The total experience required for this role is 4 years, with at least 3 years of experience in Data Science roles. The work location is in person. Application Question: How soon can you join us?
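As a hedged illustration of the model-comparison metrics mentioned above (F1, AUC-ROC), here is a small scikit-learn snippet; the synthetic dataset and the 0.5 decision threshold are assumptions for demonstration only.

```python
# Sketch: computing F1 and AUC-ROC for a binary classifier with scikit-learn.
# The synthetic dataset stands in for real evaluation data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

probs = clf.predict_proba(X_test)[:, 1]   # scores for the positive class
preds = (probs >= 0.5).astype(int)        # hard labels at a 0.5 threshold

print(f"F1:      {f1_score(y_test, preds):.3f}")   # uses hard labels
print(f"AUC-ROC: {roc_auc_score(y_test, probs):.3f}")  # uses scores, threshold-free
```

BLEU would be computed differently (e.g., with nltk or sacrebleu against reference texts) and applies to the NLP generation side of the role rather than classification.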
Posted 2 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description: By applying to this position, your application will be considered for all locations we hire for in the United States.

Annapurna Labs designs silicon and software that accelerates innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago, even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.

Role: AWS Neuron is the complete software stack for AWS Trainium (Trn1/Trn2) and Inferentia (Inf1/Inf2), our cloud-scale machine learning accelerators. This role is for a Machine Learning Engineer on one of our AWS Neuron teams:

The ML Distributed Training team works side by side with chip architects, compiler engineers, and runtime engineers to create, build, and tune distributed training solutions with Trainium instances. Experience with training these large models using Python is a must. FSDP (Fully Sharded Data Parallel), DeepSpeed, NeMo, and other distributed training libraries are central to this work, and extending all of this for the Neuron-based system is key (a minimal FSDP sketch follows this listing).

The ML Frameworks team partners with compiler, runtime, and research experts to make AWS Trainium and Inferentia feel native inside the tools builders already love: PyTorch, JAX, and the rapidly evolving vLLM ecosystem. By weaving the Neuron SDK deep into these frameworks, optimizing operators, and crafting targeted extensions, we unlock every teraflop of Annapurna's AI chips for both training and lightning-fast inference. Beyond kernels, we shape next-generation serving by upstreaming new features and driving scalable deployments with vLLM, Triton, and TensorRT, turning breakthrough ideas into production-ready AI for millions of customers.

The ML Inference team collaborates closely with hardware designers, software optimization experts, and systems engineers to develop and optimize high-performance inference solutions for Inferentia chips. Proficiency in deploying and optimizing ML models for inference using frameworks like TensorFlow, PyTorch, and ONNX is essential. The team focuses on techniques such as quantization, pruning, and model compression to enhance inference speed and efficiency. Adapting and extending popular inference libraries and tools for Neuron-based systems is a key aspect of their work.

Key job responsibilities: You'll join one of our core ML teams - Frameworks, Distributed Training, or Inference - to enhance machine learning capabilities on AWS's specialized AI hardware. Your responsibilities will include improving PyTorch and JAX for distributed training on Trainium chips, optimizing ML models for efficient inference on Inferentia processors, and collaborating with compiler and runtime teams to maximize hardware performance. You'll also develop and integrate new features in ML frameworks to support AWS AI services. We seek candidates with strong programming skills, eagerness to learn complex systems, and basic ML knowledge. This role offers growth opportunities in ML infrastructure, bridging the gap between frameworks, distributed systems, and hardware acceleration.

About The Team: Annapurna Labs was a startup company acquired by AWS in 2015, and is now fully integrated. If AWS is an infrastructure company, then think of Annapurna Labs as the infrastructure provider of AWS. Our org covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations.
Our work spans AWS Nitro, ENA, EFA, Graviton and F1 EC2 instances, AWS Neuron with the Inferentia and Trainium ML accelerators, and storage with scalable NVMe.

Basic Qualifications:
- To qualify, applicants should have earned (or will earn) a Bachelor's or Master's degree between December 2022 and September 2025.
- Working knowledge of C++ and Python.
- Experience with ML frameworks, particularly PyTorch, JAX, and/or vLLM.
- Understanding of parallel computing concepts and CUDA programming.

Preferred Qualifications:
- Experience in using analytical tools, such as Tableau, QlikView, QuickSight.
- Experience in building and driving adoption of new tools.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - Annapurna Labs (U.S.) Inc.
Job ID: A3029797
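Because the Distributed Training team's work centers on FSDP, here is a hedged, minimal sketch of wrapping a model in PyTorch's FullyShardedDataParallel, launched with torchrun. The toy MLP, NCCL backend, and GPU assumptions are illustrative; the Neuron stack itself uses its own device plugins rather than CUDA.

```python
# Minimal FSDP sketch (run with: torchrun --nproc_per_node=2 fsdp_toy.py).
# Assumes CUDA GPUs and the NCCL backend; a toy MLP stands in for a real LLM.
import os
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")                 # torchrun sets rank/world-size env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()

    # FSDP shards parameters, gradients, and optimizer state across ranks.
    model = FSDP(model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        batch = torch.randn(8, 1024, device="cuda")
        loss = model(batch).pow(2).mean()           # dummy loss for illustration
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```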
Posted 3 weeks ago
1.0 - 5.0 years
0 Lacs
Rajasthan
On-site
As the Gardener at our hotel, your primary responsibility is to maintain and enhance all outdoor and landscaped areas, such as gardens, lawns, pathways, and plant displays. Your diligent efforts will ensure that the hotel grounds remain clean, visually appealing, and well-maintained at all times. By upholding the highest standards of appearance, you will play a crucial role in creating a positive and lasting impression on our guests.

Your key responsibilities will include:

Garden & Grounds Maintenance:
- Planting, watering, pruning, fertilizing, and nurturing flowers, shrubs, trees, and lawns.
- Regularly mowing, edging, and caring for lawns and turf areas.
- Keeping all outdoor spaces clean and free from weeds, litter, and debris.

Seasonal and Special Maintenance:
- Adapting planting and maintenance schedules according to seasonal requirements.
- Installing seasonal decorations or displays as necessary.
- Preparing the gardens and grounds for special events or periods of high traffic.

This is a full-time position with food provided as a benefit. The work location for this role is on-site.
Posted 4 weeks ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Computer Vision Engineer at wTVision in Noida, India, you will be an integral part of our development team, focusing on designing and implementing cutting-edge computer vision algorithms and systems. Your role will involve optimizing machine learning algorithms, conducting spatial and temporal analysis of object movements, developing real-time processing algorithms for Vision-AI perception metadata, and collaborating with the product team to define new features and functionalities.

Your responsibilities will include analyzing and enhancing the end-to-end accuracy of the computer vision pipeline, staying abreast of emerging technologies, working on scalable software solutions with the software development team, and deploying models on edge devices like NVIDIA Jetson. You will be expected to have a strong academic background in Computer Science or related fields, proficiency in programming languages such as Python and C++, experience in image and video processing, and excellent communication and collaboration skills.

Ideal candidates should have hands-on expertise with PyTorch, TensorRT, cuDNN, and deep learning frameworks like TensorFlow and Keras. A solid understanding of object-oriented programming, parallel computing, and concepts like linear algebra and 3D geometry is required. Additionally, experience with NVIDIA DeepStream, Jetson, and CUDA programming, knowledge of cloud platforms such as AWS, GCP, or Azure, familiarity with containerization technologies, and contributions to the computer vision community are considered advantageous.

If you are self-motivated, possess strong problem-solving skills, and can deliver high-quality work while handling multiple tasks, we encourage you to apply. A passion for sports and technology will be a plus in this dynamic and innovative work environment.
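For the real-time processing work described above, a typical starting point (before any Jetson/TensorRT-specific optimization) is a frame-by-frame inference loop. The sketch below uses OpenCV and a pretrained torchvision detector as placeholders for the actual models and video sources used in production.

```python
# Sketch: real-time video inference loop with OpenCV and a torchvision detector.
# The webcam source and the pretrained Faster R-CNN are illustrative stand-ins.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

device = "cuda" if torch.cuda.is_available() else "cpu"
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval().to(device)

cap = cv2.VideoCapture(0)  # 0 = default camera; replace with a video file or RTSP URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = to_tensor(rgb).to(device)
    with torch.no_grad():
        detections = model([tensor])[0]           # dict with boxes, labels, scores
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score > 0.6:                           # arbitrary confidence threshold
            x1, y1, x2, y2 = map(int, box.tolist())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

On Jetson-class hardware, the PyTorch model would usually be exported to TensorRT or run through DeepStream rather than called directly per frame.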
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Senior Machine Learning Engineer at TrueFan, you will be at the forefront of AI-driven content generation, leveraging cutting-edge generative models to build next-generation products. Your mission will be to redefine the content generation space through advanced AI technologies, including deep generative models, text-to-video, image-to-video, and lipsync generation.

Your responsibilities will include designing, developing, and deploying cutting-edge models for end-to-end content generation. This will involve working on the latest advancements in deep generative modeling to create highly realistic and controllable AI-generated media. You will research and develop state-of-the-art generative models such as Diffusion Models, 3D VAEs, and GANs for AI-powered media synthesis. Additionally, you will build and optimize AI pipelines for high-fidelity image/video generation and lipsyncing using diffusion and autoencoder models.

Furthermore, you will be responsible for developing advanced lipsyncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Your role will also involve implementing and optimizing models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques. Collaboration with software engineers to deploy models efficiently on cloud-based architectures will be a key aspect of your work.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field, along with 3+ years of experience working with deep generative models like Diffusion Models, 3D VAEs, GANs, and autoregressive models. Proficiency in Python and deep learning frameworks such as PyTorch is essential. Strong problem-solving abilities, a research-oriented mindset, and familiarity with generative adversarial techniques are also required.

Preferred qualifications include experience with transformers and vision-language models, background in text-to-video generation and lipsync generation, expertise in cloud-based AI pipelines, and contributions to open-source projects or published research in AI-generated content.

If you are passionate about AI-driven content generation and have a strong background in generative AI, this is the perfect opportunity for you to drive research and development in AI-generated content and real-time media synthesis at TrueFan.
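As a hedged example of the diffusion-based media synthesis mentioned above, here is a minimal text-to-image call with Hugging Face diffusers; the checkpoint name and prompt are assumptions, and the team's actual video and lipsync pipelines are far more involved than a single pipeline call.

```python
# Sketch: text-to-image generation with a pretrained diffusion model (diffusers).
# "stabilityai/stable-diffusion-2-1" is an example public checkpoint, not the team's model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a photorealistic portrait of a news presenter in a studio",
    num_inference_steps=30,       # fewer steps = faster, slightly lower fidelity
    guidance_scale=7.5,           # classifier-free guidance strength
).images[0]

image.save("generated_frame.png")
```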
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Senior Machine Learning Engineer, you will have the exciting opportunity to be involved in designing, developing, and deploying cutting-edge models for end-to-end content generation. This includes working on AI-driven image/video generation, lip syncing, and multimodal AI systems. You will be at the forefront of the latest advancements in deep generative modeling, striving to create highly realistic and controllable AI-generated media.

Your responsibilities will encompass researching and developing state-of-the-art generative models like Diffusion Models, 3D VAEs, and GANs for AI-powered media synthesis. You will focus on building and optimizing AI pipelines for high-fidelity image/video generation and lip syncing. Additionally, you will be tasked with developing advanced lip-syncing and multimodal generation models that integrate speech, video, and facial animation for hyper-realistic AI-driven content. Implementing and optimizing models for real-time content generation and interactive AI applications using efficient model architectures and acceleration techniques will also be part of your role.

Collaboration with software engineers to deploy models efficiently on cloud-based architectures (AWS, GCP, or Azure) will be crucial. Staying updated with the latest trends in deep generative models, diffusion models, and transformer-based vision systems to enhance AI-generated content quality will be an essential aspect of the role. Furthermore, designing and conducting experiments to evaluate model performance, improve fidelity, realism, and computational efficiency, as well as refining model architectures, will be expected. Active participation in code reviews, improving model efficiency, and documenting research findings to enhance team knowledge-sharing and product development will also be part of your responsibilities.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Machine Learning, or a related field. You should have a minimum of 3 years of experience working with deep generative models, such as Diffusion Models, 3D VAEs, GANs, and autoregressive models. Proficiency in Python and deep learning frameworks like PyTorch is essential. Expertise in multi-modal AI, text-to-image and image-to-video generation, and audio-to-lip-sync is required. A strong understanding of machine learning principles and statistical methods is necessary.

It would be beneficial to have experience in real-time inference optimization, cloud deployment, and distributed training. Strong problem-solving abilities and a research-oriented mindset to stay updated with the latest AI advancements are qualities that would be valued. Familiarity with generative adversarial techniques, reinforcement learning for generative models, and large-scale AI model training will also be beneficial.

Preferred qualifications include experience with transformers and vision-language models (e.g., CLIP, BLIP, GPT-4V), a background in text-to-video generation, lip-sync generation, and real-time synthetic media applications, as well as experience in cloud-based AI pipelines (AWS, Google Cloud, or Azure) and model compression techniques (quantization, pruning, distillation). Contributions to open-source projects or published research in AI-generated content, speech synthesis, or video synthesis would be advantageous.
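Two of the compression techniques named in the preferred qualifications above (quantization and pruning) can be demonstrated in a few lines of PyTorch. The sketch below uses a toy model and arbitrary hyperparameters purely for illustration; real pipelines calibrate and evaluate these choices carefully.

```python
# Sketch: dynamic quantization and magnitude pruning applied to a toy model.
import torch
import torch.nn.utils.prune as prune

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# 1) Post-training dynamic quantization: Linear weights stored as int8,
#    activations quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# 2) Unstructured magnitude pruning: zero out the 30% smallest weights of the first layer.
prune.l1_unstructured(model[0], name="weight", amount=0.3)
prune.remove(model[0], "weight")   # make the pruning permanent (bake zeros into the tensor)

sparsity = (model[0].weight == 0).float().mean().item()
print(f"first-layer sparsity after pruning: {sparsity:.0%}")
print(quantized)
```

Distillation, the third technique mentioned, trains a smaller student against a larger teacher and is shown in a later listing's sketch.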
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As a Landscaping Crew Supervisor, your primary responsibility will be to supervise and guide landscaping crews in various tasks such as planting, lawn care, pruning, and irrigation. You will be required to interpret and execute landscape plans effectively to meet the desired outcomes. Ensuring safety and quality standards are met at all times is crucial in this role. Tracking the progress of landscaping projects and providing detailed reports to the management team will be part of your regular duties.

This role requires a full-time commitment and the ability to work in person at the work location in Devanhalli, Karnataka. Therefore, reliable commuting or planning to relocate before starting work is preferred.

If you are passionate about landscaping, possess strong leadership skills, and have a keen eye for detail, this position offers an opportunity to showcase your expertise and contribute to creating visually appealing outdoor spaces. Join our team and play a key role in bringing landscape designs to life while adhering to safety and quality standards.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Punjab
On-site
As a Python Machine Learning & AI Developer at Chicmic Studios, you will be an integral part of our dynamic team, bringing your expertise and experience to develop cutting-edge web applications using the Django and Flask frameworks. Your primary responsibilities will include designing and implementing RESTful APIs with Django Rest Framework (DRF), deploying and optimizing applications on AWS services, and integrating AI/ML models into existing systems.

You will be expected to create scalable machine learning models using PyTorch, TensorFlow, and scikit-learn, implement transformer architectures like BERT and GPT for NLP and advanced AI use cases, and optimize models through techniques such as hyperparameter tuning, pruning, and quantization. Additionally, you will deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker, ensuring the scalability, performance, and reliability of both applications and models.

Collaborating with cross-functional teams to analyze requirements, delivering technical solutions, and staying up-to-date with the latest industry trends in AI/ML will also be key aspects of your role. Your ability to write clean, efficient code following best practices, conduct code reviews, and provide constructive feedback to peers will contribute to the success of our projects.

To be successful in this role, you should possess a Bachelor's degree in Computer Science, Engineering, or a related field, with at least 3 years of professional experience as a Python Developer. Proficiency in Python, Django, Flask, and AWS services is required, along with expertise in machine learning frameworks, transformer architectures, and database technologies. Familiarity with MLOps practices, front-end technologies, and strong problem-solving skills are also desirable qualities for this position.

If you are passionate about leveraging your Python development skills and AI expertise to drive innovation and deliver impactful solutions, we encourage you to apply and be a part of our innovative team at Chicmic Studios.
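The transformer-architecture requirement above (BERT/GPT for NLP) is usually approached through Hugging Face transformers. Here is a hedged, minimal inference sketch; the checkpoint name is an example public model, and fine-tuning for a specific use case would build on the same tokenizer and model objects.

```python
# Sketch: using a pretrained BERT-family model via Hugging Face transformers.
# The checkpoint name is an example; fine-tuning (Trainer, LoRA, etc.) builds on the same objects.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # example sentiment model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
print(classifier("The deployment pipeline is fast and reliable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```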
Posted 1 month ago
5.0 - 8.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Overview
We are seeking a highly skilled and motivated Data Scientist (LLM Specialist) to join our AI/ML team. This role is ideal for an individual passionate about Large Language Models (LLMs), workflow automation, and customer-centric AI solutions. You will be responsible for building robust ML pipelines, designing scalable workflows, interfacing with customers, and independently driving research and innovation in the evolving agentic AI space.

Key Responsibilities:
- LLM Development & Optimization: Train, fine-tune, evaluate, and deploy Large Language Models (LLMs) for various customer-facing applications (a minimal fine-tuning sketch follows this listing).
- Pipeline & Workflow Development: Build scalable machine learning workflows and pipelines that facilitate efficient data ingestion, model training, and deployment.
- Model Evaluation & Performance Tuning: Implement best-in-class evaluation metrics to assess model performance, optimize for efficiency, and mitigate biases in LLM applications.
- Customer Engagement: Collaborate closely with customers to understand their needs, design AI-driven solutions, and iterate on models to enhance user experiences.
- Research & Innovation: Stay updated on the latest developments in LLMs, agentic AI, reinforcement learning with human feedback (RLHF), and generative AI applications. Recommend novel approaches to improve AI-based solutions.
- Infrastructure & Deployment: Work with MLOps tools to streamline deployment and serve models efficiently using cloud-based or on-premise architectures, including Google Vertex AI for model training, deployment, and inference.
- Foundational Model Training: Experience working with open-weight foundational models, leveraging pre-trained architectures, fine-tuning on domain-specific datasets, and optimizing models for performance and cost-efficiency.
- Cross-Functional Collaboration: Partner with engineering, product, and design teams to integrate LLM-based solutions into customer products seamlessly.
- Ethical AI Practices: Ensure responsible AI development by addressing concerns related to bias, safety, security, and interpretability in LLMs.

Responsibilities
- Experience: Experience in ML, NLP, or AI-related roles, with a focus on LLMs and generative AI.
- Programming Skills: Proficiency in Python and experience with ML frameworks like TensorFlow and PyTorch.
- LLM Expertise: Hands-on experience in training, fine-tuning, and deploying LLMs (e.g., OpenAI's GPT, Meta's LLaMA, Mistral, or other transformer-based architectures).
- Foundational Model Knowledge: Strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning techniques, hyperparameter optimization, and model distillation.
- Data Pipeline Development: Strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow.
- Cloud & MLOps: Experience deploying ML models in cloud environments like AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes.
- Model Serving & Optimization: Proficiency in model quantization, pruning, distillation, and knowledge distillation to improve deployment efficiency and scalability.
- Research & Problem-Solving: Ability to conduct independent research, explore novel solutions, and implement state-of-the-art ML techniques.
- Strong Communication Skills: Ability to translate technical concepts into actionable insights for non-technical stakeholders.
- Version Control & Collaboration: Proficiency in Git, CI/CD pipelines, and working in cross-functional teams.
Qualifications
- Bachelor's in Computer Science, Machine Learning, or related discipline; Master's preferred.
- Strong background in statistics, machine learning, deep learning, and programming necessary.
- 5+ years experience required.
- Experience in solving large-scale real-world industry problems, preferably in collaboration with cross-functional, multi-disciplinary teams.
- Knowledge of statistical programming techniques and languages (e.g., R, Python, Java, etc.).
- Working knowledge of common machine learning and deep learning approaches (e.g., regression, clustering, classification, dimensionality reduction, supervised and unsupervised techniques, Bayesian reasoning, boosting, random forests, deep learning) and data analysis packages (e.g., scikit-learn, pyclustering, pathways analysis, MLlib).
- Prior experience with TensorFlow.
- Prior experience in Natural Language Processing using NLTK.
- Retail industry experience desired.
- Experience using cloud compute (e.g., Google Cloud Platform, AWS, Azure).
- Familiarity with NoSQL databases, graphical analyses, and large-scale data processing frameworks (e.g., Apache Spark).
- Solid understanding of data structures, software design, and architecture.
- Ability to work independently and take initiative, but also be a cooperative team player.
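As a hedged illustration of the LLM fine-tuning work this listing describes, here is a minimal causal-LM fine-tuning sketch with Hugging Face Trainer; distilgpt2 and the tiny in-memory text list are stand-ins for a real open-weight model and a domain-specific dataset.

```python
# Sketch: fine-tuning a small causal LM with Hugging Face Trainer.
# distilgpt2 and the toy text list are stand-ins for a real open-weight model and dataset.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 style models have no pad token
model = AutoModelForCausalLM.from_pretrained(checkpoint)

texts = ["Customer asked about refund policy.", "Agent escalated the billing issue."] * 50
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="llm-finetune-demo",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    logging_steps=5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
trainer.save_model("llm-finetune-demo/final")
```

Production fine-tuning of larger open-weight models typically adds parameter-efficient methods (e.g., LoRA) and distributed training on top of this same Trainer scaffold.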
Posted 2 months ago
3.0 - 7.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Overview
We are seeking a highly skilled and motivated Data Scientist (LLM Specialist) to join our AI/ML team. This role is ideal for an individual passionate about Large Language Models (LLMs), workflow automation, and customer-centric AI solutions. You will be responsible for building robust ML pipelines, designing scalable workflows, interfacing with customers, and independently driving research and innovation in the evolving agentic AI space.

Responsibilities
- LLM Development & Optimization: Train, fine-tune, evaluate, and deploy Large Language Models (LLMs) for various customer-facing applications.
- Pipeline & Workflow Development: Build scalable machine learning workflows and pipelines that facilitate efficient data ingestion, model training, and deployment.
- Model Evaluation & Performance Tuning: Implement best-in-class evaluation metrics to assess model performance, optimize for efficiency, and mitigate biases in LLM applications.
- Customer Engagement: Collaborate closely with customers to understand their needs, design AI-driven solutions, and iterate on models to enhance user experiences.
- Research & Innovation: Stay updated on the latest developments in LLMs, agentic AI, reinforcement learning with human feedback (RLHF), and generative AI applications. Recommend novel approaches to improve AI-based solutions.
- Infrastructure & Deployment: Work with MLOps tools to streamline deployment and serve models efficiently using cloud-based or on-premise architectures, including Google Vertex AI for model training, deployment, and inference.
- Foundational Model Training: Experience working with open-weight foundational models, leveraging pre-trained architectures, fine-tuning on domain-specific datasets, and optimizing models for performance and cost-efficiency.
- Cross-Functional Collaboration: Partner with engineering, product, and design teams to integrate LLM-based solutions into customer products seamlessly.
- Ethical AI Practices: Ensure responsible AI development by addressing concerns related to bias, safety, security, and interpretability in LLMs.
- Programming Skills: Proficiency in Python and experience with ML frameworks like TensorFlow and PyTorch.
- LLM Expertise: Hands-on experience in training, fine-tuning, and deploying LLMs (e.g., OpenAI's GPT, Meta's LLaMA, Mistral, or other transformer-based architectures).
- Foundational Model Knowledge: Strong understanding of open-weight LLM architectures, including training methodologies, fine-tuning techniques, hyperparameter optimization, and model distillation.
- Data Pipeline Development: Strong understanding of data engineering concepts, feature engineering, and workflow automation using Airflow or Kubeflow.
- Cloud & MLOps: Experience deploying ML models in cloud environments like AWS, GCP (Google Vertex AI), or Azure using Docker and Kubernetes.
- Model Serving & Optimization: Proficiency in model quantization, pruning, distillation, and knowledge distillation to improve deployment efficiency and scalability (a small distillation sketch follows this listing).
- Research & Problem-Solving: Ability to conduct independent research, explore novel solutions, and implement state-of-the-art ML techniques.
- Strong Communication Skills: Ability to translate technical concepts into actionable insights for non-technical stakeholders.
- Version Control & Collaboration: Proficiency in Git, CI/CD pipelines, and working in cross-functional teams.

Qualifications
- Bachelor's degree.
- Advanced degree (Master's or PhD) strongly preferred, in Statistics, Mathematics, Data/Computer Science, or a related discipline.
- 2-5 years experience.
- Statistics modeling and algorithms.
- Machine Learning experience, including deep learning and neural networks, genetic algorithms, etc.
- Working knowledge of Big Data tools: Hadoop, Cassandra, Spark R. Hands-on experience preferred.
- Data Mining.
- Data Visualization: visualization and analysis tools, including R.
- Work/project experience in the sensors, IoT, or mobile industry highly preferred.
- Excellent verbal and written communication; comfortable with presenting to senior management and CxO-level executives.
- Self-motivated self-starter with a high degree of work ethic.
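Knowledge distillation, named above under Model Serving & Optimization, reduces serving cost by training a small student to match a large teacher. The following is a hedged sketch of its core loss, using toy models and typical hand-tuned hyperparameters rather than any production setup.

```python
# Sketch: the core of knowledge distillation - a student trained to match a teacher's
# softened logits (KL term) plus the usual cross-entropy on true labels. Toy models only.
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)).eval()
student = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

T, alpha = 4.0, 0.7   # temperature and weight of the distillation term (typical hand-tuned values)

for step in range(100):
    x = torch.randn(32, 64)                  # stand-in for a real training batch
    labels = torch.randint(0, 10, (32,))
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions, scaled by T^2 as in Hinton et al.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    loss = alpha * kd + (1 - alpha) * ce

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.3f}")
```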
Posted 2 months ago
5.0 - 10.0 years
2 - 5 Lacs
Hyderabad
Work from Office
To ensure the site is functioning smoothly.

Key Responsibilities:
- Perform a variety of diversified gardening duties to enhance and maintain the premise aesthetics
- Operate powered lawnmowers (lawn mowing includes cutting steeply graded areas)
- Plant, weed, cultivate and spread mulch in landscaped areas
- Prepare holes and trenches as instructed for installing posts, conduits, etc.
- Use chemicals (pesticide, insecticide, etc.) properly
- Trim and prune shrubbery and trees
- Maintain tools, equipment, and work area in a safe, clean and orderly condition, following all prescribed regulations
- Prioritize urgency of job requests
- Perform visual inspections for the detection of abnormalities
- Estimate time and materials required on work orders
- Make daily rounds of the garden area
- Attend all scheduled staff training and safety meetings
- Know current safety regulations

Qualifications:
- Prior landscaping experience
- Basic knowledge of landscaping equipment necessary
- Ability to communicate and collaborate with facility personnel
- Pro-active, disciplined, organized, service-attitude
Posted 2 months ago
0.0 - 3.0 years
1 - 3 Lacs
Palghar
Work from Office
Responsibilities:
* Manage farm operations: planting, harvesting, fertilizing
* Oversee seed production: cultivation, processing, packaging
* Ensure organic farming practices: soil preparation, pest control
* Maintain farm records and inventory
Free meal
Posted 2 months ago
1.0 - 6.0 years
18 - 22 Lacs
Bengaluru
Work from Office
Job Area: Engineering Group, Engineering Group > Systems Engineering

General Summary: We are seeking experts with a robust background in the field of deep learning (DL) to design state-of-the-art low-level perception (LLP) as well as end-to-end AD models, with a focus on achieving accuracy-latency Pareto optimality. This role involves comprehending state-of-the-art research in this field and deploying networks on the Qualcomm Ride platform for L2/L3 Advanced Driver Assistance Systems (ADAS) and autonomous driving. The ideal candidate must be well-versed in recent advancements in Vision Transformers (cross-attention, self-attention), lifting 2D features to Bird's Eye View (BEV) space, and their applications to multi-modal fusion. This position offers extensive opportunities to collaborate with advanced R&D teams of leading automotive Original Equipment Manufacturers (OEMs) as well as Qualcomm's internal stack teams. The team is responsible for enhancing the speed, accuracy, power consumption, and latency of deep networks running on Snapdragon Ride AI accelerators. A thorough understanding of machine learning algorithms, particularly those related to automotive use cases (autonomous driving, vision, and LiDAR processing ML algorithms), is essential. Research experience in the development of efficient networks, various Neural Architecture Search (NAS) techniques, network quantization, and pruning is highly desirable. Strong communication and interpersonal skills are required, and the candidate must be able to work effectively with various horizontal AI teams.

Minimum Qualifications: Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field and 1+ years of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR Master's degree in Computer Science, Engineering, Information Systems, or a related field and 1+ year of Hardware Engineering, Software Engineering, Systems Engineering, or related work experience; OR PhD in Computer Science, Engineering, Information Systems, or a related field.

Preferred Qualifications:
- Good at software development with excellent analytical, development, and problem-solving skills.
- Strong understanding of machine learning fundamentals.
- Hands-on experience with deep learning network design and implementation.
- Ability to define a network from scratch in PyTorch, add new loss functions, and modify networks with torch.fx (a small torch.fx tracing sketch follows this listing).
- Adept at version control systems like Git.
- Experience in neural network quantization, compression, and pruning algorithms.
- Experience in deep learning kernel/compiler optimization.
- Strong communication skills.

Principal Duties and Responsibilities: Applies machine learning knowledge to extend training or runtime frameworks or model efficiency software tools with new features and optimizations. Models, architects, and develops machine learning hardware (co-designed with machine learning software) for inference or training solutions. Develops optimized software to enable AI models deployed on hardware (e.g., machine learning kernels, compiler tools, or model efficiency tools) to use specific hardware features; collaborates with team members for joint design and development. Assists with the development and application of machine learning techniques into products and/or AI solutions to enable customers to do the same.
Develops, adapts, or prototypes complex machine learning algorithms, models, or frameworks aligned with and motivated by product proposals or roadmaps with minimal guidance from more experienced engineers. Conducts complex experiments to train and evaluate machine learning models and/or software independently.
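The torch.fx capability named in the preferred qualifications refers to tracing a network into an explicit graph that can be inspected and rewritten (for example, for quantization-aware transforms). Below is a hedged, minimal sketch with a toy block standing in for a real perception network.

```python
# Sketch: tracing a small network with torch.fx and listing its graph nodes.
# A toy block stands in for a real perception network; node rewrites would follow the same pattern.
import torch
import torch.fx

class Block(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.bn = torch.nn.BatchNorm2d(16)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

traced = torch.fx.symbolic_trace(Block())

# Every op is now an explicit node that can be inspected or rewritten
# (e.g. swapping ReLU for a quantization-friendly activation).
for node in traced.graph.nodes:
    print(node.op, node.target)

out = traced(torch.randn(1, 3, 32, 32))   # the traced GraphModule is still callable
print(out.shape)
```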
Posted 3 months ago
2.0 - 7.0 years
2 - 3 Lacs
Gurugram
Work from Office
- Plant care and maintenance
- Garden design and development
- Plant propagation and cultivation
- Pest and disease management
- Soil management and fertilization
- Irrigation and watering systems management
- Pruning and training plants
Posted 3 months ago