Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
4.0 - 8.0 years
0 Lacs
hyderabad, telangana, india
On-site
Skill: Generative AI Engineer Preferred Location: Hyderabad Experience Required: 4 to 8 years Responsibilities: Design, build, and deploy generative AI solutions using LLMs such as OpenAI, Anthropic, Mistral, or open-source models (e.g., LLaMA, Falcon). Fine-tune and customize foundation models using domain-specific datasets and techniques. Develop and optimize prompt engineering strategies to drive accurate and context-aware model responses. Implement model pipelines using Python and ML frameworks such as PyTorch, Hugging Face Transformers, or LangChain. Collaborate with data engineers and MLOps teams to productionalize GenAI models on cloud platforms (Azure/AWS/GCP). Ensure robustness, scalability, and compliance of AI models in deployment environments. Good to have experience in finetuning models. Good to have experience in SLM. Integrate GenAI into enterprise applications via APIs or custom interfaces. Evaluate model performance using quantitative and qualitative metrics, and improve outputs through iterative experimentation. Keep up to date with the latest research in GenAI, foundation models, and relevant open-source tools. Mandatory Skill Sets Generative AI (LLMs, Transformers) Python, PyTorch, Hugging Face Transformers Azure/AWS/GCP cloud platforms LangChain or similar orchestration frameworks REST APIs, FastAPI, Flask ML pipeline tools (MLFlow, Weights & Biases) Git CI/CD for ML (e.g., Azure ML, SageMaker pipelines) Preferred Skill Sets RAG (Retrieval-Augmented Generation) Vector DBs (FAISS, Pinecone, Weaviate) Streamlit/Gradio for prototyping Docker, Kubernetes (for model deployment) Data preprocessing & feature engineering NLP libraries: spaCy, NLTK, Transformers Show more Show less
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
bengaluru, karnataka, india
Remote
About Chargebee: Chargebee is a subscription billing and revenue management platform powering some of the fastest-growing brands around the world today, including Calendly, Hopin, Pret-a-Manger, Freshworks, Okta, Study.com and others. Thousands of SaaS and subscription-first businesses process over billions of dollars in revenue every year through the Chargebee platform. Headquartered in San Francisco, USA, our 500+ team members work remotely throughout the world, including India, the Netherlands, Paris, Spain, Australia, and the USA. Chargebee has raised over $480 million in capital and is funded by Accel, Tiger Global, Insight Partners, Steadview Capital, and Sapphire Ventures. And were on a mission to push the boundaries of subscription revenue operations. Not just ours, but every customer and prospective business on a recurring revenue model. Our team builds high-quality and innovative software to enable our customers to grow their revenues powered by the state-of-the-art subscription management platform. Key Roles & Responsibilities Productionise ML workflows : build and maintain data pipelines for feature generation, ML model training, batch scoring, and real?time inference using modern orchestration and container frameworks. Own model serving infrastructure : implement fast, reliable APIs / batch jobs; manage autoscaling, versioning, and rollback strategies Feature?store development : design and operate feature stores and corresponding data pipelines to guarantee trainingserving consistency. CI/CD & DevEx : automate testing, deployment, and monitoring of data and model artefacts; provide templated repos and documentation that let data scientists move from notebook to prod quickly. Observability & quality : instrument data?drift, concept?drift, and performance metrics; set up alerting dashboards to ensure model health. Collaboration & review : work closely with data scientists on model experimentation, production?harden their code, review PRs, and evangelise MLOps best practices across the organisation. Required Skills & Experience 3+ years as a ML / Data Engineer working on large-scale, data-intensive systems in cloud environments (AWS, GCP, or Azure), with proven experience partnering closely with ML teams to deploy models at scale. Proficient in Python plus one of Go / Java / Scala; strong software?engineering fundamentals (testing, design patterns, code review). Hands on experience in Spark and familiarity with streaming frameworks (Kafka, Flink, Spark Structured Streaming) Hands-on experience with workflow orchestrators (Airflow, Dagster, Kubeflow Pipelines, etc.) and container platforms (Docker + Kubernetes/EKS/ECS). Practical knowledge of ML algorithms like XGBoost, LightGBM, transformers and deep learning frameworks like pytorch is preferred Experience with experiment?tracking / ML model?management tools (MLflow, SageMaker, Vertex AI, Weights & Biases) is a plus Benefits: Want to know what it means to work for a company that genuinely cares about you Check out just a few of the benefits we give our employees: We are Globally Local With a diverse team across four continents, and customers in over 60 countries, you get to work closely with a global perspective right from your own neighborhood. We value Curiosity We believe the next great idea might just be around the corner. Perhaps its that random thought you had ten minutes ago. We believe in creating an ecosystem that fosters a desire to seek out hard questions, and then figure out answers to them. Customer! Customer! Customer! Everything we do is driven towards enabling our customers growth. This means no matter what you do, you will always be adding real value to a real business problem. Its a lot of responsibility, but also a lot of fun. If you resonate with Chargebee, have a monstrous appetite for curiosity, and an insatiable urge to learn and build new things, were waiting for you! We value people from all backgrounds and are dedicated to hiring and employing a diverse and inclusive workplace. Come be a part of the Chargebee tribe! Show more Show less
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Location: Chennai, India Job Title: Staff DevOps Engineer (AI Ops / ML Ops) Work Mode: Onsite What You Will Do This role offers an exciting opportunity to work in AI/ML Development and Operations (DevOps) engineering, working within a dynamic team that values reliability and continuous improvement. The successful candidate will contribute to the deployment and maintenance of AI/ML systems in production, gaining hands-on experience with MLOps best practices and infrastructure automation. This position provides a structured environment for developing core competencies in ML system operations, DevOps practices, and production ML monitoring, with guidance from Technical Leadership team. Assist in the deployment and maintenance of machine learning models in production environments under direct supervision, learning containerization technologies like Docker and Kubernetes. Support CI/CD pipeline development for ML workflows, including model versioning, automated testing, and deployment processes using tools like Azure DevOps. Monitor ML model performance, data drift, and system health in production environments, implementing basic alerting and logging solutions. Contribute to infrastructure automation and configuration management for ML systems, learning Infrastructure as Code (IaC) practices with tools like Terraform or CloudFormation. Collaborate with ML engineers and data scientists to operationalize models, ensuring scalability, reliability, and adherence to established MLOps procedures and best practices. What Skills & Experience You Should Bring Required: 8 to 12 Years of professional experience in in DevOps, MLOps, or systems engineering environment. Bachelor's degree in Computer Science, Engineering, Information Technology, or a closely related technical field. Trimble's Professional ladder typically requires four or more years of formal education. Expertise in working with Microsoft Azure and its services including ML/AI (Azure ML, Azure DevOps, etc.) - Must Have Highly proficient in Python or other scripting languages (Shell / Bash / PowerShell / Perl) for automation scripting and system integration (Must have) Strong experience of containerization technologies ( Docker ) and orchestration concepts ( Kubernetes ). Strong experience of DevOps principles and practices, with understanding of CI/CD concepts and system administration. Strong experience with CI/CD tools such as Jenkins, GitHub Actions, or ArgoCD. Experience with monitoring and observability tools (Prometheus, Grafana, Datadog, New Relic). Hands-on with version control systems (Git) and collaborative development workflows. Experience with data engineering concepts and technologies (SQL, NoSQL, ETL pipelines). Experience with MLOps tools and frameworks (Kubeflow, MLflow, Weights & Biases, or similar). Preferred: Experience with other cloud platforms (GCP, AWS) is a plus. Hands-on experience in machine learning concepts and the ML model lifecycle from development to production. Experience of AIOps and incident management platforms like Moogsoft, BigPanda, PagerDuty, or Opsgenie. Working knowledge with model serving frameworks (TensorFlow Serving, TorchServe, ONNX Runtime). Working knowledge of security best practices for ML systems and data governance. About Our Division: Construction Management Solutions (CMS) Trimble's Construction Management Solutions (CMS) division is dedicated to transforming the construction industry. We provide technology solutions that streamline and optimize workflows for preconstruction, project management, and field operations. By connecting the physical and digital worlds, we help our customers improve productivity, efficiency, and project outcomes.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
chennai, tamil nadu, india
On-site
About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally.Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States. It started with one ridiculously good idea to create a different breed of Business Processing Outsourcing (BPO)! We at TaskUs understand that achieving growth for our partners requires a culture of constant motion, exploring new technologies, being ready to handle any challenge at a moment's notice, and mastering consistency in an ever-changing world. What We Offer: At TaskUs, we prioritize our employees well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First. Software Engineer, AI Safety Services The impact you'll make Enable rapid, responsible releases by coding evaluation pipelines that surface safety issues before models reach production. Drive operational excellence through reliable, secure, and observable systems that safety analysts and customers trust. Advance industry standards by contributing to bestpractice libraries, opensource projects, and internal frameworks for testing alignment, robustness, and interpretability. What you'll do Design & implement safety tooling -from automated adversarial test harnesses to driftmonitoring dashboards-using Python, PyTorch/TensorFlow, and modern cloud services. Collaborate across disciplines with fellow engineers, data scientists, and product managers to translate customer requirements into clear, iterative technical solutions. Own quality endtoend: write unit/integration tests, automate CI/CD, and monitor production metrics to ensure reliability and performance. Containerize and deploy services using Docker and Kubernetes, following infrastructureascode principles (Terraform/CDK). Continuously learn new evaluation techniques, model architectures, and security practices share knowledge through code reviews and technical talks. Experiences you'll bring 3+ years of professional software engineering experience, ideally with dataintensive or MLadjacent systems. Demonstrated success shipping production code that supports highavailability services or platforms. Experience working in an agile, collaborative environment, delivering incremental value in short cycles. Technical skills you'll need Languages & ML frameworks: Strong Python plus handson experience with PyTorch or TensorFlow (bonus points for JAX and LLM finetuning). Cloud & DevOps: Comfortable deploying containerized services (Docker, Kubernetes) on AWS, GCP, Azure infrastructureascode with Terraform or CDK. MLOps & experimentation: Familiar with tools such as MLflow, Weights & Biases, or SageMaker Experiments for tracking runs and managing models. Data & APIs: Solid SQL, exposure to at least one NoSQL store, and experience designing or consuming RESTful APIs. Security mindset: Awareness of secure coding and compliance practices (e.g., SOC 2, ISO 27001). LangChain/LangGraph, distributed processing (Spark/Flink), or prior contributions to opensource ML safety projects. Why you'll love this role Enjoy the freedom to work from anywhere - your productivity, your environment Purpose-driven work: Contribute meaningfully to a safer AI future while enabling groundbreaking innovation. Strategic visibility: Directly impact high-profile AI safety initiatives with industry-leading customers. Growth opportunities: Collaborate with top-tier AI talent and help shape an emerging industry. Supportive culture: Enjoy competitive compensation, flexible work arrangements, and significant investment in your professional and personal growth. How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs. DEI: In TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to We invite you to explore all TaskUs career opportunities and apply through the provided URL .
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
???? Were Hiring: AI Engineer ???? Location: Bengaluru (Regular office-based role) ???? Employment Type: Full-Time | 5 Days a Week from Office ???? Qualification: B.E / B.Tech or equivalent degree in Computer Science, IT, or related field Are you passionate about building and scaling production-grade AI/ML systems We&aposre looking for a skilled AI Engineer to join our team in Bengaluru and help drive real-world impact through cutting-edge machine learning and GenAI technologies. ???? Must-Have Skills: 48 years of hands-on experience in designing, building, and deploying production-grade AI/ML solutions Proficiency in Python, PySpark, SQL , and ML libraries like Scikit-learn, XGBoost, LightGBM Cloud-native ML development experience on AWS (SageMaker), GCP (Vertex AI), or Azure ML Strong background in NLP/GenAI frameworks: Hugging Face, LangChain, LlamaIndex Practical knowledge of MLOps tools (MLflow, Weights & Biases, DVC) and deployment frameworks ( FastAPI, Flask, Docker, Kubernetes ) Excellent communication, stakeholder engagement, and team collaboration skills Good-to-Have: Publications, blog posts, or open-source contributions in AI/ML Experience leading AI strategy or owning technical roadmaps Show more Show less
Posted 1 month ago
0.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: AI Research Engineer Intern (Fresher) Reporting to: Lead Research & Innovation Lab Location: remote/ Hybrid (Chennai, India) Engagement: 6-month, full-time paid internship with pre-placement-offer track 1. Why this role exists Stratsyn AI Technology Services is turbo-charging Stratsyns cloud-native Enterprise Intelligence & Management Suite a modular SaaS ecosystem that fuses advanced AI, low-code automation, multimodal search, and next-generation Virtual workforce agents. The platform unifies strategic planning, document intelligence, workflow orchestration, and real-time analytics, empowering C-suite leaders to simulate scenarios, orchestrate execution, and convert insight into action with unmatched speed and scalability. To keep pushing that frontier, we need sharp, curious minds who can translate cutting-edge research into production-grade capabilities for this suite. This internship is our talent-funnel into future Research Engineer and Product Scientist roles. 2. What youll do (core responsibilities) % FocusKey Responsibility 30 %Rapid Prototyping & Experimentation implement state-of-the-art papers (LLMs, graph learning, causal inference, agents), design ablation studies, benchmark against baselines, and iterate fast. 25 %Data Engineering for Research build reproducible datasets, craft synthetic data when needed, automate ETL pipelines, and enforce experiment tracking (MLflow / Weights & Biases). 20 %Model Evaluation & Explainability create evaluation harnesses (BLEU, ROUGE, MAPE, custom KPIs), visualize error landscapes, and generate executive-ready insights. 15 %Collaboration & Documentation author tech memos, well-annotated notebooks, and contribute to internal knowledge bases; present findings in weekly research stand-ups. 10 %Innovation Scouting scan arXiv, ACL, NeurIPS, ICML, and startup ecosystems; summarize high-impact research and propose areas for IP creation within the Suite. 3. What you will learn / outcomes to achieve Master the end-to-end research workflow: literature review ? hypothesis ? prototype ? validation ? deployment shadow. Deliver one peer-review-quality technical report and two production-grade proof-of-concepts for the Suite. Achieve a measurable impact (e.g., 8-10 % forecasting-accuracy lift or 30 % latency reduction) on a live micro-service. 4. Minimum qualifications (freshers welcome) B.E./B.Tech/M.Sc./M.Tech in CS, Data Science, Statistics, EE, or related (2024-2026 pass-out). Fluency in Python and at least one deep-learning framework (PyTorch preferred). Solid grasp of linear algebra, probability, optimization, and algorithms. Hands-on academic or personal projects in NLP, CV, time-series, or RL (GitHub links highly valued). 5. Preferred extras Publications or Kaggle/ML-competition record. Experience with distributed training (GPU clusters, Ray, Lightning) and experiment-tracking tools. Familiarity with MLOps (Docker, CI/CD, Kubernetes) or data-centric AI. Domain knowledge in supply-chain, fintech, climate, or marketing analytics. 6. Key attributes & soft skills First-principles thinker questions assumptions, proposes novel solutions. Bias for action prototypes in hours, not weeks; embraces agile experimentation. Storytelling ability explains complex models in clear, executive-friendly language. Ownership mentality treats the prototype as a product, not just a demo. 7. Tech stack youll touch Python | PyTorch | Hugging Face | TensorRT | LangChain | Neo4j/GraphDB | PostgreSQL | Airflow | MLflow | Weights & Biases | Docker | GitHub Actions | JAX (exploratory) 8. Internship logistics & perks Competitive monthly stipend + performance bonus. High-end workstation + GPU credits on our private cloud. Dedicated mentor and 30-60-90-day learning plan. Access to premium research portals and paid conference passes. Culture of radical candor, weekly brown-bag tech talks, and hack days. Fast-track to full-time AI Research Engineer upon successful completion. 9. Application process Apply via email: Send rsum, brief statement of purpose, and GitHub/portfolio links to [HIDDEN TEXT] . Online coding assessment: algorithmic + ML fundamentals. Technical interview (2 rounds): deep dive into projects, math, and research reasoning. Culture-fit discussion: with Research Lead & CPO. Offer & onboarding target turnaround < 3 weeks. Show more Show less
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role: We are seeking an experienced MLOps Engineer to lead the deployment, scaling, and performance optimization of open-source Generative AI models on cloud infrastructure. Youll work at the intersection of machine learning, DevOps, and cloud engineering to help productize and operationalize large-scale LLM and diffusion models. Key Responsibilities: Design and implement scalable deployment pipelines for open-source Gen AI models (LLMs, diffusion models, etc.). Fine-tune and optimize models using techniques like LoRA, quantization, distillation, etc. Manage inference workloads, latency optimization, and GPU utilization. Build CI/CD pipelines for model training, validation, and deployment. Integrate observability, logging, and alerting for model and infrastructure monitoring. Automate resource provisioning using Terraform, Helm, or similar tools on GCP/AWS/Azure. Ensure model versioning, reproducibility, and rollback using tools like MLflow, DVC, or Weights & Biases. Collaborate with data scientists, backend engineers, and DevOps teams to ensure smooth production rollouts. Required Skills & Qualifications: 5+ years of total experience in software engineering or cloud infrastructure. 3+ years in MLOps with direct experience in deploying large Gen AI models. Hands-on experience with open-source models (e.g., LLaMA, Mistral, Stable Diffusion, Falcon, etc.). Strong knowledge of Docker, Kubernetes, and cloud compute orchestration. Proficiency in Python and familiarity with model-serving frameworks (e.g., FastAPI, Triton Inference Server, Hugging Face Accelerate, vLLM). Experience with cloud platforms (GCP preferred, AWS or Azure acceptable). Familiarity with distributed training, checkpointing, and model parallelism. Good to Have: Experience with low-latency inference systems and token streaming architectures. Familiarity with cost optimization and scaling strategies for GPU-based workloads. Exposure to LLMOps tools (LangChain, BentoML, Ray Serve, etc.). Why Join Us: Opportunity to work on cutting-edge Gen AI applications across industries. Collaborative team with deep expertise in AI, cloud, and enterprise software. Flexible work environment with a focus on innovation and impact. Show more Show less
Posted 1 month ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
73564 Jobs | Dublin
Wipro
27625 Jobs | Bengaluru
Accenture in India
22690 Jobs | Dublin 2
EY
20638 Jobs | London
Uplers
15021 Jobs | Ahmedabad
Bajaj Finserv
14304 Jobs |
IBM
14148 Jobs | Armonk
Accenture services Pvt Ltd
13138 Jobs |
Capgemini
12942 Jobs | Paris,France
Amazon.com
12683 Jobs |