
1521 SageMaker Jobs - Page 3

JobPe aggregates results for easy access; you apply directly on the original job portal.

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We seek an experienced Principal Data Scientist to lead our data science team and drive innovation in machine learning, advanced analytics, and Generative AI. This role blends strategic leadership with deep technical expertise across ML engineering, LLMs, deep learning, and multi-agent systems. You will be at the forefront of deploying AI-driven solutions, including agentic frameworks and LLM orchestration, to tackle complex, real-world problems at scale.

Primary Stack:
- Languages: Python, SQL
- Cloud Platforms: AWS or GCP preferred
- ML & Deep Learning: PyTorch, TensorFlow, Scikit-learn
- GenAI & LLM Toolkits: Hugging Face, LangChain, OpenAI APIs, Cohere, Anthropic
- Agentic & Orchestration Frameworks: LangGraph, CrewAI, Agno, AutoGen, AutoGPT
- Vector Stores & Retrieval: FAISS, Pinecone, Weaviate
- MLOps & Deployment: MLflow, SageMaker, Vertex AI, Kubeflow, Docker, Kubernetes, FastAPI

Key Responsibilities:
- Lead and mentor a team of 10+ data scientists and ML engineers, promoting a culture of innovation, ownership, and cross-functional collaboration.
- Drive the development, deployment, and scaling of advanced machine learning, deep learning, and GenAI applications across the business.
- Build and implement agentic architectures and multi-agent systems using tools like LangGraph, CrewAI, and Agno to solve dynamic workflows and enhance LLM reasoning capabilities.
- Architect intelligent agents capable of autonomous planning, decision-making, tool use, and collaboration.
- Leverage LLMs and transformer-based models to power solutions in NLP, conversational AI, information retrieval, and decision support.
- Develop and scale ML pipelines on cloud platforms, ensuring performance, reliability, and reproducibility.
- Establish and maintain MLOps processes (CI/CD for ML, monitoring, governance) and ensure best practices in responsible AI.
- Collaborate with product, engineering, and business teams to align AI initiatives with strategic goals.
- Stay ahead of the curve on AI/ML trends, particularly in the multi-agent and agentic systems landscape, and advocate for their responsible adoption.
- Present results, insights, and roadmaps to senior leadership and non-technical stakeholders in a clear, concise manner.

Qualifications:
- 9+ years of experience in data science, business analytics, or ML engineering, with 3+ years in a leadership or principal role.
- Demonstrated experience architecting and deploying LLM-based solutions in production environments.
- Deep understanding of deep learning, transformers, and modern NLP.
- Proven hands-on experience building multi-agent systems using LangGraph, CrewAI, Agno, or related tools.
- Strong grasp of agent design principles, including memory management, planning, tool selection, and self-reflection loops.
- Expertise in cloud-based ML platforms (e.g., AWS SageMaker, GCP Vertex AI) and MLOps best practices.
- Familiarity with retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone).
- Excellent communication, stakeholder engagement, and cross-functional leadership skills.
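The agent loop that frameworks like LangGraph or CrewAI generalize (plan, act via a tool, observe, answer) can be sketched in plain Python. This is a stubbed illustration only: `fake_llm`, `calculator`, and `run_agent` are invented stand-ins, not any framework's API.

```python
# Minimal single-agent tool-use loop: the "LLM" is a stub that first asks
# for a tool call, then returns the observation as its final answer.

def calculator(expression):
    """A trivial tool the agent can invoke (arithmetic only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(question, observations):
    """Stand-in for a real model: decide to call a tool or to answer."""
    if not observations:
        return {"action": "calculator", "input": question}
    return {"action": "final", "input": observations[-1]}

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        decision = fake_llm(question, observations)
        if decision["action"] == "final":
            return decision["input"]
        tool = TOOLS[decision["action"]]
        observations.append(tool(decision["input"]))  # observe the result
    return "max steps exceeded"

print(run_agent("2 + 3 * 4"))  # -> 14
```

A production loop adds memory, tool selection over many tools, and self-reflection, which is what the listed frameworks provide.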

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

At Broadridge, we've built a culture where the highest goal is to empower others to accomplish more. If you're passionate about developing your career, while helping others along the way, come join the Broadridge team.

Cloud Architecture & Delivery
- Design, implement, and oversee scalable, resilient, and secure AI/ML platforms on AWS using SageMaker, Bedrock, and related AWS services.
- Architect end-to-end cloud solutions that meet security and compliance needs for regulated industries.

DevOps, IaC & Automation
- Automate cloud infrastructure with Terraform and AWS CloudFormation templates, promoting Infrastructure as Code best practices.
- Build and maintain CI/CD pipelines for AI/ML and data workflows using Jenkins, enabling reliable automated deployments.
- Apply software-engineering rigor and best practices, including CI/CD and automation, to machine learning.
- Architect containerized deployments on Kubernetes.

Database & Caching Architecture
- Design, deploy, and optimize Amazon DynamoDB (NoSQL) and MemoryDB (caching/session management) within secure architectures.

Identity, Access & Security
- Integrate authentication and authorization across platforms using SiteMinder and AWS Cognito.
- Implement policy-driven controls to meet regulatory, organizational, and industry security standards.

Cost Management
- Drive AWS cost optimization, proactive cost analysis, resource right-sizing, and usage monitoring to maximize value.

Reliability Engineering, Observability & Incident Response
- Embed SRE principles: monitor system health, automate failover, and champion resilient, self-healing infrastructure.
- Configure and optimize observability tools such as Datadog and Splunk for metric/event collection, distributed tracing, centralized logging, and system dashboards.
- Implement comprehensive alerting strategies to detect and remediate incidents rapidly.
- Develop and maintain incident response processes: coordinate troubleshooting and root-cause analysis; participate in on-call rotations as needed.

Collaboration & Leadership
- Partner with cross-functional teams (application architects, data scientists, engineers, Infosec, SREs) to align technology with business objectives and regulatory requirements.
- Serve as subject-matter expert and mentor for cloud, automation, SRE, and AI/ML best practices.
- Work closely with cross-functional teams, including the Internal Security & Governance Team, Cloud Solutions Group, architects, developers, QA engineers, and product managers, to deliver high-quality software products.

Continuous Improvement
- Research and introduce emerging AWS services and DevOps/SRE tooling to accelerate innovation and improve reliability.

We are dedicated to fostering a collaborative, engaging, and inclusive environment and are committed to providing a workplace that empowers associates to be authentic and bring their best to work. We believe that associates do their best when they feel safe, understood, and valued, and we work diligently and collaboratively to ensure Broadridge is a company, and ultimately a community, that recognizes and celebrates everyone's unique perspective.

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

Requirements:
- Bachelor's or Master's degree in Data Science/AI with relevant work experience, or a PhD in a relevant area.
- 5+ years of experience in data science and machine learning use cases, especially in business areas such as Sales, Marketing, and Customer Success.
- Hands-on experience operating in Jupyter notebooks, Databricks, Snowflake, or AWS SageMaker (at least one of them) is a must.
- Strong experience writing, analyzing, and troubleshooting SQL.
- Independent thinkers and doers, not order takers.
- Experience operationalizing data science models in production environments and CI/CD is a plus.
- Excellent written and verbal communication and interpersonal skills; able to collaborate effectively with technical and business partners.
- Comfortable working in Agile methodology: developing stories and attending stand-ups.

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

BU: Working Professional
Job Title: Asst Manager - DS/ML
Location: WFH
Experience Required: 3-8 years
Work schedule:
- Live sessions, Monday to Friday: 2 hrs/day in the evening, 7:30 pm - 9:30 pm
- Live sessions, Saturday & Sunday: 10 hrs/day, 10:00 am - 10:00 pm

Key Responsibilities:
- Live Instruction: Teach core DS/ML topics (supervised & unsupervised learning, deep learning, model evaluation, MLOps) through interactive sessions.
- Curriculum Collaboration: Work with content teams to design labs, code walkthroughs, and real-world case studies using Python, scikit-learn, TensorFlow/PyTorch, and cloud-based DS/ML services.
- Learner Support: Field technical questions, debug code, review notebooks, and provide actionable feedback on assignments and projects.
- Project Mentorship: Guide capstone work, e.g., image/video models, NLP pipelines, recommendation systems, and deployment pipelines.
- Continuous Improvement: Analyze learner performance data to refine modules, introduce emerging topics (e.g., transformers, generative models), and enhance assessments.

Requirements:
- 3+ years of industry or academic experience in DS/ML or AI.
- Minimum Master's degree in DS, ML & AI, or CS with a specialization in AI and DS/ML.
- Proficiency in Python and ML frameworks (scikit-learn, TensorFlow, or PyTorch).
- Familiarity with MLOps tools (Docker, Kubernetes, MLflow) and cloud ML services (AWS SageMaker, GCP AI Platform, or Azure ML).
- Excellent presentation and mentoring skills in live and small-group settings.
- Prior teaching or edtech experience is a strong plus.

Posted 2 days ago

Apply

6.0 years

6 - 9 Lacs

Hyderabad

On-site

CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing, and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.

About the Data Platform: The Data Platform will be built and managed "as a Product" to support a Data Mesh organization. It focuses on enabling decentralized management, processing, analysis, and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready and consumed internally and externally.

What does a Data Infrastructure Engineer do? A Data Infrastructure Engineer is responsible for developing, maintaining, and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science, and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment.

You will use Infrastructure as Code and CI/CD to continuously improve, evolve, and repair the platform. You will be able to design architectures and create reusable solutions that reflect the business needs.

Responsibilities will include:
- Collaborating across CACI departments to develop and maintain the data platform
- Building infrastructure and data architectures in CloudFormation and SAM
- Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora, and Snowflake
- Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions, and Apache Airflow
- Monitoring and reporting on the data platform's performance, usage, and security
- Designing and applying security and access-control architectures to secure sensitive data

You will have:
- 6+ years of experience in a Data Engineering role
- Strong experience and knowledge of data architectures implemented in AWS using native services such as S3, DataZone, Glue, EMR, SageMaker, Aurora, and Redshift
- Experience administering databases and data platforms
- Good coding discipline in terms of style, structure, versioning, documentation, and unit tests
- Strong proficiency in CloudFormation, Python, and SQL
- Knowledge and experience of relational databases such as Postgres and Redshift
- Experience using Git for code versioning and lifecycle management
- Experience operating to Agile principles and ceremonies
- Hands-on experience with CI/CD tools such as GitLab
- Strong problem-solving skills and the ability to work independently or in a team environment
- Excellent communication and collaboration skills
- A keen eye for detail and a passion for accuracy and correctness in numbers

Whilst not essential, the following skills would also be useful:
- Experience using Jira or other agile project-management and issue-tracking software
- Experience with Snowflake
- Experience with spatial data processing
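The "pipelines as code" idea above can be sketched framework-free: each stage is a small, independently testable pure function, composed in order the way a Step Functions state machine or Airflow DAG would chain tasks. Stage names and the sample data are invented for illustration.

```python
# Toy extract -> clean -> aggregate pipeline built from composable stages.
import csv
import io

RAW = "region,sales\nnorth,100\nsouth,\nnorth,50\n"

def extract(text):
    """Parse CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def clean(rows):
    """Drop rows with missing sales and cast the rest to int."""
    return [{**r, "sales": int(r["sales"])} for r in rows if r["sales"]]

def aggregate(rows):
    """Total sales per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0) + r["sales"]
    return totals

def run_pipeline(text, stages=(extract, clean, aggregate)):
    data = text
    for stage in stages:  # each stage can be unit-tested in isolation
        data = stage(data)
    return data

print(run_pipeline(RAW))  # {'north': 150}
```

In a real deployment each stage would map to a Glue job, Lambda, or Airflow task, with the composition defined in CloudFormation or a DAG rather than a tuple.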

Posted 2 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderabad

On-site

Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas, Oncology, Inflammation, General Medicine, and Rare Disease, we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member who assists in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Master's degree or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients.
Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 days ago

Apply

0 years

8 - 15 Lacs

Hyderabad

On-site

Services:
- AWS Operations and DevOps Support
- 24/7 Monitoring
- Incident Management with SLAs
- Cost Optimization & Governance
- Adherence to Security & Compliance Best Practices
- Automation of provisioning and workflows

Deliverables:
- Monthly reports (usage, cost, incidents)
- Cloud architecture documentation
- Security posture and compliance assessments
- DevOps pipeline maintenance

Requirements:
- AWS Advanced or Premier Partner status preferred
- Experience with container platforms (ECS & EKS) and ML workloads
- Relevant certifications and references
- Experience with HPC (ParallelCluster, AWS Batch) or ML (SageMaker) for life-sciences data workloads is a plus

Job Types: Full-time, Permanent
Pay: ₹867,326.69 - ₹1,504,417.21 per year
Benefits: Health insurance, Provident Fund
Shift: Day shift, Evening shift
Work Days: Monday to Friday

Posted 2 days ago

Apply

5.0 years

8 - 18 Lacs

Mohali

Remote

Job Title: AI & ML Developer (Python)
Company: ChicMic Studios
Location: Mohali, Punjab (hybrid options available)
Job Type: Full-Time | 5 days working
Experience Required: 5+ years; immediate joiners preferred

Job Summary: ChicMic Studios is seeking an experienced and innovative AI/ML Developer with strong expertise in Python-based web development and machine learning. The ideal candidate will have 5+ years of hands-on experience with Django, Flask, and cloud deployment on AWS, along with a solid understanding of transformer architectures, model deployment, and MLOps practices.

Key Responsibilities:
- Develop and maintain robust web applications using Django and Flask
- Build and manage scalable RESTful APIs using Django REST Framework (DRF)
- Deploy, manage, and optimize applications using AWS services: EC2, S3, Lambda, RDS, etc.
- Design and integrate AI/ML APIs into production systems
- Build ML models using PyTorch, TensorFlow, and Scikit-learn
- Implement transformer architectures such as BERT and GPT for NLP and vision tasks
- Apply model optimization techniques: quantization, pruning, hyperparameter tuning
- Deploy models using SageMaker, TorchServe, or TensorFlow Serving
- Ensure high performance and scalability of deployed AI systems
- Collaborate with cross-functional teams to deliver scalable AI-powered products
- Follow clean coding practices, conduct code reviews, and stay current with AI/ML advancements

Required Skills & Qualifications:
- B.Tech/MCA
- 5+ years of Python development experience
- Expertise in Django, Flask, and DRF
- Solid experience deploying apps and models on AWS
- Proficiency in PyTorch, TensorFlow, and Scikit-learn
- Experience with transformer models (BERT, GPT, etc.)
- Strong knowledge of SQL and NoSQL databases (PostgreSQL, MongoDB)
- Familiarity with MLOps practices for end-to-end model management
- Bonus: basic front-end skills (JavaScript, HTML, CSS)
- Excellent communication and problem-solving skills

Why Join ChicMic Studios?
- Global exposure across 16+ modern tech stacks
- High-retention culture and innovation-driven environment
- Opportunity to work on cutting-edge AI/ML and NLP projects
- EPF, earned leaves, career growth support
- Hybrid/WFH flexibility for exceptional candidates

To Apply: Send your resume to disha.mehta755@chicmicstudios.in
Contact: +91 98759 52834
Website: www.chicmicstudios.in

Job Type: Full-time
Pay: ₹800,000.00 - ₹1,800,000.00 per year
Benefits: Flexible schedule, Provident Fund, Work from home
Education: Bachelor's (Required)
Experience: Python: 5 years (Required); AI: 2 years (Required)
Language: English (Required)
Work Location: In person
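One of the optimization techniques listed above, post-training quantization, can be sketched in plain Python: symmetric int8 quantization with a single scale factor. This is an illustration only; real deployments would use framework tooling (e.g. PyTorch's quantization utilities) rather than hand-rolled code.

```python
# Symmetric int8 post-training quantization of a flat weight list.

def quantize_int8(weights):
    """Map float weights to int8 values with one shared symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # 1.0 guards all-zero
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02]
q, s = quantize_int8(w)
approx = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(w, approx))
assert err <= s / 2  # quantization error is bounded by half a step
```

The same bound (half a quantization step) is why int8 often preserves accuracy: for well-scaled weights the per-weight error is tiny relative to the weight magnitudes.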

Posted 2 days ago

Apply

5.0 years

3 - 5 Lacs

Vadodara

On-site

Role overview
As a Senior AI Architect Engineer, you will be instrumental in designing and orchestrating advanced AI solutions that integrate seamlessly into our business processes and technology ecosystem. Your role will focus on creating scalable, intelligent architectures that drive innovation, operational efficiency, and business transformation. By aligning AI strategy with organizational goals, you will ensure our AI initiatives are robust, adaptable, and positioned to meet future challenges, enabling a competitive advantage in an evolving market.

The role
To support our strategic growth and innovation agenda, we seek a seasoned Senior AI Architect Engineer who can lead complex AI projects and foster cross-functional collaboration.

Expertise and experience
- Minimum of 5 years of experience in architecture, machine learning engineering, or data science, with a focus on delivering enterprise-grade AI solutions.
- Proven track record in designing, deploying, and operationalizing AI/ML models in production environments.
- Deep familiarity with AI/ML frameworks, and experience with cloud AI platforms such as AWS SageMaker, Azure AI, or Google AI Platform.
- Strong expertise in data engineering, data modelling, and integration, with the ability to architect end-to-end AI pipelines.
- Proficiency in programming languages including Python and SQL, and experience with containerization (Docker) and orchestration (Kubernetes).
- Experience with MLOps practices including CI/CD pipelines, model monitoring, and automated retraining.
- Solid understanding of AI ethics, data privacy, and regulatory compliance.

Technical Leadership
- Lead the design and implementation of scalable AI architectures that solve complex business problems and integrate with existing systems.
- Define and enforce AI development standards, best practices, and governance frameworks across projects.
- Drive technology evaluation and selection to ensure the adoption of the most effective AI tools and platforms.
- Implement robust data security, privacy controls, and ethical AI guidelines within all AI initiatives.
- Ensure model accuracy, reliability, and maintainability through rigorous validation and continuous monitoring.
- Foster innovation by integrating emerging AI technologies and methodologies into business solutions.
- Mentor and guide cross-functional teams, promoting AI literacy and excellence.
- Architect and maintain cloud-native AI infrastructure to support scalability and agility.
- Champion automation and orchestration to streamline AI workflows and deployment processes.

Posted 2 days ago

Apply

6.0 years

0 Lacs

India

Remote

We're Hiring: Machine Learning Engineer (Part-Time | Flexible Remote)

Are you an experienced ML Engineer looking for a flexible, part-time opportunity to work on real-world impact projects? Join us in building intelligent systems that match candidates to projects using structured skills, assessments, and feedback data. This is your chance to own end-to-end ML pipelines and work on meaningful automation in the HRTech space, all on your own schedule.

🔍 Role Overview: We're looking for an ML Engineer to architect and deploy predictive models that power candidate-project matching intelligence, leveraging structured applicant data. You'll design scalable ML workflows and inference pipelines on AWS.

🔧 Key Responsibilities:
- Build data pipelines to ingest and preprocess applicant data (skills, assessments, feedback)
- Engineer task-specific features and transformation logic
- Train predictive models (logistic regression, XGBoost, etc.) on SageMaker
- Automate batch ETL, retraining flows, and storage with AWS S3
- Deploy inference endpoints with Lambda and integrate with systems in production
- Monitor model drift, performance, and feedback loops for continuous learning
- Document architecture and workflows, and ensure explainability

✅ You'll Need:
- 4-6+ years in ML/AI engineering roles
- Strong command of Python, scikit-learn, XGBoost, and feature engineering
- Proven experience with the AWS ML stack: SageMaker, Lambda, S3
- Hands-on SQL/NoSQL and automated data workflows
- Familiarity with CI/CD for ML (CodePipeline, CodeBuild, etc.)
- Ability to independently own schema, features, and model delivery
- Bachelor's or Master's in CS, Engineering, or a related field

⭐ Bonus Points For:
- NLP experience (extracting features from feedback/comments)
- Familiarity with serverless architectures
- Background in recruitment, talent platforms, or skills-matching systems

👉 Apply now or DM us to know more.
#Hiring #MachineLearning #MLJobs #RemoteJobs #PartTime #AWS #HRTech #MLOps #AI #RecruitmentTech #SageMaker #FlexibleWork
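The matching workflow described above (features in, match probability out) can be sketched with a tiny logistic regression trained by stochastic gradient descent. The features and data here are made up for illustration; a real pipeline would use scikit-learn or XGBoost on SageMaker, as the post describes, not a hand-rolled model.

```python
# Toy candidate-project match scorer: two hypothetical features
# (skill_overlap, assessment_score) and a logistic regression fit by SGD.
import math

X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.3), (0.1, 0.4)]  # invented training data
y = [1, 1, 0, 0]                                       # 1 = good match

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in zip(X, y):
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            g = p - target          # gradient of the log-loss w.r.t. z
            w[0] -= lr * g * x1
            w[1] -= lr * g * x2
            b -= lr * g
    return w, b

w, b = train(X, y)
strong = sigmoid(w[0] * 0.85 + w[1] * 0.90 + b)  # strong candidate
weak = sigmoid(w[0] * 0.10 + w[1] * 0.20 + b)    # weak candidate
assert strong > 0.5 > weak
```

The inference step (`sigmoid(w·x + b)`) is what a Lambda-backed endpoint would compute per request, with the trained weights loaded from S3.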

Posted 2 days ago

Apply

2.0 years

0 Lacs

Andhra Pradesh

On-site

We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and Agentic AI applications using state-of-the-art LLMs and AWS-native services. You will work on both R&D-focused proofs of concept and production-grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life.

Key Responsibilities:
- Design, develop, and deploy Generative AI and Agentic AI applications using LLMs such as Claude, Cohere, Titan, and others.
- Lead the development of proof-of-concept (PoC) solutions to explore new use cases and validate AI-driven innovations.
- Architect and implement retrieval-augmented generation (RAG) pipelines using LangChain and vector databases such as OpenSearch.
- Integrate with AWS services including the Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, and Amazon Q.
- Apply few-shot, one-shot, and zero-shot learning techniques to fine-tune and prompt LLMs effectively.
- Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions.
- Implement CI/CD pipelines and infrastructure as code using Terraform, and follow DevOps best practices.
- Optimize performance, cost, and reliability of AI applications in production environments.
- Document architecture, workflows, and best practices to support knowledge sharing and onboarding.

Required Skills & Technologies:
- Experience in Python development, with at least 2 years in AI/ML or GenAI projects.
- Strong hands-on experience with LLMs and Generative AI frameworks.
- Proficiency in LangChain, vector DBs (e.g., OpenSearch), and prompt engineering.
- Deep understanding of the AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS.
- Experience with serverless architectures, containerization, and cloud-native development.
- Familiarity with DevOps tools: Git, CI/CD, Terraform.
- Strong debugging, performance-tuning, and problem-solving skills.

Preferred Qualifications:
- Experience with Amazon Q, Amazon Connect, or Amazon Titan.
- Familiarity with Claude, Cohere, or other foundation models.
- Bachelor's or Master's degree in Computer Science, AI/ML, or a related field.
- Experience building agentic workflows and multi-agent orchestration is a plus.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
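The retrieval half of a RAG pipeline can be sketched without any external services: here bag-of-words counts and cosine similarity stand in for a real embedding model and a vector store such as OpenSearch. The documents, query, and function names are invented for illustration.

```python
# Toy retriever: rank documents by cosine similarity of word-count vectors,
# then return the top-k as context for the LLM prompt.
import math
from collections import Counter

DOCS = [
    "SageMaker trains and deploys machine learning models",
    "Amazon Connect powers cloud contact centers",
    "LangChain orchestrates LLM prompts and tools",
]

def embed(text):
    """Bag-of-words 'embedding': a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]  # top-k context passed into the generation prompt

print(retrieve("deploy machine learning models"))
```

In a production pipeline the generation step then stuffs the retrieved passages into the prompt; LangChain's retriever abstractions wrap exactly this retrieve-then-generate flow over a real vector database.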

Posted 2 days ago

Apply

2.0 years

0 Lacs

Andhra Pradesh, India

On-site

We are seeking an experienced and innovative Generative AI Developer to join our AWAC team. In this role, you will lead the design and development of GenAI and Agentic AI applications using state of the art LLMs and AWS native services. You will work on both R&D focused proofof concepts and production grade implementations, collaborating with cross-functional teams to bring intelligent, scalable solutions to life. Key Responsibilities Design, develop, and deploy Generative AI and Agentic AI applications using LLMs such as Claude, Cohere, Titan, and others. Lead the development of proof of concept (PoC) solutions to explore new use cases and validate AI driven innovations. Architect and implement retrieval augmented generation (RAG) pipelines using LangChain and Vector Databases like OpenSearch. Integrate with AWS services including Bedrock API, SageMaker, SageMaker JumpStart, Lambda, EKS/ECS, Amazon Connect, Amazon Q. Apply few shot, one shot, and zero shot learning techniques to fine tune and prompt LLMs effectively. Collaborate with data scientists, ML engineers, and business stakeholders to translate complex requirements into scalable AI solutions. Implement CI/CD pipelines, infrastructure as code using Terraform, and follow DevOps best practices. Optimize performance, cost, and reliability of AI applications in production environments. Document architecture, workflows, and best practices to support knowledge sharing and onboarding. Required Skills & Technologies Experience in Python development, with at least 2 years in AI/ML or GenAI projects. Strong hands on experience with LLMs and Generative AI frameworks. Proficiency in LangChain, Vector DBs (e.g OpenSearch), and prompt engineering. Deep understanding of AWS AI/ML ecosystem: Bedrock, SageMaker, Lambda, EKS/ECS. Experience with serverless architectures, containerization, and cloud native development. Familiarity with DevOps tools: Git, CI/CD, Terraform. 
Strong debugging, performance tuning, and problem-solving skills.
Preferred Qualifications: Experience with Amazon Q, Amazon Connect, or Amazon Titan. Familiarity with Claude, Cohere, or other foundation models. Bachelor's or Master's degree in Computer Science, AI/ML, or a related field. Experience in building agentic workflows and multi-agent orchestration is a plus.
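The RAG pipeline described above follows a common pattern: retrieve the most relevant passages, then ground the LLM prompt in them. A minimal, dependency-free sketch of that retrieval-and-grounding step; the toy corpus, bag-of-words scoring, and prompt template are illustrative stand-ins for OpenSearch and a Bedrock-hosted model, not their APIs:

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for an OpenSearch index (illustrative only).
DOCS = {
    "doc1": "Bedrock exposes foundation models such as Claude and Titan via an API.",
    "doc2": "SageMaker JumpStart provides prebuilt models and solution templates.",
    "doc3": "Lambda functions can glue retrieval and generation steps together.",
}

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by bag-of-words cosine similarity to the query."""
    q = _vec(query)
    ranked = sorted(DOCS, key=lambda d: _cosine(q, _vec(DOCS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt from the top-k retrieved passages."""
    context = "\n".join(DOCS[d] for d in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a real pipeline, `retrieve` would be a vector-similarity query against OpenSearch embeddings and `build_prompt`'s output would go to a Bedrock model invocation.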

Posted 2 days ago

Apply

0 years

0 Lacs

Gaya, Bihar, India

On-site

We’re Hiring: DevOps Engineer @ MSG24x7 🚀 Because scaling businesses on SMS and WhatsApp isn’t a quiet job. Are you someone who: Speaks fluent #AWS ☁️ Orchestrates #Kubernetes clusters like symphonies 🎻 Handles millions of #API calls without breaking a sweat 🔥 Thrives in live environments (pre- and post-production) Has bonus creds in #SageMaker or #Bedrock? Then we might just be your people. At MSG24x7, we don’t just build scalable systems - we build systems that empower brands to grow, engage, and deliver. Your work won't sit in a sandbox. It’ll be live, loud, and real from Day 1. 👀 Who we're looking for: An experienced DevOps Engineer who loves automation, scalability, and solving high-impact challenges. **Call/WhatsApp: +919031022607

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: We are conducting an in-person hiring drive for MLOps / Data Science positions in Pune and Bengaluru on 2nd August 2025. Interview locations are mentioned below: Pune – Persistent Systems, Veda Complex, Rigveda-Yajurveda-Samaveda-Atharvaveda Plot No. 39, Phase I, Rajiv Gandhi Information Technology Park, Hinjawadi, Pune, 411057 Bangalore – Persistent Systems, The Cube at Karle Town Center Rd, Dada Mastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024 We are looking for an experienced and talented Data Scientist to join our growing data competency team. The ideal candidate will have a strong background in working with GenAI, ML, LangChain, LangGraph, MLOps architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics. Role: MLOps, Data Science Job Location: All PSL Locations Experience: 5+ Years Job Type: Full Time Employment What You'll Do: Design, build, and manage scalable ML model deployment pipelines (CI/CD for ML). Automate model training, validation, monitoring, and retraining workflows. Implement model governance, versioning, and reproducibility best practices. Collaborate with data scientists, engineers, and product teams to operationalize ML solutions. Ensure robust monitoring and performance tuning of deployed models. Expertise You'll Bring: Strong experience with MLOps tools & frameworks (MLflow, Kubeflow, SageMaker, Vertex AI, etc.). Proficiency in containerization (Docker, Kubernetes). Good knowledge of cloud platforms (AWS, Azure, or GCP). Expertise in Python and familiarity with ML libraries (TensorFlow, PyTorch, scikit-learn). Solid understanding of CI/CD, infrastructure as code, and automation tools.
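The model governance and versioning practices listed above often center on a registry with staged promotion (Staging, Production, Archived, as in MLflow's model registry). A minimal in-memory sketch of that champion/challenger promotion logic; the class and method names here are hypothetical, not MLflow's API:

```python
# Minimal in-memory model registry sketch: versioning plus stage promotion.
# Illustrative API only; a real setup would back this with MLflow or similar.
class ModelRegistry:
    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name: str, metric: float) -> int:
        """Record a new model version in Staging and return its version number."""
        versions = self._versions.setdefault(name, [])
        v = len(versions) + 1
        versions.append({"version": v, "metric": metric, "stage": "Staging"})
        return v

    def promote_if_better(self, name: str, version: int) -> bool:
        """Promote to Production only if the candidate beats the current champion."""
        versions = self._versions[name]
        candidate = versions[version - 1]
        champion = next((m for m in versions if m["stage"] == "Production"), None)
        if champion is None or candidate["metric"] > champion["metric"]:
            if champion:
                champion["stage"] = "Archived"  # retire the old champion
            candidate["stage"] = "Production"
            return True
        return False
```

The gate in `promote_if_better` is the piece a CI/CD-for-ML pipeline automates: retrain, evaluate, and promote only on improvement.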
Benefits: Competitive salary and benefits package. Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.

Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally. Impact the world in powerful, positive ways, using the latest technologies. Enjoy collaborative innovation, with diversity and work-life wellbeing at the core. Unlock global opportunities to work and learn with the industry’s best.

Let’s unleash your full potential at Persistent. “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”

Posted 2 days ago

Apply

0.0 - 2.0 years

8 - 18 Lacs

Mohali, Punjab

Remote

Job Title: AI & ML Developer (Python)
Company: ChicMic Studios
Location: Mohali, Punjab (Hybrid Options Available)
Job Type: Full-Time | 5 Days Working
Experience Required: 5+ Years | Immediate Joiners Preferred
Job Summary: ChicMic Studios is seeking an experienced and innovative AI/ML Developer with strong expertise in Python-based web development and machine learning. The ideal candidate will have 5+ years of hands-on experience with Django, Flask, and cloud deployment on AWS, along with a solid understanding of transformer architectures, model deployment, and MLOps practices.
Key Responsibilities: Develop and maintain robust web applications using Django and Flask. Build and manage scalable RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications using AWS services: EC2, S3, Lambda, RDS, etc. Design and integrate AI/ML APIs into production systems. Build ML models using PyTorch, TensorFlow, and Scikit-learn. Implement transformer architectures like BERT and GPT for NLP and vision tasks. Apply model optimization techniques: quantization, pruning, hyperparameter tuning. Deploy models using SageMaker, TorchServe, or TensorFlow Serving. Ensure high performance and scalability of deployed AI systems. Collaborate with cross-functional teams to deliver scalable AI-powered products. Follow clean coding practices, conduct code reviews, and stay current with AI/ML advancements.
Required Skills & Qualifications: B.Tech/MCA. 5+ years of Python development experience. Expertise in Django, Flask, and DRF. Solid experience deploying apps and models on AWS. Proficiency in PyTorch, TensorFlow, and Scikit-learn. Experience with transformer models (BERT, GPT, etc.). Strong knowledge of SQL and NoSQL databases (PostgreSQL, MongoDB). Familiarity with MLOps practices for end-to-end model management. Bonus: basic front-end skills (JavaScript, HTML, CSS). Excellent communication and problem-solving skills.
Why Join ChicMic Studios?
Global exposure across 16+ modern tech stacks High retention culture and innovation-driven environment Opportunity to work on cutting-edge AI/ML and NLP projects EPF, Earned Leaves, Career Growth Support Hybrid/WFH flexibility for exceptional candidates To Apply: Send your resume to: disha.mehta755@chicmicstudios.in Contact: +91 98759 52834 Website: www.chicmicstudios.in Job Type: Full-time Pay: ₹800,000.00 - ₹1,800,000.00 per year Benefits: Flexible schedule Provident Fund Work from home Education: Bachelor's (Required) Experience: Python: 5 years (Required) AI: 2 years (Required) Language: English (Required) Work Location: In person

Posted 2 days ago

Apply

10.0 years

0 Lacs

Chandigarh, India

On-site

Job Description: 7–10 years of industry experience, with at least 5 years in machine learning roles. Advanced proficiency in Python and common ML libraries: TensorFlow, PyTorch, Scikit-learn. Experience with distributed training, model optimization (quantization, pruning), and inference at scale. Hands-on experience with cloud ML platforms: AWS (SageMaker), GCP (Vertex AI), or Azure ML. Familiarity with MLOps tooling: MLflow, TFX, Airflow, or Kubeflow; and data engineering frameworks like Spark, dbt, or Apache Beam. Strong grasp of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay). Excellent problem-solving, communication, and documentation skills.
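The post-deployment monitoring called out above (data drift, model decay) is commonly prototyped with a population stability index (PSI) over binned feature values. A stdlib-only sketch; the bin count and the 0.2 alert threshold are conventional choices, not fixed rules:

```python
from math import log

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population stability index between a baseline and a live sample.
    PSI near 0 means no drift; > 0.2 is a common 'significant drift' alert level."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]  # equal-width bins

    def frac(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        # Floor at a tiny value so log() never sees an empty bin.
        return [max(c / n, 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per feature between the training baseline and a recent serving window, and page or trigger retraining when the index crosses the threshold.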

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

Key Skills Required:
5+ years of experience in Software Engineering and MLOps
Strong development experience on AWS, specifically AWS SageMaker (mandatory)
Experience with MLflow, GitLab, CDK (mandatory)
Exposure to AWS Data Zone (preferred)
Proficient in at least one general-purpose programming language: Python, R, Scala, Spark
Hands-on experience with production-grade development, integration, and support
Strong adherence to scalable, secure, and reliable application development best practices
Candidates should have a strong analytical mindset and contribute to MLOps research initiatives

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Bizoforce is actively hiring an experienced AI/ML Engineer to join our innovative team and work on cutting-edge solutions in Generative AI, LLMs, and multi-agent architectures. This is a fully remote role based in India, ideal for professionals passionate about AI innovation and real-world deployment. You’ll be contributing to advanced applications such as LLM-based tutoring systems, OCR-powered tools, AI content generators, and data-driven assistants in EdTech, enterprise, and healthcare domains. A clinical background is a plus but not required.
Key Responsibilities:
* Design, develop, and deploy scalable LLM-based systems, RAG pipelines, and Generative AI applications
* Engineer structured prompts for reliable outputs using zero-shot, CoT, few-shot, and meta-prompting methods
* Build and integrate multi-modal AI models (text, vision, OCR, etc.) using frameworks like Crew AI and LangGraph
* Develop and manage backend systems using Python (FastAPI/Flask), with Redis, PostgreSQL, MongoDB
* Collaborate with cross-functional teams across the full ML lifecycle (data, model, API, deployment, optimization)
* Use tools like Docker and GitHub Actions (CI/CD), and deploy on AWS, Azure, or GCP environments
* Participate in code reviews, testing, and MLOps workflows to ensure high-quality, scalable output
Required Skills
* 5+ years of experience in AI/ML engineering with strong backend development skills
* Expert-level Python programming and experience with FastAPI or Flask
* Hands-on experience with LLMs (GPT-4, LLaMA, Claude, Phi-3), Generative AI, and prompt engineering
* Familiarity with frameworks such as LangChain, LangGraph, Crew AI, Smol Agents
* Strong understanding of RAG (Retrieval-Augmented Generation) and multi-agent AI systems
* Experience in Computer Vision (OpenCV, YOLOv8) and OCR (DocTR, TrOCR, Mistral OCR)
* Vector DBs: Milvus, FAISS, RedisVector; Databases: MongoDB, PostgreSQL
* Cloud: AWS (S3, EC2, SageMaker), Azure, GCP; Tools: Docker, Git, GitHub Actions
* Bonus: Experience in EdTech or clinical AI solutions (not mandatory)
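The zero-shot, few-shot, and meta-prompting methods listed above differ mainly in how many worked examples are packed into the prompt. A small stdlib-only prompt builder illustrating the idea; the labels and layout are one common convention, not a standard:

```python
def make_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a zero- or few-shot prompt: zero-shot when `examples` is empty,
    few-shot when worked input/output pairs are prepended as demonstrations."""
    parts = [f"Task: {task}"]
    for x, y in examples:  # each pair becomes one worked demonstration
        parts.append(f"Input: {x}\nOutput: {y}")
    # Leave the final Output empty for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)
```

The same builder covers both regimes: `make_prompt(task, [], query)` is zero-shot, while passing two or three demonstrations makes it few-shot.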

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🚀 We're Hiring: Data Scientist (AI/ML | Industrial IoT | Time Series)
📍 Location: Hyderabad
🧠 Experience: 5+ Years

Join our AI/ML initiative to predict industrial alarms from complex sensor data in refinery environments. You'll lead the development of predictive models using time series data and maintenance logs, working in an Expert-in-the-Loop (EITL) setup with domain experts.

🔍 Key Responsibilities: Develop ML models for anomaly detection & alarm prediction from sensor/IoT time series data. Collaborate with domain experts to validate model outputs. Implement data preprocessing, feature engineering & scalable pipelines. Monitor model performance, drift, and explainability (SHAP, confidence), and manage retraining. Contribute to production-grade MLOps workflows.

✅ What You Bring: 5+ years of experience in Data Science/ML, especially with time series models (LSTM, ARIMA, Autoencoders). Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch). Hands-on experience with IoT/sensor data in manufacturing/industrial domains. Experience with MLOps tools (MLflow, SageMaker, Kubeflow). Strong grasp of model interpretability, ETL (Pandas, PySpark, SQL), and cloud deployment.

✨ Bonus Points: Background in oil & gas, SCADA systems, maintenance logs, or industrial control systems. Experience with cloud platforms (AWS/GCP/Azure) and alarm classification standards.
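Before reaching for LSTMs or autoencoders, the sensor-alarm task described above is often baselined with a rolling z-score over recent readings. A stdlib-only sketch; the window size and 3-sigma threshold are conventional defaults, not requirements:

```python
from statistics import mean, stdev

def anomalies(series: list[float], window: int = 5, z: float = 3.0) -> list[int]:
    """Flag indices whose value deviates more than `z` standard deviations
    from the preceding `window` readings (a simple streaming baseline)."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]  # trailing window only; no look-ahead
        sd = stdev(hist)
        if sd == 0:
            continue  # flat history: z-score undefined, skip
        if abs(series[i] - mean(hist)) > z * sd:
            flagged.append(i)
    return flagged
```

A baseline like this gives the domain experts in the EITL loop something concrete to validate before more expensive sequence models are trained.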

Posted 2 days ago

Apply

2.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

Remote

Role: Machine Learning Engineer - MLOps Job Overview As a Senior Software Development Engineer, Machine Learning (ML) Operations in the Technology & Engineering division, you will be responsible for enabling PitchBook’s Machine Learning teams and practitioners by providing tools that optimize all aspects of the Machine Learning Development Life Cycle (MLDLC). Your work will support projects in a variety of domains, including Generative AI (GenAI), Large Language Models (LLMs), Natural Language Processing (NLP), Classification, and Regression. Team Overview Your team’s goal will be to reduce friction and time-to-business-value for teams building Artificial Intelligence (AI) solutions at PitchBook. You will be essential in helping to build exceptional AI solutions relied upon and used by thousands of PitchBook customers every day. You will work with PitchBook professionals around the world with the collective goal of delighting our customers and growing our business. While demonstrating a growth mindset, you will be expected to continuously develop your expertise in a way that enhances PitchBook’s AI capabilities in a scalable and repeatable manner. You will be able to solve various common challenges faced in the MLDLC while providing technical guidance to less experienced peers. 
Outline Of Duties And Responsibilities
Serve as a force multiplier for development teams by creating golden paths that remove roadblocks and improve ideation and innovation
Collaborate with other engineers, product managers, and internal stakeholders in an Agile environment
Design and deliver on projects end-to-end with little to no guidance
Provide support to teams building and deploying AI applications by addressing common pain points in the MLDLC
Learn constantly and be passionate about discovering new tools, technologies, libraries, and frameworks (commercial and open source) that can be leveraged to improve PitchBook’s AI capabilities
Support the vision and values of the company through role modeling and encouraging desired behaviors
Participate in various cross-functional company initiatives and projects as requested
Contribute to strategic planning in a way that ensures the team is building exceptional products that bring real business value
Evaluate frameworks, vendors, and tools that can be used to optimize processes and costs with minimal guidance
Experience, Skills And Qualifications
Degree in Computer Science, Information Systems, Machine Learning, or a similar field preferred (or commensurate experience)
2+ years of experience in hands-on development of Machine Learning algorithms
2+ years of experience in hands-on deployment of Machine Learning services
2+ years of experience supporting the entire MLDLC, including post-deployment operations such as monitoring and maintenance
2+ years of experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP)
Experience with at least 80% of the following: PyTorch, TensorFlow, LangChain, scikit-learn, Redis, Elasticsearch, Amazon SageMaker, Google Vertex AI, Weights & Biases, FastAPI, Prometheus, Grafana, Apache Kafka, Apache Airflow, MLflow, Kubeflow
Ability to break large, complex problems into well-defined steps, ensuring iterative development and continuous improvement
Experience in cloud-native delivery, with a deep practical understanding of containerization technologies such as Kubernetes and Docker, and the ability to manage these across different regions
Proficiency in GitOps and creation/management of CI/CD pipelines
Demonstrated experience building and using SQL/NoSQL databases
Demonstrated experience with Python (Java is a plus) and other relevant programming languages and tools
Excellent problem-solving skills with a focus on innovation, efficiency, and scalability in a global context
Strong communication and collaboration skills, with the ability to engage effectively with internal customers across various cultures and regions
Ability to be a team player who can also work independently
Experience working across multiple development teams is a plus
Working Conditions
The job conditions for this position are in a standard office setting. Employees in this position use a PC and phone on an ongoing basis throughout the day. Limited corporate travel may be required to remote offices or other business meetings and events.
Morningstar India is an equal opportunity employer. Morningstar’s hybrid work environment gives you the opportunity to work remotely and collaborate in-person each week. We’ve found that we’re at our best when we’re purposely together on a regular basis, at least three days each week. A range of other benefits are also available to enhance flexibility as needs change. No matter where you are, you’ll have tools and resources to engage meaningfully with your global colleagues. Morningstar India Private Ltd. (Delhi) Legal Entity

Posted 2 days ago

Apply

5.0 years

0 Lacs

Salem, Tamil Nadu, India

On-site

Description / Position Overview This is a key position for our client to help create data-driven technology solutions that will position us as the industry leader in healthcare, financial, and clinical administration. This hands-on Lead Data Scientist role will focus on building and implementing machine learning models and predictive analytics solutions that will drive the new wave of AI-powered innovation in healthcare. You will be the lead data science technologist responsible for developing and implementing a multitude of ML/AI products from concept to production, helping us gain a competitive advantage in the market. Alongside our Director of Data Science, you will work at the intersection of healthcare, finance, and cutting-edge data science to solve some of the industry's most complex challenges. This is a greenfield opportunity within VHT’s Product Transformation division, where you'll build groundbreaking machine learning capabilities from the ground up. You'll have the chance to shape the future of VHT’s data science & analytics foundation while working with cutting-edge tools and methodologies in a collaborative, innovation-driven environment. Key Responsibilities As the Lead Data Scientist, your role will require you to work closely with subject matter experts in clinical and financial administration across practices, health systems, hospitals, and payors. Your machine learning projects will span the entire healthcare revenue cycle - from clinical encounters through financial transaction completion, extending into back-office operations and payer interactions. 
You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of:
• Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood
• Claim Denials Prediction - identifying high-risk claims before submission
• Payment Amount Prediction - forecasting expected reimbursement amounts
• Cash Flow Forecasting - predicting revenue timing and patterns
• Patient-Related Models - enhancing patient financial experience and outcomes
• Claim Processing Time Prediction - optimizing workflow and resource allocation
Additionally, we will work on emerging areas and integration opportunities, for example, denial prediction + appeal success probability or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care.
VHT Technical Environment
• Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
• Development Tools: Jupyter Notebooks, Git, Docker
• Programming: Python, SQL, R (optional)
• ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
• Data Processing: Spark, Pandas, NumPy
• Visualization: Matplotlib, Seaborn, Plotly, Tableau
Required Qualifications
• Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field
• 5+ years of hands-on data science experience with a proven track record of deploying ML models to production
• Expert-level proficiency in SQL and Python, with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.)
• Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable)
• Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques
• Experience with model deployment, monitoring, and MLOps practices
• Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders
Preferred Qualifications
• US Healthcare industry experience, particularly in Health Insurance and/or Medical Revenue Cycle Management
• Experience with healthcare data standards (HL7, FHIR, X12 EDI)
• Knowledge of healthcare regulations (HIPAA, compliance requirements)
• Experience with deep learning frameworks (TensorFlow, PyTorch)
• Familiarity with real-time streaming data processing
• Previous leadership or mentoring experience
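At its simplest, a claim-denial predictor like the one described above is a logistic score over engineered claim features. A stdlib-only sketch with hypothetical feature names and hand-set weights; a production model would learn these from historical remittance data:

```python
from math import exp

# Hypothetical features and weights; in practice these are learned, not hand-set.
WEIGHTS = {"missing_auth": 2.0, "coding_mismatch": 1.5, "days_to_file": 0.01}
BIAS = -3.0

def denial_risk(claim: dict) -> float:
    """Logistic probability that a claim is denied, from binary/numeric features."""
    score = BIAS + sum(WEIGHTS[k] * claim.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + exp(-score))

def flag_for_review(claim: dict, threshold: float = 0.5) -> bool:
    """Route high-risk claims to a scrubbing queue before submission."""
    return denial_risk(claim) >= threshold
```

The operational value is in `flag_for_review`: scoring claims before submission lets billing teams fix missing authorizations or coding mismatches while correction is still cheap.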

Posted 2 days ago

Apply

5.0 years

0 Lacs

Vadodara, Gujarat, India

On-site

Designation - Sr. AI Architect
Experience - 5+ years
Location - Vadodara (On-site)
Role overview: As a Senior AI Architect Engineer, you will be instrumental in designing and orchestrating advanced AI solutions that integrate seamlessly into our business processes and technology ecosystem. Your role will focus on creating scalable, intelligent architectures that drive innovation, operational efficiency, and business transformation. By aligning AI strategy with organizational goals, you will ensure our AI initiatives are robust, adaptable, and positioned to meet future challenges, enabling a competitive advantage in an evolving market.
Expertise and experience
● Minimum of 5 years of experience in architecture, machine learning engineering, or data science, with a focus on delivering enterprise-grade AI solutions.
● Proven track record in designing, deploying, and operationalizing AI/ML models in production environments.
● Deep familiarity with AI/ML frameworks, and experience with cloud AI platforms like AWS SageMaker, Azure AI, or Google AI Platform.
● Strong expertise in data engineering, data modelling, and integration, with the ability to architect end-to-end AI pipelines.
● Proficiency in programming languages including Python and SQL, and experience with containerization (Docker) and orchestration (Kubernetes).
● Experience with MLOps practices including CI/CD pipelines, model monitoring, and automated retraining.
● Solid understanding of AI ethics, data privacy, and regulatory compliance.
Technical Leadership
● Lead the design and implementation of scalable AI architectures that solve complex business problems and integrate with existing systems.
● Define and enforce AI development standards, best practices, and governance frameworks across projects.
● Drive technology evaluation and selection to ensure the adoption of the most effective AI tools and platforms.
● Implement robust data security, privacy controls, and ethical AI guidelines within all AI initiatives.
● Ensure model accuracy, reliability, and maintainability through rigorous validation and continuous monitoring.
● Foster innovation by integrating emerging AI technologies and methodologies into business solutions.
● Mentor and guide cross-functional teams, promoting AI literacy and excellence.
● Architect and maintain cloud-native AI infrastructure to support scalability and agility.
● Champion automation and orchestration to streamline AI workflows and deployment processes.

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Data Scientist at Setu, you will have the opportunity to be a part of a team that is revolutionizing the fintech landscape. Setu believes in empowering every company to become a fintech company by providing them with cutting-edge APIs. The Data Science team at Setu is dedicated to understanding the vast population of India and creating solutions for various fintech sectors such as personal lending, collections, PFM, and BBPS. In this role, you will have the unique opportunity to delve deep into the business objectives and technical architecture of multiple companies, leading to a customer-centric approach that fosters innovation and delights customers. The learning potential in this role is immense, with the chance to explore, experiment, and build critical, scalable, and high-value use cases. At Setu, innovation is not just a goal; it's a way of life. The team is constantly pushing boundaries and introducing groundbreaking methods to drive business growth, enhance customer experiences, and streamline operational processes. From computer vision to natural language processing and Generative AI, each day presents new challenges and opportunities for breakthroughs. To excel in this role, you will need a minimum of 2 years of experience in Data Science and Machine Learning. Strong knowledge in statistics, tree-based techniques, machine learning, inference, hypothesis testing, and optimizations is essential. Proficiency in Python programming, building Data Pipelines, feature engineering, pandas, scikit-learn, SQL, and familiarity with TensorFlow/PyTorch are also required. Experience with deep learning techniques and understanding of DevOps/MLOps will be a bonus. Setu offers a dynamic and inclusive work environment where you will have the opportunity to work closely with the founding team who built and scaled public infrastructure such as UPI, GST, and Aadhaar.
The company is dedicated to your growth and provides various benefits such as access to a fully stocked library, tickets to conferences, learning sessions, and a development allowance. Additionally, Setu offers comprehensive health insurance, access to mental health counselors, and a beautiful office space designed to foster creativity and collaboration. If you are passionate about making a tangible difference in the fintech landscape, Setu offers the perfect platform to contribute to financial inclusion and improve millions of lives. Join us in our audacious mission and obsession with craftsmanship in code as we work together to build infrastructure that directly impacts the lives of individuals across India.

Posted 2 days ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

Remote

AI / Generative AI Engineer
Location: Remote (Pan India)
Job Type: Full-time
NOTE: Only immediate joiners or candidates with a notice period of 15 days or less will be considered.
We are seeking a highly skilled and motivated AI/Generative AI Engineer to join our innovative team. The ideal candidate will have a strong background in designing, developing, and deploying artificial intelligence and machine learning models, with a specific focus on cutting-edge Generative AI technologies. This role requires hands-on experience with one or more major cloud platforms (Google Cloud Platform (GCP), Amazon Web Services (AWS)) and/or modern data platforms (Databricks, Snowflake). You will be instrumental in building and scaling AI solutions that drive business value and transform user experiences.
Key Responsibilities
Design and Development: Design, build, train, and deploy scalable and robust AI/ML models, including traditional machine learning algorithms and advanced Generative AI models (e.g., Large Language Models (LLMs), diffusion models). Develop and implement algorithms for tasks such as natural language processing (NLP), text generation, image synthesis, speech recognition, and forecasting. Work extensively with LLMs, including fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and evaluating their performance. Develop and manage data pipelines for data ingestion, preprocessing, feature engineering, and model training, ensuring data quality and integrity.
Platform Expertise
Leverage cloud AI/ML services on GCP (e.g., Vertex AI, AutoML, BigQuery ML, Model Garden, Gemini), AWS (e.g., SageMaker, Bedrock, S3), Databricks, and/or Snowflake to build and deploy solutions. Architect and implement AI solutions ensuring scalability, reliability, security, and cost-effectiveness on the chosen platform(s). Optimize data storage, processing, and model serving components within the cloud or data platform ecosystem.
MLOps And Productionization Implement MLOps best practices for model versioning, continuous integration/continuous deployment (CI/CD), monitoring, and lifecycle management. Deploy models into production environments and ensure their performance, scalability, and reliability. Monitor and optimize the performance of AI models in production, addressing issues related to accuracy, speed, and resource utilization. Collaboration And Innovation Collaborate closely with data scientists, software engineers, product managers, and business stakeholders to understand requirements, define solutions, and integrate AI capabilities into applications and workflows. Stay current with the latest advancements in AI, Generative AI, machine learning, and relevant cloud/data platform technologies. Lead and participate in the ideation and prototyping of new AI applications and systems. Ensure AI solutions adhere to ethical standards, responsible AI principles, and regulatory requirements, addressing issues like data privacy, bias, and fairness. Documentation And Communication Create and maintain comprehensive technical documentation for AI models, systems, and processes. Effectively communicate complex AI concepts and results to both technical and non-technical audiences. Required Qualifications 8+ years of experience with software development in one or more programming languages, and with data structures/algorithms/Data Architecture. 3+ years of experience with state of the art GenAI techniques (e.g., LLMs, Multi-Modal, Large Vision Models) or with GenAI-related concepts (language modeling, computer vision). 3+ years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging). Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related technical field. Proven experience as an AI Engineer, Machine Learning Engineer, or a similar role. 
Strong programming skills in Python. Familiarity with other languages like Java, Scala, or R is a plus. Solid understanding of machine learning algorithms (supervised, unsupervised, reinforcement learning), deep learning concepts (e.g., CNNs, RNNs, Transformers), and statistical modeling. Hands-on experience with developing and deploying Generative AI models and techniques, including working with Large Language Models (LLMs like GPT, BERT, LLaMA, etc.). Proficiency in using common AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Keras, Hugging Face Transformers, LangChain, etc. Demonstrable experience with at least one of the following cloud/data platforms:
GCP: Experience with Vertex AI, BigQuery ML, Google Cloud Storage, and other GCP AI/ML services.
AWS: Experience with SageMaker, Bedrock, S3, and other AWS AI/ML services.
Databricks: Experience building and scaling AI/ML solutions on the Databricks Lakehouse Platform, including MLflow.
Snowflake: Experience leveraging Snowflake for data warehousing, data engineering for AI/ML workloads, and Snowpark.
Experience with data engineering, including data acquisition, cleaning, transformation, and building ETL/ELT pipelines. Knowledge of MLOps tools and practices for model deployment, monitoring, and management. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. (ref:hirist.tech)

Posted 2 days ago


3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Senior Robotic Process Automation (RPA) Developer for Digital Transformation, you will design, develop, and test workflow automation. You will support the implementation of RPA solutions, collaborate with the RPA Business Analyst to document process details, and work with the engagement team to implement and test solutions while managing exceptions. You will also handle the maintenance and change control of existing artifacts.

To excel in this role, you should have substantial experience with the standard concepts, practices, technologies, tools, and methodologies of digital transformation, including automation, analytics, and new/emerging technologies in AI/ML. The ability to manage projects efficiently from inception to completion, coupled with strong execution skills, is crucial. Knowledge of process reengineering is advantageous.

Your responsibilities include executing projects on digital transformation, process redesign, and maximizing operational efficiency to identify cost-saving opportunities for the enterprise. You will also interact with business partners in India and the USA.

Key Job Functions and Responsibilities:
- Manage end-to-end execution of digital transformation initiatives
- Drive ideation and pilot projects on new/emerging technologies such as AI/ML and predictive analytics
- Evaluate multiple tools and select the appropriate technology stack for specific challenges
- Collaborate with Subject Matter Experts (SMEs) to document current and future processes
- Possess a clear understanding of process discovery and differentiate between RPA and regular automation
- Provide guidance on designing "to be" processes for effective automation
- Develop RPA solutions following best practices
- Consult with internal clients and partners to offer automation expertise
- Implement RPA solutions across various platforms (e.g., Citrix, web, Microsoft Office, database, scripting)
- Assist in establishing a change management framework for updates
- Offer guidance on process design from an automation perspective

Qualifications:
- Bachelor's/Master's/Engineering degree in IT, Computer Science, Software Engineering, or a relevant field
- Minimum of 3-4 years of experience in UiPath
- Strong programming skills in Python, SQL, and Pandas
- Expertise in at least one popular Python framework (e.g., Django, Flask, or Pyramid) is advantageous
- Application of Machine Learning/Deep Learning concepts in cognitive areas such as NLP, Computer Vision, and image analytics is highly beneficial
- Proficiency in working with structured/unstructured data, image (OCR)/voice, and descriptive/prescriptive analytics
- Excellent organizational and time management skills, with the ability to work independently
- Certification in UiPath is recommended
- Hands-on experience and deep understanding of AWS tools and technologies such as EC2, EMR, ECS, Docker, Lambda, and SageMaker
- Enthusiasm for collaborating with team members and other groups in a distributed work model
- Willingness to support and learn from teammates while sharing knowledge
- Comfortable working in a mid-day shift and remote setup

Work Schedule or Travel Requirements:
- 2-11 PM IST; 5 days a week
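The listing's emphasis on implementing and testing solutions "while managing exceptions" is a core RPA pattern: wrap each automation step in retry-with-logging so transient failures (a slow Citrix screen, a locked file) do not abort the whole workflow, while business-rule exceptions fail fast. A minimal sketch in plain Python, assuming a hypothetical `flaky_step`; this mirrors the idea behind UiPath's Retry Scope but is not its API.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)

def run_step(step, *, retries=3, delay=0.1):
    """Run one automation step, retrying only transient failures."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except ValueError:
            # Business-rule exception: retrying will not help, so fail fast.
            raise
        except Exception as exc:
            logging.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            if attempt == retries:
                raise
            time.sleep(delay)

# Hypothetical flaky step: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("screen not ready")
    return "invoice posted"

print(run_step(flaky_step))  # invoice posted
```

Separating transient from business-rule exceptions is what lets an unattended bot recover from environment noise without silently retrying work it should instead escalate to a human.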

Posted 2 days ago
