
1552 SageMaker Jobs - Page 17

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Solid understanding of and working experience with AWS cloud platform fundamentals (e.g., S3, Lambda, SageMaker AI, EC2, Bedrock Agents, CodePipeline, EKS). Python environment setup, dependency management (e.g., pip, conda), and API integrations (API keys, OAuth). Exposure to NLP, machine learning, or data science projects. Awareness of prompt engineering principles and how LLMs (such as GPT, Claude, or LLaMA) are used in real-world applications. Understanding of the transformer architecture and how it powers modern NLP. Exposure to AI code-assist tools.
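To illustrate the dependency-management and API-integration basics this listing asks for, here is a minimal Python sketch; the endpoint and environment variable are hypothetical, not part of the posting.

```python
# Minimal sketch: set up an isolated environment and install a dependency, e.g.
#   python -m venv .venv && source .venv/bin/activate && pip install requests
# then call a REST API using an API key. Endpoint and env var are hypothetical.
import os
import requests

API_URL = "https://api.example.com/v1/status"   # hypothetical endpoint
API_KEY = os.environ["EXAMPLE_API_KEY"]         # hypothetical API key variable

resp = requests.get(API_URL, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=30)
resp.raise_for_status()
print(resp.json())
```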

Posted 2 weeks ago

Apply

12.0 - 15.0 years

5 - 6 Lacs

Gurgaon

On-site

You are as unique as your background, experience and point of view. Here, you’ll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world. Job Description: Are you ready to shine? At Sun Life, we empower you to be your most brilliant self. Who we are? Sun Life is a leading financial services company with 157 years of history that helps our clients achieve lifetime financial security and live healthier lives. We serve millions in Canada, the U.S., Asia, the U.K., and other parts of the world. We have a network of Sun Life advisors, third-party partners, and other distributors. Through them, we’re helping set our clients free to live their lives their way, from now through retirement. We’re working hard to support their wellness and health management goals, too. That way, they can enjoy what matters most to them. And that’s anything from running a marathon to helping their grandchildren learn to ride a bike. To do this, we offer a broad range of protection and wealth products and services to individuals, businesses, and institutions, including: Insurance. Life, health, wellness, disability, critical illness, stop-loss, and long-term care insurance Investments. Mutual funds, segregated funds, annuities, and guaranteed investment products Advice. Financial planning and retirement planning services Asset management. Pooled funds, institutional portfolios, and pension funds With innovative technology, a strong distribution network and long-standing relationships with some of the world’s largest employers, we are today providing financial security to millions of people globally. Sun Life is a leading financial services company that helps our clients achieve lifetime financial security and live healthier lives, with strong insurance, asset management, investments, and financial advice portfolios. At Sun Life, our asset management business draws on the talent and experience of professionals from around the globe. Sun Life Global Solutions (SLGS) With 32 years of operations in the Philippines and 17 years in India, Sun Life Global Solutions, (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives – our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a ‘Great Place to Work’, ‘Top 100 Best Places to Work for Women’ and stand among the ‘Top 11 Global Business Services Companies’ across India and the Philippines. The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new age technology systems, and demonstrating thought leadership. 
We are committed to building greater domain expertise and engineering ability, delivering end to end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, Support, Testing, Digital, Data Engineering and Analytics, Infrastructure Services and Project Management. We are constantly expanding our strength in Information technology and are looking for fresh talents who can bring ideas and values aligning with our Digital strategy. Our Client Impact strategy is motivated by the need to create an inclusive culture, empowered by highly engaged people. We are entering a new world that focuses on doing purpose driven work. The kind that fills your day with excitement and determination, because when you love what you do, it never feels like work. We want to create an environment where you feel empowered to act and are surrounded by people who challenge you, support you and inspire you to become the best version of yourself. As an employer, we not only want to attract top talent, but we want you to have the best Sun Life Experience. We strive to Shine Together, Make Life Brighter & Shape the Future! What will you do? The role involves creating innovative solutions, guiding development teams, ensuring technical excellence, and driving architectural decisions aligned with company policies. The Solution Designer/Tech Lead will be a key technical advisor, collaborating with onshore teams and leadership to deliver high-impact Data and AI/ML projects. Our engineering career framework helps our engineers to understand the scope, collaborative reach, and levers for impact at every job role and defines the key behaviors and deliverables specific to one’s role and team and plan their career with Sun Life. Key Responsibilities: Design and architect Generative AI solutions leveraging AWS services such as Bedrock, S3, PG Vector, Kendra, and SageMaker. Collaborate closely with developers to implement solutions, providing technical guidance and support throughout the development lifecycle. Lead the resolution of complex technical issues and challenges in AI/ML projects. Conduct thorough solution reviews and ensure adherence to best practices and company standards. Navigate governance processes and obtain necessary approvals for initiatives. Make critical architectural and design decisions aligned with organizational policies and industry best practices. Liaise with onshore technical teams, presenting solutions and providing expert analysis on proposed approaches. Conduct technical sessions and knowledge-sharing workshops on AI/ML technologies and AWS services. Evaluate and integrate emerging technologies and frameworks like LangChain into solution designs. Develop and maintain technical documentation, including architecture diagrams and design specifications. Mentor junior team members and foster a culture of innovation and continuous learning. Collaborate with data scientists and analysts to ensure optimal use of data in AI/ML solutions. Coordinate with clients, data users, and key stakeholders to achieve long-term objectives for data architecture. Stay updated on the latest trends and advancements in AI/ML and cloud and data technologies. Key Experience: Extensive experience (12-15 years) in software development and architecture, with a focus on AI/ML solutions. Deep understanding of AWS services, particularly those related to AI/ML (Bedrock, SageMaker, Kendra, etc.). 
Proven track record in designing and implementing data, analytics, reporting and/or AI/ML solutions. Strong knowledge of data structures, algorithms, and software design patterns. Expertise in data management, analytics, and reporting tools. Proficiency in at least one programming language commonly used in AI/ML (e.g., Python, Java, Scala). Familiarity with DevOps practices and CI/CD pipelines. Understanding of AI ethics, bias mitigation, and responsible AI principles. Basic understanding of data pipelines and ETL processes, with the ability to design and implement efficient data flows for AI/ML models. Experience in working with diverse data types (structured, unstructured, and semi-structured) and ability to preprocess and transform data for use in generative AI applications. Technical Credentials: AWS, AI/ML Primary Location: Gurgaon Schedule: 12:00 PM to 8:30 PM Job Category: Advanced Analytics Posting End Date: 21/07/2025

Posted 2 weeks ago

Apply

4.0 - 12.0 years

5 - 10 Lacs

Gurgaon

On-site

Senior Manager EXL/SM/1421204 ServicesGurgaon Posted On 14 Jul 2025 End Date 28 Aug 2025 Required Experience 4 - 12 Years Basic Section Number Of Positions 2 Band C2 Band Name Senior Manager Cost Code D010803 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 3500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Banking & Financial Services Organization Services LOB Banking & Financial Services SBU Analytics Country India City Gurgaon Center Gurgaon-SEZ BPO Solutions Skills Skill POWER BI SQL STRESS TESTING CREDIT RISK Minimum Qualification BTECH Certification No data available Job Description Overview: We are seeking a highly skilled and experienced Credit Risk Strategy Manager to join our dynamic team. The ideal candidate will be responsible for developing and implementing credit risk valuation framework to optimize the risk-reward balance, ensuring the stability and profitability of client portfolio. Responsibilities: Analyze credit data and financial statements to identify trends, patterns, and potential risks. Develop and implement comprehensive credit risk strategies to manage and mitigate risk across various credit products. Conduct stress testing and scenario analysis to assess the impact of economic changes on the credit portfolio. Monitor and report on the performance of credit risk strategies, making adjustments as necessary to achieve desired outcomes. Collaborate with cross-functional teams to design and enhance credit risk forecast and Profit/loss statements Optimize existing models to improve performance and accuracy. Create reports and visualizations to communicate findings to stakeholders. Utilize PowerBI to create interactive and visually compelling dashboards that communicate complex data insights in an easily understandable manner. Provide insights and recommendations to senior management on credit risk issues and strategic initiatives. Lead and mentor a team of credit risk analysts, fostering a culture of continuous improvement and professional development. Qualifications: Educational Background : A degree in Statistics, Mathematics, Computer Science, or a related field. (IIT/NIT preferred) Industry Experience : Minimum of 5 years of experience in credit risk management, with a focus on strategy development. Previous experience of analytics consulting in Banking Domain. Technical Skills : Strong proficiency in SAS , SQL and Python is essential. Hands-on experience with AWS SageMaker, Snowflake, PowerBI . Analytical Skills : Strong analytical and problem-solving skills. Past experience in statistical analysis and financial modeling. Excellent understanding of credit risk principles, regulatory requirements, and industry best practices. Proven ability to develop and implement effective credit risk strategies. Communication Skills : Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at all levels. Leadership experience with a track record of managing and developing high-performing teams. Workflow Workflow Type L&S-DA-Consulting

Posted 2 weeks ago

Apply

5.0 years

1 - 9 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. The Principal Data Scientist is responsible for the creation of analytic and statistical methods, design of predictive models, and the integration of methods and models into commercially available analytic products. Knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics will be used to identify opportunities, explore business questions, and make valuable discoveries for prototype development and product improvement. Primary Responsibilities: Lead team on data science projects to design and implement models and experiments from end to end, including data ingestion and preparation, feature engineering, analysis and modeling, model deployment, performance tracking and documentation Identify ways to improve and extend the analytic methods in our products Conduct hands-on data analysis and predictive analytics on large datasets Effectively communicate complex technical results to business partners Support and drive analytic efforts around machine learning and innovation Work with a great deal of autonomy to find solutions to complex problems Assign work to team members and review their work Subject matter expert for our clients Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: PhD or master’s degree in medical economics, Statistics, Mathematics, Healthcare Informatics, or related healthcare analytics experience 5+ years of experience in healthcare analytics/informatics 3+ years of experience with healthcare predictive modeling and/or machine learning 5+ years of experience with claims episode grouping and/or predictive modeling software Solid knowledge of administrative claims and/or clinical data accessed via large data warehouse environment Experience using claims and/or clinical data in applications such as: risk identification and stratification, cost and utilization reporting, provider measurement, trending, benchmarking, population health and care coordination, quality measurement or clinical outcomes. Experience in big data environments (e.g. Microsoft Azure, AWS, SPARK) Proficiency with SQL, R, Python, and/or other statistical programs Proficiency with MS Office suite (including Excel, Access, Word, and PowerPoint) Solid technical leadership and training skills. 
Ability to guide the work of others without a direct reporting relationship Solid analytical and problem-solving skills, with attention to detail Proven self-assured, self-motivated and results oriented Proven innovative/creative Proven excellent written and verbal communications skills Proven excellent collaboration and customer service skills Preferred Qualifications: Experience in AWS SageMaker environment Experience with H2O Experience with Scala Experience with marketing analytics At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

India

Remote

Senior DevOps (Azure, Terraform, Kubernetes) Engineer
Location: Remote (initial 2–3 months in the Abu Dhabi office, then remote from India)
Type: Full-time | Long-term | Direct Client Hire
Client: Abu Dhabi Government
About The Role: Our client, the UAE (Abu Dhabi) Government, is seeking a highly skilled Senior DevOps Engineer (with skills in Azure, Terraform, Kubernetes, and Argo) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in cloud and Azure DevOps practices.
Key Responsibilities: Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps with AKS. Develop and maintain Infrastructure-as-Code using Terraform. Manage container orchestration environments using Kubernetes. Ensure cloud infrastructure is optimized, secure, and monitored effectively. Collaborate with data science teams to support ML model deployment and operationalization. Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow). Build and maintain automated ML pipelines to streamline model lifecycle management.
Required Skills: 7+ years of experience in DevOps and/or MLOps roles. Proficiency in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps. Strong expertise in Terraform and cloud-native infrastructure (AWS preferred). Hands-on experience with Kubernetes, Docker, and microservices. Solid understanding of cloud networking, security, and monitoring. Scripting proficiency in Bash and Python.
Preferred Skills: Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines. Knowledge of model performance monitoring and ML system reliability. Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP.
Skills: Argo, Terraform, Kubernetes, Azure
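As a small illustration of the experiment tracking this listing names, here is a minimal MLflow sketch; the experiment name, parameters, and metric values are placeholders, not details from the posting.

```python
# Minimal sketch of MLflow experiment tracking as mentioned in the listing above.
# Experiment name, parameters, and metric values are illustrative placeholders.
import mlflow

mlflow.set_experiment("demo-churn-model")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_auc", 0.87)
```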

Posted 2 weeks ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Solid understanding of and working experience with AWS cloud platform fundamentals (e.g., S3, Lambda, SageMaker AI, EC2, Bedrock Agents, CodePipeline, EKS). Python environment setup, dependency management (e.g., pip, conda), and API integrations (API keys, OAuth). Exposure to NLP, machine learning, or data science projects. Awareness of prompt engineering principles and how LLMs (such as GPT, Claude, or LLaMA) are used in real-world applications. Understanding of the transformer architecture and how it powers modern NLP. Exposure to AI code-assist tools.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Senior Machine Learning Engineer - Recommender Systems Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey. Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for someone enthusiastic, an independent thinker, who excels in a collaborative environment across various disciplines, and is at ease interacting with a diverse range of individuals and technological stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience. About the Role: As a Machine Learning Engineer, you will: Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender system objectives. Create machine learning solutions that are scalable, dependable, and secure. Craft and sustain technical outputs such as design documentation and representative models. Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews. Provide expert oversight, guidance on implementation, and solutions for technical challenges. Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services. Promote a team-focused culture that values information sharing and diverse viewpoints. Cultivate an environment of continual enhancement, learning, innovation, and deployment. About You: You are an excellent candidate for the role of Machine Learning Engineer if you possess: At least 3 years of experience in addressing practical machine learning challenges, particularly with Recommender Systems, to enhance user efficiency, reliability, and consistency. A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices. A minimum of 2 years of experience with cloud technologies (AWS SageMaker, AWS is preferred), including services, networking, and security principles. Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products. Proficient Python programming skills, SQL, and data modeling expertise, with DBT considered a plus. Familiarity with Spark, Airflow, PyTorch, Scikit-learn, Pandas, Keras, and other relevant ML libraries. Experience in leading and supporting engineering teams. Robust background in crafting data science and machine learning solutions. A creative, resourceful, and effective problem-solving approach. What’s in it For You? 
Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. 
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 2 weeks ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

:-Job Title: Machine Learning Engineer – 2 Location: Onsite – Bengaluru, Karnataka, India Experience Required: 3 – 6 Years Compensation: ₹20 – ₹25 LPA Employment Type: Full-Time Work Mode: Onsite Only (No Remote) About the Company:- A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they’re seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies. Role Overview:- As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes. Key Responsibilities:- Design and develop robust AI product features aligned with user and business needs Maintain and enhance existing ML/AI systems Build and manage ML pipelines for training, deployment, monitoring, and experimentation Deploy scalable inference APIs and conduct A/B testing Optimize GPU architectures and fine-tune transformer/LLM models Build and deploy LLM applications tailored to real-world use cases Implement DevOps/ML Ops best practices with tools like Docker and Kubernetes Tech Stack & Tools Machine Learning & LLMs GPT, LLaMA, Gemini, Claude, Hugging Face Transformers PyTorch, TensorFlow, Scikit-learn LLMOps & MLOps Langchain, LangGraph, LangFlow, Langfuse MLFlow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI Cloud & Infrastructure AWS, Azure Kubernetes, Docker Databases MongoDB, PostgreSQL, Pinecone, ChromaDB Languages Python, SQL, JavaScript What You’ll Do Collaborate with product, research, and engineering teams to build scalable AI solutions Implement advanced NLP and Generative AI models (e.g., RAG, Transformers) Monitor and optimize model performance and deployment pipelines Build efficient, scalable data and feature pipelines Stay updated on industry trends and contribute to internal innovation Present key insights and ML solutions to technical and business stakeholders Requirements Must-Have:- 3–6 years of experience in Machine Learning and software/data engineering Master’s degree (or equivalent) in ML, AI, or related technical fields Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn Familiarity with ML Ops, model deployment, and production pipelines Experience working with LLMs and modern NLP techniques Ability to work collaboratively in a fast-paced, product-driven environment Strong problem-solving and communication skills Bonus Certifications such as: AWS Machine Learning Specialty AWS Solution Architect – Professional Azure Solutions Architect Expert Why Apply Work directly with a high-caliber founding team Help shape the future of AI in the insurance space Gain ownership and visibility in a product-focused engineering role Opportunity to innovate with state-of-the-art AI/LLM tech Be part of a fast-moving team with real market traction 📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available. 
Skills: postgresql,docker,llms and modern nlp techniques,machine learning,computer vision,tensorflow,scikit-learn,pytorch,llm technologies,python,nlp,aws,ml, ai,sql,ml ops,azure,javascript,software/data engineering,kubernetes,mongodb,python, pytorch/tensorflow, and scikit-learn
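One of the responsibilities above is deploying scalable inference APIs. A minimal sketch of what such a service could look like with FastAPI follows; the model artifact and feature schema are hypothetical placeholders, not part of the posting.

```python
# A minimal sketch of a model-serving inference API, using FastAPI.
# The model artifact ("model.joblib") and feature schema are hypothetical.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")   # hypothetical pre-trained model

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with, e.g.: uvicorn inference_api:app --host 0.0.0.0 --port 8080
```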

Posted 2 weeks ago

Apply

0 years

0 Lacs

Thiruvananthapuram Taluk, India

On-site

We are looking for a versatile and highly skilled Data Analyst / AI Engineer to join our innovative team. This unique role combines the strengths of a data scientist with the capabilities of an AI engineer, allowing you to dive deep into data, extract meaningful insights, and then build and deploy cutting-edge Machine Learning, Deep Learning, and Generative AI models. You will play a crucial role in transforming raw data into strategic assets and intelligent applications.
Key Responsibilities:
- Data Analysis & Insight Generation: Perform in-depth Exploratory Data Analysis (EDA) to identify trends, patterns, and anomalies in complex datasets. Clean, transform, and prepare data from various sources for analysis and model development. Apply statistical methods and hypothesis testing to validate findings and support data-driven decision-making. Create compelling and interactive BI dashboards (e.g., Power BI, Tableau) to visualize data insights and communicate findings to stakeholders.
- Machine Learning & Deep Learning Model Development: Design, build, train, and evaluate Machine Learning models (e.g., regression, classification, clustering) to solve specific business problems. Develop and optimize Deep Learning models, including CNNs for computer vision tasks and Transformers for Natural Language Processing (NLP). Implement feature engineering techniques to enhance model performance.
- Generative AI Implementation: Explore and experiment with Large Language Models (LLMs) and other Generative AI techniques. Implement and fine-tune LLMs for specific use cases (e.g., text generation, summarization, Q&A). Develop and integrate Retrieval Augmented Generation (RAG) systems using vector databases and embedding models. Apply prompt engineering best practices to optimize LLM interactions. Contribute to the development of Agentic AI systems that leverage multiple tools and models.
Required Skills & Experience:
- Data Science & Analytics: Strong proficiency in Python and its data science libraries (Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn). Proven experience with Exploratory Data Analysis (EDA) and statistical analysis. Hands-on experience developing BI dashboards using tools like Power BI or Tableau. Understanding of data warehousing and data lake concepts.
- Machine Learning: Solid understanding of various ML algorithms (e.g., Regression, Classification, Clustering, tree-based models). Experience with model evaluation, validation, and hyperparameter tuning.
- Deep Learning: Proficiency with Deep Learning frameworks such as TensorFlow, Keras, or PyTorch. Experience with CNNs (Convolutional Neural Networks) and computer vision concepts (e.g., OpenCV, object detection). Familiarity with Transformer architectures for NLP tasks.
- Generative AI: Practical experience with Large Language Models (LLMs). Understanding and application of RAG (Retrieval Augmented Generation) systems. Experience with fine-tuning LLMs and prompt engineering. Familiarity with frameworks like LangChain or LlamaIndex.
- Problem-Solving: Excellent analytical and problem-solving skills with a strong ability to approach complex data challenges.
Good to Have:
- Experience with cloud-based AI/ML services (e.g., Azure ML, AWS SageMaker, Google Cloud AI Platform).
- Familiarity with MLOps principles and tools (e.g., MLflow, DVC, CI/CD for models).
- Experience with big data technologies (e.g., Apache Spark).
Educational Qualification: Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience). Please share your resume to the mail id: careers@appfabs.in
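The Generative AI responsibilities in the listing above include building RAG systems over vector databases. A minimal sketch of that pattern is below; the documents, embedding model, and query are illustrative, and the final LLM call is omitted.

```python
# Minimal RAG sketch: embed documents, retrieve the best match for a query
# with FAISS, and assemble an LLM prompt. All data here is illustrative.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = ["Policy A covers water damage.", "Policy B covers fire damage."]
encoder = SentenceTransformer("all-MiniLM-L6-v2")

doc_vecs = encoder.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(doc_vecs.shape[1])        # cosine via inner product
index.add(np.asarray(doc_vecs, dtype="float32"))

query = "Which policy covers fire?"
q_vec = np.asarray(encoder.encode([query], normalize_embeddings=True), dtype="float32")
_, ids = index.search(q_vec, 1)

context = docs[ids[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)   # the prompt would then be sent to an LLM
```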

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Location: Pune/Bangalore/Mumbai/Noida/Hyderabad. Experience: 6+ years. Mandatory skills: GenAI + AWS.
Position Summary: Looking for a Solution Designer/Lead experienced in GenAI, AWS SageMaker, GenAI Gateway, and Bedrock to design, implement, and optimize GenAI-based solutions. This role requires expertise in designing workflows for embedding generation, LLM-based responses, and API integrations. Should be well versed in traditional AI/ML models. The Solution Designer will also evaluate various embedding and LLM/AI models to ensure optimal performance and accuracy for client needs.
Key Responsibilities: Solution Design: Architect GenAI/AI solutions on AWS (SageMaker, GenAI Gateway, Bedrock), designing workflows for embedding generation, LLM-based document processing, and storing embeddings in vector databases (e.g., OpenSearch, Pinecone). API Integration: Configure secure API access and environment settings to enable seamless SageMaker/Bedrock integration. Model Evaluation: Assess and select suitable embedding and LLM/AI models to meet specific client requirements, ensuring performance, accuracy, and efficiency. Documentation & Collaboration: Maintain comprehensive documentation, work closely with stakeholders, and provide technical guidance on solution implementation.
Skills: Technical Skills: Proficiency in AWS, API integrations, model evaluation, and LLMs. AI/ML Knowledge: Skilled in traditional AI/ML models, NLP, prompt engineering, and pattern recognition. Programming: Expertise in Python, REST APIs, and secure environment configuration. Soft Skills: Strong communication, organization, and problem-solving abilities for effective collaboration and documentation. Domain: Telecom networks (especially network performance management).
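As an illustration of the embedding-generation workflow described above, here is a minimal sketch using Amazon Bedrock via boto3. The model ID and request/response fields follow the Titan text-embeddings model and may differ for other models; the region and input text are hypothetical.

```python
# Minimal sketch: generate a text embedding through Amazon Bedrock.
# Region, input text, and downstream vector store are hypothetical.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = bedrock.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "Cell site 42 latency exceeded threshold."}),
)
embedding = json.loads(resp["body"].read())["embedding"]
print(len(embedding))   # the vector would then be written to OpenSearch/Pinecone
```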

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Company Qualcomm India Private Limited Job Area Engineering Group, Engineering Group > Software Engineering General Summary As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces. Minimum Qualifications Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Senior Engineer: Job Title: Senior Machine Learning & Data Engineer We are looking for a highly skilled and experienced Machine Learning & Data Engineer to join our team. This hybrid role blends the responsibilities of a data engineer and a machine learning engineer, with a strong emphasis on Python development. You will be instrumental in designing scalable data pipelines, building and deploying ML/NLP models, and enabling data-driven decision-making across the organization. Key Responsibilities Data Engineering & Infrastructure Design and implement robust ETL pipelines and data integration workflows using SQL, NoSQL, and big data technologies (e.g., Spark, Hadoop). Optimize data storage and retrieval using relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra). Ensure data quality, validation, and governance across systems. Develop and maintain data models and documentation for data flows and architecture. Machine Learning & NLP Build, fine-tune, and deploy ML/NLP models using frameworks like TensorFlow, PyTorch, and Scikit-learn. Apply advanced NLP techniques including Transformers, BERT, and LLM fine-tuning. Implement Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, and vector databases (e.g., FAISS, Milvus). Operationalize ML models using APIs, model registries (e.g., Hugging Face), and cloud services (e.g., SageMaker, Azure ML). Python Development Develop scalable backend services using Python frameworks such as FastAPI, Flask, or Django. Automate data workflows and model training pipelines using Python libraries (e.g., Pandas, NumPy, SQLAlchemy). Collaborate with cross-functional teams to integrate ML solutions into production systems. Collaboration & Communication Work closely with data scientists, analysts, and software engineers in Agile/Scrum teams. Translate business requirements into technical solutions. Maintain clean, well-documented code and contribute to knowledge sharing. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Proven experience in both data engineering and machine learning roles. Strong Python programming skills and experience with modern Python libraries and frameworks. Deep understanding of ML/NLP concepts and practical experience with LLMs and RAG architectures. Proficiency in SQL and experience with both SQL and NoSQL databases. Experience with big data tools (e.g., Spark, PySpark) and cloud platforms (AWS, Azure). Familiarity with data visualization tools like Power BI or Tableau. 
Excellent problem-solving, communication, and collaboration skills. Engineer: Job Title : Automation Engineer Job Description We are seeking a skilled and experienced Automation Engineer to join our team. As a C#/Python Developer, you will play a pivotal role in developing and deploying advanced solutions to drive our Product Test automation. You will collaborate closely with Testers, product managers, and stakeholders to ensure the successful implementation and operation of Automation solutions. The ideal candidate will have a strong background in API development with C# programming and python, with experience in deploying scalable solutions. Responsibilities Design, develop, and maintain core APIs using mainly C#. Collaborate with cross-functional teams to understand requirements and implement API solutions. Create and execute unit tests for APIs to ensure software quality. Identify, analyze, and troubleshoot issues in API development and testing. Continuously improve and optimize API development processes. Document API specifications, procedures, and results. Stay updated with the latest industry trends and technologies in API development. Requirements Bachelor's degree in Computer Science, Engineering, or related field. Proven experience in developing APIs and scripts/apps using C# and python. Knowledge in python is a plus. Experience in using visual studio for development Experience in wireless domain will be a plus Strong understanding of software testing principles and methodologies. Proficiency in C# programming language. Experience with Test Automation tools and best practices Familiarity with CI/CD pipelines and version control systems (e.g., Perforce). Excellent problem-solving skills and attention to detail. Strong communication and teamwork skills. Applicants : Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. 
If you would like more information about this role, please contact Qualcomm Careers. 3078221

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Generative AI Architect Experience: 10+ Years Location: Noida, Mumbai, Pune, Chennai, Gurgaon (Hybrid) Contract Duration: Short Term Work Time: IST Shift Job Purpose We are seeking a highly skilled Generative AI Architect to lead the design, development, and deployment of cutting-edge GenAI solutions across enterprise-grade applications. This role demands deep expertise in large language models (LLMs), prompt engineering, and scalable AI system architecture, along with hands-on experience in MLOps, cloud, and data engineering. Key Responsibilities: Design and implement scalable, secure GenAI solutions using LLMs such as GPT, Claude, LLaMA, or Mistral Architect Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, Weaviate, FAISS, or ElasticSearch Lead prompt engineering and evaluation frameworks for accuracy, safety, and contextual relevance Collaborate with product, engineering, and data teams to integrate GenAI into existing applications and workflows Build reusable GenAI modules like function calling, summarization engines, Q&A bots, and document chat solutions Deploy and optimize GenAI workloads on AWS Bedrock, Azure OpenAI, and Vertex AI Ensure robust monitoring, logging, and observability using Grafana, OpenTelemetry, and Prometheus Apply MLOps practices including CI/CD of AI pipelines, model versioning, validation, and rollback Research and prototype innovations like multi-agent systems, autonomous agents, and fine-tuning methods Implement security best practices, data governance, and compliance protocols such as PII masking, encryption, and audit logs Required Skills & Experience: 8+ years in AI/ML with at least 2–3 years in LLMs or Generative AI Proficient in Python with experience in Transformers (Hugging Face), LangChain, OpenAI SDKs Strong knowledge of vector databases like Pinecone, Weaviate, FAISS, Qdrant Experience working with AWS (SageMaker, Bedrock), Azure (OpenAI), and GCP (Vertex AI) Hands-on expertise in RAG pipelines, summarization, and chat-based applications Familiarity with LLM orchestration frameworks like LangGraph, AutoGen, CrewAI Understanding of MLOps tools: MLflow, Airflow, Docker, Kubernetes, FastAPI Exposure to prompt injection mitigation, hallucination control, and LLMOps practices Ability to evaluate GenAI solutions using BERTScore, BLEU, GPTScore Strong communication skills with experience in architecture leadership and mentoring Preferred (Nice to Have): Experience fine-tuning open-source LLMs (LLaMA, Mistral, Falcon) using LoRA or QLoRA Knowledge of multi-modal AI systems (text-image, voice assistants) Domain-specific LLM knowledge in Healthcare, BFSI, Legal, or EdTech Contributions to published work, patents, or open-source GenAI projects

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

We are looking for a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. As a Senior Data Scientist, your key responsibilities will include developing and implementing machine learning models and algorithms. You will work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. It is essential to stay updated with the latest advancements in AI/ML technologies and methodologies and collaborate with cross-functional teams to support various AI/ML initiatives. To qualify for this role, you should have a Bachelor's degree in Computer Science, Data Science, or a related field. A strong understanding of machine learning, deep learning, and Generative AI concepts is required. Preferred skills for this position include experience in machine learning techniques such as Regression, Classification, Predictive modeling, Clustering, and Deep Learning stack using Python. Experience with cloud infrastructure for AI/ML on AWS (Sagemaker, Quicksight, Athena, Glue) is highly desirable. Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT) is a plus. Proficiency in Python, TypeScript, NodeJS, ReactJS, and frameworks like pandas, NumPy, scikit-learn, SKLearn, OpenCV, SciPy, Glue crawler, ETL, as well as experience with data visualization tools like Matplotlib, Seaborn, and Quicksight, is beneficial. Additionally, knowledge of deep learning frameworks such as TensorFlow, Keras, and PyTorch, experience with version control systems like Git and CodeCommit, and strong knowledge and experience in Generative AI/LLM based development are essential for this role. Experience working with key LLM models APIs (e.g., AWS Bedrock, Azure Open AI/OpenAI) and LLM Frameworks (e.g., LangChain, LlamaIndex), as well as proficiency in effective text chunking techniques and text embeddings, are also preferred skills. Good to have skills include knowledge and experience in building knowledge graphs in production and an understanding of multi-agent systems and their applications in complex problem-solving scenarios. Pentair is an Equal Opportunity Employer that values diversity and believes that a diverse workforce contributes different perspectives and creative ideas, enabling continuous improvement.,

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess expert-level proficiency in Python and Python frameworks, or Java. Additionally, you must have hands-on experience with AWS development, PySpark, Lambdas, CloudWatch (alerts), SNS, SQS, CloudFormation, Docker, ECS, Fargate, and ECR. Your deep experience should cover key AWS services such as Compute (PySpark, Lambda, ECS), Storage (S3), Databases (DynamoDB, Snowflake), Networking (VPC, Route 53, CloudFront, API Gateway), DevOps/CI-CD (CloudFormation, CDK), Security (IAM, KMS, Secrets Manager), and Monitoring (CloudWatch, X-Ray, CloudTrail). Moreover, you should be proficient in databases such as Cassandra (NoSQL) and PostgreSQL, and have strong hands-on knowledge of using Python for integrations between systems through different data formats. Your expertise should extend to deploying and maintaining applications in AWS, with hands-on experience in Kinesis streams and auto-scaling. Designing and implementing distributed systems and microservices, and applying best practices for scalability, high availability, and fault tolerance, are also key aspects of this role. You should have strong problem-solving and debugging skills, with the ability to lead technical discussions and mentor junior engineers. Excellent communication skills, both written and verbal, are essential. You should be comfortable working in agile teams with modern development practices, collaborating with business and other teams to understand business requirements and work on project deliverables. Participation in requirements gathering and in designing solutions based on available frameworks and code, and experience with data engineering tools or ML platforms (e.g., Pandas, Airflow, SageMaker), are expected. An AWS certification (AWS Certified Solutions Architect or Developer) would be advantageous. This position is based in multiple locations in India, including Indore, Mumbai, Noida, Bangalore, and Chennai. To qualify, you should hold a Bachelor's degree or a foreign equivalent from an accredited institution; alternatively, three years of progressive experience in the specialty can be considered in lieu of each year of education. A minimum of 8 years of Information Technology experience is required for this role.
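To illustrate the kind of Python-to-AWS integration this listing describes (Kinesis streams, data formats), here is a minimal boto3 sketch; the stream name, region, and event payload are hypothetical.

```python
# Minimal sketch: write a JSON event to a Kinesis stream with boto3.
# Stream name, region, and payload are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"order_id": "A-1001", "status": "SHIPPED"}
kinesis.put_record(
    StreamName="orders-events",              # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["order_id"],
)
```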

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be responsible for owning the full ML stack that is capable of transforming raw dielines, PDFs, and e-commerce images into a self-learning system that can read, reason about, and design packaging artwork. This includes building data-ingestion & annotation pipelines for SVG/PDF to JSON conversion, designing and modifying model heads using technologies such as LayoutLM-v3, CLIP, GNNs, and diffusion LoRAs, training & fine-tuning on GPUs, as well as shipping inference APIs and evaluation dashboards. Your daily tasks will involve close collaboration with packaging designers and a product manager, establishing you as the technical authority on all aspects of deep learning within this domain. Your key responsibilities will be divided into three main areas: **Area Tasks:** - Data & Pre-processing (40%): Writing robust Python scripts for parsing PDF, AI, SVG files, extracting text, colour separations, images, and panel polygons. Implementing tools like Ghostscript, Tesseract, YOLO, and CLIP pipelines. Automating synthetic-copy generation for ECMA dielines and maintaining vocabulary YAMLs & JSON schemas. - Model R&D (40%): Modifying LayoutLM-v3 heads, building panel-encoder pre-train models, adding Graph-Transformer & CLIP-retrieval heads, and running experiments, hyper-param sweeps, ablations to track KPIs such as IoU, panel-F1, colour recall. - MLOps & Deployment (20%): Packaging training & inference into Docker/SageMaker or GCP Vertex jobs, maintaining CI/CD, experiment tracking, serving REST/GraphQL endpoints, and implementing an active-learning loop for designer corrections. **Must-Have Qualifications:** - 5+ years of Python experience and 3+ years of deep-learning experience with PyTorch, Hugging Face. - Hands-on experience with Transformer-based vision-language models and object-detection pipelines. - Proficiency in working with PDF/SVG tool-chains, designing custom heads/loss functions, and fine-tuning pre-trained models on limited data. - Strong knowledge of Linux, GPU, graph neural networks, and relational transformers. - Proficient in Git, code review discipline, and writing reproducible experiments. **Nice-to-Have:** - Knowledge of colour science, multimodal retrieval, diffusion fine-tuning, or packaging/CPG industry exposure. - Experience with vector search tools, AWS/GCP ML tooling, and front-end technologies like Typescript/React. You will own a tool stack including DL frameworks like PyTorch, Hugging Face Transformers, torch-geometric, parsing/CV tools, OCR/detectors, retrieval tools like CLIP/ImageBind, and MLOps tools such as Docker, GitHub Actions, W&B or MLflow. In the first 6 months, you are expected to deliver a data pipeline for converting ECMA dielines and PDFs, a panel-encoder checkpoint, an MVP copy-placement model, and a REST inference service with a designer preview UI. You will report to the Head of AI or CTO and collaborate with a front-end engineer, a product manager, and two packaging-design SMEs.,
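The data-ingestion tasks above start with parsing PDFs and extracting text and images. A minimal sketch of that first step with PyMuPDF is below; the file path is hypothetical, and a real pipeline would add SVG parsing, OCR, and panel-polygon extraction as described.

```python
# Minimal sketch: pull text and embedded images from a dieline PDF with
# PyMuPDF (imported as fitz). The input path is hypothetical.
import fitz  # PyMuPDF

doc = fitz.open("dieline.pdf")                  # hypothetical packaging dieline
for page_number, page in enumerate(doc, start=1):
    text = page.get_text()                      # copy text on the panels
    images = page.get_images(full=True)         # embedded raster images
    print(page_number, len(text.split()), len(images))
```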

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a GenAI Developer at Vipracube Tech Solutions, you will be responsible for developing and optimizing AI models, implementing AI algorithms, collaborating with cross-functional teams, conducting research on emerging AI technologies, and deploying AI solutions. This full-time role requires 5 to 6 years of experience and is based in Pune, with the flexibility of some work from home. Your key responsibilities will include fine-tuning large language models tailored to marketing and operational use cases, building Generative AI solutions using various platforms like OpenAI (GPT, DALLE, Whisper) and Agentic AI platforms such as LangGraph and AWS Bedrock. You will also be building robust pipelines using Python, NumPy, Pandas, applying traditional ML techniques, handling CI/CD & MLOps, using AWS Cloud Services, collaborating using tools like Cursor, and effectively communicating with stakeholders and clients. To excel in this role, you should have 5+ years of relevant AI/ML development experience, a strong portfolio of AI projects in marketing or operations domains, and a proven ability to work independently and meet deadlines. Join our dynamic team and contribute to creating smart, efficient, and future-ready digital products for businesses and startups.,

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Solid understanding of and working experience with AWS cloud platform fundamentals (e.g., S3, Lambda, SageMaker AI, EC2, Bedrock Agents, CodePipeline, EKS). Python environment setup, dependency management (e.g., pip, conda), and API integrations (API keys, OAuth). Exposure to NLP, machine learning, or data science projects. Awareness of prompt engineering principles and how LLMs (such as GPT, Claude, or LLaMA) are used in real-world applications. Understanding of the transformer architecture and how it powers modern NLP. Exposure to AI code-assist tools.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Overview We are seeking a highly skilled and experienced Credit Risk Strategy Manager to join our dynamic team. The ideal candidate will be responsible for developing and implementing credit risk valuation framework to optimize the risk-reward balance, ensuring the stability and profitability of client portfolio. Responsibilities Analyze credit data and financial statements to identify trends, patterns, and potential risks. Develop and implement comprehensive credit risk strategies to manage and mitigate risk across various credit products. Conduct stress testing and scenario analysis to assess the impact of economic changes on the credit portfolio. Monitor and report on the performance of credit risk strategies, making adjustments as necessary to achieve desired outcomes. Collaborate with cross-functional teams to design and enhance credit risk forecast and Profit/loss statements Optimize existing models to improve performance and accuracy. Create reports and visualizations to communicate findings to stakeholders. Utilize PowerBI to create interactive and visually compelling dashboards that communicate complex data insights in an easily understandable manner. Provide insights and recommendations to senior management on credit risk issues and strategic initiatives. Lead and mentor a team of credit risk analysts, fostering a culture of continuous improvement and professional development. Qualifications Educational Background: A degree in Statistics, Mathematics, Computer Science, or a related field. (IIT/NIT preferred) Industry Experience: Minimum of 5 years of experience in credit risk management, with a focus on strategy development. Previous experience of analytics consulting in Banking Domain. Technical Skills: Strong proficiency in SAS, SQL and Python is essential. Hands-on experience with AWS SageMaker, Snowflake, PowerBI. Analytical Skills: Strong analytical and problem-solving skills. Past experience in statistical analysis and financial modeling. Excellent understanding of credit risk principles, regulatory requirements, and industry best practices. Proven ability to develop and implement effective credit risk strategies. Communication Skills: Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at all levels. Leadership experience with a track record of managing and developing high-performing teams.

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. Our breakthrough solutions, from large-scale models onward, tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us on our website and social channels. Inviting applications for the role of Assistant Vice President, AWS AI Lead! In this role, we are looking for candidates who have relevant years of experience in Text Mining. The Text Mining Scientist (TMS) is expected to play a pivotal bridging role between enterprise database teams and business/functional resources. At a broad level, the TMS will leverage his/her solutioning expertise to translate the customer's business need into a techno-analytic problem and appropriately work with database teams to bring large-scale text analytic solutions to fruition. The right candidate should have prior experience in developing text mining and NLP solutions using open-source tools. Responsibilities Design and deliver AI/ML/GenAI solutions using AWS services across one or more industry verticals (CPG, BFSI, etc.). Architect and implement solutions using Amazon Bedrock, SageMaker, Lex, Connect, Amazon Q, and AWS AI toolchains. Apply Generative AI, LLM fine-tuning, and RAG architectures to solve client business problems. Lead initiatives in Agentic AI frameworks to solve business use cases. Lead delivery teams through all phases: data preprocessing, model training, hyperparameter tuning, evaluation, and deployment. Develop project blueprints, solution documentation, and delivery roadmaps aligned with client expectations. Drive AI solution integration into production systems in collaboration with digital product and engineering teams. Conduct applied research on LLMs, NLP/NLU, and Deep Learning, and publish technical white papers or patents. Engage with C-level and technical stakeholders to align AI strategy with business goals. Stay updated on AI advancements and evangelize best practices and innovation within internal and client ecosystems. Understanding of responsible AI, AI governance, and model interpretability. Qualifications we seek in you! Minimum Qualifications / Skills MS in Computer Science, Information Systems, Computer Engineering, or Systems Engineering, with relevant experience in Text Mining / Natural Language Processing (NLP) tools, data science, Big Data, and algorithms. Post-graduation in MBA and an undergraduate degree in any engineering discipline, preferably Computer Science, with relevant experience. Preferred Qualifications / Skills Experience in sectors such as CPG, Banking/Finance, or Healthcare with domain-specific AI use cases.
Full-cycle experience is desirable in at least one large-scale Text Mining/NLP project, from creating a business use case, text analytics assessment/roadmap, and technology and analytics solutioning through implementation and change management, plus considerable experience in Hadoop, including development in the map-reduce framework. Exposure to text mining platforms like OpenNLP, GATE, UIMA, Lucene. Experience with dialog systems, contact center AI (Amazon Connect/Genesys), or customer support automation. Proven track record of delivering large-scale AI/ML/NLP/GenAI projects end-to-end. Skilled in project planning, solution blueprinting, stakeholder management, and agile delivery. Experience leading cross-functional teams across geographies and functions. Strong written and verbal communication skills, with the ability to present technical solutions to business stakeholders. Preferred certifications: AWS Machine Learning - Specialty, AWS Solutions Architect, AWS Certified Data Analytics - Specialty. Additional cloud certifications in Azure or Google Cloud are advantageous. Why join Genpact? Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation. Make an impact - drive change for global enterprises and solve business challenges that matter. Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
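As a rough illustration of the RAG architectures this role references, the retrieval step can be sketched as embedding a small document set and selecting the closest passage to ground a prompt; the sentence-transformers model name and the documents below are illustrative assumptions, and a production pipeline would typically use a managed vector store instead.

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed embedding model; any sentence-level encoder works for the sketch.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Claims above 50,000 INR require a second-level review.",
    "Standard claims are settled within five business days.",
    "Customers can track claim status in the mobile app.",
]
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

query = "How long does claim settlement take?"
query_vector = encoder.encode([query], normalize_embeddings=True)[0]

# Cosine similarity reduces to a dot product on normalized vectors.
scores = doc_vectors @ query_vector
top_passage = documents[int(np.argmax(scores))]

# The retrieved passage is then stitched into the prompt sent to the LLM.
prompt = f"Answer using only this context:\n{top_passage}\n\nQuestion: {query}"
print(prompt)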

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

India

Remote

About ProCogia: We’re a diverse, close-knit team with a common pursuit of providing top-class, end-to-end data solutions for our clients. In return for your talent and expertise, you will be rewarded with a competitive salary and generous benefits, along with ample opportunity for personal development. ‘Growth mindset’ is something we seek in all our new hires and has helped drive much of our recent growth across North America. Our distinct approach is to push the limits and value derived from data. Working within ProCogia’s thriving environment will allow you to unleash your full career potential. The core of our culture is maintaining a high level of cultural equality throughout the company. Our diversity and differences allow us to create innovative and effective data solutions for our clients. Our Core Values: Trust, Growth, Innovation, Excellence, and Ownership Location: India (Remote) Time Zone: 12pm to 9pm IST Job Description: We are seeking a Senior MLOps Engineer with deep expertise in AWS CDK, MLOps, and data engineering tools to join a high-impact team focused on building reusable, scalable deployment pipelines for Amazon SageMaker workloads. This role combines hands-on engineering, automation, and infrastructure expertise with strong stakeholder engagement skills. You will work closely with Data Scientists, ML Engineers, and platform teams to accelerate ML productization using best-in-class DevOps practices. Key Responsibilities: Design, implement, and maintain reusable CI/CD pipelines for SageMaker-based ML workflows. Develop Infrastructure as Code using AWS CDK for scalable and secure cloud deployments. Build and manage integrations with AWS Lambda, Glue, Step Functions, and open table formats (Apache Iceberg, Parquet, etc.). Support the MLOps lifecycle: model packaging, deployment, versioning, monitoring, and rollback strategies. Use GitLab to manage repositories, pipelines, and infrastructure automation. Enable logging, monitoring, and cost-effective scaling of SageMaker instances and jobs. Collaborate closely with stakeholders across Data Science, Cloud Platform, and Product teams to gather requirements, communicate progress, and iterate on infrastructure designs. Ensure operational excellence through well-tested, reliable, and observable deployments. Required Skills: 2+ years of experience in MLOps, with 4+ years of experience in DevOps or Cloud Engineering, ideally with a focus on machine learning workloads. Hands-on experience with GitLab CI pipelines, artifact scanning, vulnerability checks, and API management. Experience in Continuous Development, Continuous Integration (CI/CD), and Test-Driven Development (TDD). Experience in building microservices and API architectures using FastAPI, GraphQL, and Pydantic. Proficiency in Python v3.6 or higher and experience with Python frameworks such as pytest. Strong experience with AWS CDK (TypeScript or Python) for IaC. Hands-on experience with Amazon SageMaker, including pipeline creation and model deployment. Solid command of AWS Lambda, AWS Glue, open table formats (like Iceberg/Parquet), and event-driven architectures. Practical knowledge of MLOps best practices: reproducibility, metadata management, model drift, etc. Experience deploying production-grade data and ML systems. Comfortable working in a consulting/client-facing environment, with strong stakeholder management and communication skills. Preferred Qualifications: Experience with feature stores, ML model registries, or custom SageMaker containers.
Familiarity with data lineage, cost optimization, and cloud security best practices. Background in ML frameworks (TensorFlow, PyTorch, etc.). Education: Bachelor’s or master’s degree in any of the following: statistics, data science, computer science, or another mathematically intensive field. ProCogia is proud to be an equal-opportunity employer. We are committed to creating a diverse and inclusive workspace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
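Since the ProCogia role above centers on AWS CDK-based infrastructure as code, a minimal CDK v2 sketch in Python might define a stack with an artifacts bucket and a small Lambda function; the stack name, bucket settings, and inline handler are illustrative assumptions rather than anything from the listing.

from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from constructs import Construct

class MlArtifactsStack(Stack):
    """Hypothetical stack holding model artifacts and a small trigger function."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket for model artifacts; DESTROY is only sensible for sandboxes.
        s3.Bucket(self, "ModelArtifacts", removal_policy=RemovalPolicy.DESTROY)

        # Tiny inline Lambda standing in for a deployment or registration hook.
        _lambda.Function(
            self, "RegisterModelFn",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",
            code=_lambda.Code.from_inline("def handler(event, context):\n    return 'ok'"),
        )

app = App()
MlArtifactsStack(app, "MlArtifactsStack")
app.synth()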

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position: We are conducting an in-person hiring drive for the position of MLOps / Data Science in Pune & Bengaluru on 19th July 2025. Interview locations are mentioned below: Pune – Persistent Systems, Veda Complex, Rigveda-Yajurveda-Samaveda-Atharvaveda Plot No. 39, Phase I, Rajiv Gandhi Information Technology Park, Hinjawadi, Pune, 411057 Bangalore - Persistent Systems, The Cube at Karle Town Center Rd, DadaMastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024 We are looking for an experienced and talented Data Science professional to join our growing data competency team. The ideal candidate will have a strong background in working with GenAI, ML, LangChain, LangGraph, MLOps architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics. Role: MLOps, Data Science Job Location: All PSL Locations Experience: 5+ Years Job Type: Full Time Employment What You'll Do: Design, build, and manage scalable ML model deployment pipelines (CI/CD for ML). Automate model training, validation, monitoring, and retraining workflows. Implement model governance, versioning, and reproducibility best practices. Collaborate with data scientists, engineers, and product teams to operationalize ML solutions. Ensure robust monitoring and performance tuning of deployed models. Expertise You'll Bring: Strong experience with MLOps tools and frameworks (MLflow, Kubeflow, SageMaker, Vertex AI, etc.). Proficient in containerization (Docker, Kubernetes). Good knowledge of cloud platforms (AWS, Azure, or GCP). Expertise in Python and familiarity with ML libraries (TensorFlow, PyTorch, scikit-learn). Solid understanding of CI/CD, infrastructure as code, and automation tools. Benefits: Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. 
Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent “Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
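For the MLOps tooling this Persistent role lists (MLflow among them), a minimal experiment-tracking sketch might look like the following; the dataset, parameters, and run name are illustrative assumptions.

import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    # Log parameters, the headline metric, and the fitted model artifact.
    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")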

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary Location - Pune/Bangalore/Hyderabad/Noida/Mumbai Experience - 7+ Yrs. JD - Bachelor’s degree in computer science, engineering, or a related field, or equivalent practical experience, with at least 8-10 years of combined experience as a Python and MLOps Engineer or in similar roles. Strong programming skills in Python. Proficiency with AWS and/or Azure cloud platforms, including services such as EC2, S3, Lambda, SageMaker, Azure ML, etc. Solid understanding of API programming and integration. Hands-on experience with CI/CD pipelines, version control systems (e.g., Git), and code repositories. Knowledge of containerization using Docker, Kubernetes, and orchestration tools. Proficiency in creating data visualizations, specifically for graphs and networks, using tools like Matplotlib, Seaborn, or Plotly. Understanding of data manipulation and analysis using libraries such as Pandas and NumPy. Problem-solving, analytical expertise, and troubleshooting abilities with attention to detail. Demonstrates VACC (Visionary, Catalyst, Architect, Coach) leadership behaviors: Good self-awareness as well as system awareness. Proactively asks for and gives feedback. Strives to demonstrate strategic thinking as well as sound business and external-trend insights. Focuses on outcomes; defines and delivers the highest pipeline, team, talent, and organizational impact outcomes.
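The visualization requirement above (graphs and networks plotted with Matplotlib) could be sketched with pandas plus NetworkX as follows; NetworkX and the edge list are my assumptions, since the listing names only the plotting libraries.

import matplotlib.pyplot as plt
import networkx as nx
import pandas as pd

# Hypothetical edge list, e.g. service-to-service call dependencies.
edges = pd.DataFrame({
    "source": ["api", "api", "scoring", "scoring"],
    "target": ["auth", "scoring", "feature-store", "model-registry"],
})

graph = nx.from_pandas_edgelist(edges, source="source", target="target")

# Spring layout spreads nodes by connectivity; draw labels for readability.
positions = nx.spring_layout(graph, seed=7)
nx.draw(graph, positions, with_labels=True, node_color="lightsteelblue", node_size=1800)
plt.title("Service dependency graph (illustrative)")
plt.show()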

Posted 2 weeks ago

Apply

10.0 years

42 - 49 Lacs

Bengaluru

On-site

We are seeking a Senior Manager / Cloud Infrastructure Architect with deep expertise in Azure and/or GCP and a strategic understanding of multi-cloud environments. You will lead the design, implementation, and optimization of cloud platforms that power data-driven, AI/GenAI-enabled enterprises. You will drive engagements focused on platform modernization, infrastructure-as-code, DevSecOps, and security-first architectures, aligning with business goals across Fortune 500 and mid-size clients. Key Responsibilities: Cloud Strategy & Architecture Shape and lead enterprise-grade cloud transformation initiatives across Azure, GCP, and hybrid environments. Advise clients on cloud-native, multi-cloud, and hybrid architectures aligned to performance, scalability, cost, and compliance goals. Architect data platforms, lakehouses, and AI-ready infrastructure leveraging cloud-native services. AI & Data Infrastructure Enablement Design and deploy scalable cloud platforms to support Generative AI, LLM workloads, and advanced analytics. Implement data mesh and lakehouse patterns on cloud using services like Azure Synapse, GCP BigQuery, Databricks, Vertex AI, etc. Required Skills & Experience: 10+ years in cloud, DevOps, or infrastructure roles, with at least 4+ years as a cloud architect or platform engineering leader. Deep knowledge of Azure or GCP services, architecture patterns, and platform ops; multi-cloud experience is a strong plus. Proven experience with Terraform, Terragrunt, CI/CD (Azure DevOps, GitHub Actions, Cloud Build), and Kubernetes. Exposure to AI/ML/GenAI infrastructure needs (GPU setup, MLOps, hybrid clusters, etc.). Familiarity with data platform tools: Azure Synapse, Databricks, BigQuery, Delta Lake, etc. Hands-on with security tools like Vault, Key Vault, Secrets Manager, and governance via policies/IAM. Excellent communication and stakeholder management skills. Preferred Qualifications: Certifications: Azure Solutions Architect Expert, GCP Professional Cloud Architect, Terraform Associate. Experience working in AI, Data Science, or analytics-led organizations or consulting firms. Background in leading engagements in regulated industries (finance, healthcare, retail, etc.). Key Skills: Landing zone patterns and data landing zones. Multi-cloud data platforms: Databricks, Snowflake, Dataproc, BigQuery, Azure HDInsight, AWS Redshift, EMR. GenAI platforms: AI Foundry, AWS Bedrock, AWS SageMaker, GCP Vertex AI. Security, scalability, cloud-agnostic, cost-efficient, multi-cloud architecture. Job Type: Full-time Pay: ₹4,200,000.00 - ₹4,900,000.00 per year Schedule: Day shift Work Location: In person
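One of the data platform services named above, BigQuery, can be queried from Python roughly as follows; the project ID, dataset, and SQL are illustrative assumptions, and the client assumes application-default credentials are configured.

from google.cloud import bigquery

# Assumed project ID; the table below is hypothetical.
client = bigquery.Client(project="example-analytics-project")

sql = """
    SELECT usage_date, SUM(cost_usd) AS daily_cost
    FROM `example-analytics-project.finops.cloud_costs`
    GROUP BY usage_date
    ORDER BY usage_date DESC
    LIMIT 7
"""

# Run the query and print the last week of daily platform cost.
for row in client.query(sql).result():
    print(row.usage_date, row.daily_cost)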

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the Company Neurealm is a next-gen tech powerhouse born from the merger of GS Lab | GAVS, backed by Kedaara Capital. With a strong focus on AI, data, cybersecurity, and platform engineering, we help 250+ global enterprises across industries modernize and operate smarter. We are the right-sized partner for Engineering, Modernization, and RunOps, blending human intelligence with the latest technologies to help businesses across industries such as Healthcare and Technology make smart progress. Our team members, whom we call "Neuronauts,” thrive in an environment driven by innovation, trust, and continuous learning. We encourage everyone to challenge boundaries and explore the edge of what’s possible. Required Skills: 10+ years of experience in software/data architecture with 3+ years in AI/ML, including hands-on work with generative AI solutions. Proven experience designing and deploying AI workflows using: AWS: Amazon Bedrock, SageMaker, Lambda, DynamoDB, OpenSearch. Azure: Azure OpenAI, Azure ML, Azure Cognitive Services, Cognitive Search. Expertise in RAG pipeline architecture, prompt engineering, and vector database design. Familiarity with tools and frameworks for AI agent orchestration (e.g., LangChain, Semantic Kernel, AWS Agent Framework). Strong understanding of cloud security, identity management (IAM, RBAC), and compliance in enterprise environments. Proficiency in Python and hands-on experience with modern ML libraries and APIs used in AWS and Azure. Experience working with LLMOps tools in cloud environments (e.g., model monitoring, logging, performance tracking). Understanding of fine-tuning strategies, model evaluation, and safety/risk management of GenAI models. Familiarity with serverless architecture, containerization (ECS, AKS), and CI/CD practices in AWS/Azure. Ability to translate business problems into scalable AI solutions with measurable outcomes. Preferred Skills Experience working with LLMOps tools in cloud environments (e.g., model monitoring, logging, performance tracking).
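Given the Azure OpenAI experience this role asks for, a minimal chat-completion sketch with the openai Python SDK (v1+) might look like this; the endpoint, API version, and deployment name are placeholders I am assuming, not values from the listing.

import os
from openai import AzureOpenAI

# Endpoint, key, and deployment name are assumed placeholders.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

response = client.chat.completions.create(
    model="gpt-4o-deployment",  # name of your Azure deployment, not the base model
    messages=[
        {"role": "system", "content": "You answer questions about internal HR policy."},
        {"role": "user", "content": "Summarise the leave carry-over rule in one sentence."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)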

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru

Remote

Who we are: Motive empowers the people who run physical operations with tools to make their work safer, more productive, and more profitable. For the first time ever, safety, operations and finance teams can manage their drivers, vehicles, equipment, and fleet related spend in a single system. Combined with industry leading AI, the Motive platform gives you complete visibility and control, and significantly reduces manual workloads by automating and simplifying tasks. Motive serves more than 100,000 customers – from Fortune 500 enterprises to small businesses – across a wide range of industries, including transportation and logistics, construction, energy, field service, manufacturing, agriculture, food and beverage, retail, and the public sector. Visit gomotive.com to learn more. About the Role: As a Software engineer - Machine Learning, you will be a part of a passionate team whose mission is to bring intelligence to the world's trucks. The team is focused on building technology to understand driving behavior, identify risk factors, and intelligently suggest coachable events that not only improve the fleet safety and potentially save millions of dollars but also contribute to making the roads safer. You will have a unique opportunity to work with a high-caliber and fast-paced team which consists of experienced researchers and engineers in Computer Vision, Machine Learning, Deep Learning, and Robotics with a track record of previous products and top-tier publications. You will play a critical role in building and improving a technology that will be used by millions of trucks. In this role, you will design and implement complex machine-learning systems. You will have the opportunity to build and/or improve ML/computer vision systems. Identify where models and algorithms are failing, debug issues, propose solutions, implement, and deploy them on millions of trucks. You will also get exposure to large-scale ML infra and scaling that facilitates large amounts of data to train, test, and validate computer vision systems. What You'll Do: Evaluate and improve the performance of existing models and algorithms already in production Prototype and implement ML modules for complex AI features Build and optimize CV/ML algorithms for real-time performance so they can run on our embedded platform, i.e., the next-gen AI dashcam Write proficient Python and C++ code to build and improve CV algorithms, ML services, training, model compression, and porting pipelines Collaborate with cross-functional teams such as Embedded, Backend, Frontend, Hardware, QA, and the broader AI team to ensure the development of robust and sustainable AI systems Build automated deployment, validation, and active learning pipelines. What We're Looking For: Bachelor's Degree in Computer Science, Electrical Engineering, or related field. A Master's degree is a plus. 5+ years of machine learning and/or data science experience Solid mathematical foundation in Deep Learning, Machine Learning, and optimization approaches. Strong experience in Python or C++ Experience in the following tools and technologies is a plus. AWS (SageMaker, Lambda, EC2, S3, RDS), CI/CD, Terraform, Docker, and Kubernetes. Prior experience with optimizing and deploying ML models on embedded devices is a strong plus Creating a diverse and inclusive workplace is one of Motive's core values. We are an equal opportunity employer and welcome people of different backgrounds, experiences, abilities and perspectives. Please review our Candidate Privacy Notice here . 
UK Candidate Privacy Notice here. The applicant must be authorized to receive and access those commodities and technologies controlled under U.S. Export Administration Regulations. It is Motive's policy to require that employees be authorized to receive access to Motive products and technology. #LI-Remote
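Model compression for the embedded dashcam deployment described in this listing can be illustrated with PyTorch dynamic quantization; the toy network below is an assumption for demonstration, not Motive's actual model.

import torch
import torch.nn as nn

# Toy stand-in for a perception head that would run on the dashcam.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 8),
).eval()

# Dynamic quantization converts Linear weights to int8 at load time,
# shrinking the model and speeding up CPU inference on edge hardware.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

dummy = torch.randn(1, 128)
with torch.no_grad():
    print(model(dummy).shape, quantized(dummy).shape)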

Posted 2 weeks ago

Apply