
1552 SageMaker Jobs - Page 2

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Us
Zelis is modernizing the healthcare financial experience in the United States (U.S.) across payers, providers, and healthcare consumers. We serve more than 750 payers, including the top five national health plans, regional health plans, TPAs, and millions of healthcare providers and consumers across our platform of solutions. Zelis sees across the system to identify, optimize, and solve problems holistically with technology built by healthcare experts – driving real, measurable results for clients.

Why We Do What We Do
In the U.S., consumers, payers, and providers face significant challenges throughout the healthcare financial journey. Zelis helps streamline the process by offering solutions that improve transparency, efficiency, and communication among all parties involved. By addressing the obstacles that patients face in accessing care, navigating the intricacies of insurance claims, and the logistical challenges healthcare providers encounter with processing payments, Zelis aims to create a more seamless and effective healthcare financial system.

Zelis India plays a crucial role in this mission by supporting various initiatives that enhance the healthcare financial experience. The local team contributes to the development and implementation of innovative solutions, ensuring that technology and processes are optimized for efficiency and effectiveness. Beyond operational expertise, Zelis India cultivates a collaborative work culture, leadership development, and global exposure, creating a dynamic environment for professional growth. With hybrid work flexibility, comprehensive healthcare benefits, financial wellness programs, and cultural celebrations, we foster a holistic workplace experience. Additionally, the team plays a vital role in maintaining high standards of service delivery and contributes to Zelis’ award-winning culture.
Position Overview
Zelis is looking for a Senior Data Scientist who will collaborate with our analytics and data engineering teams to collect, analyze, and derive insights from company, client, and publicly available data. The ideal candidate must have strong experience in data mining, data analysis, and predictive modeling using a variety of data tools; building and implementing models; supervised and unsupervised learning; NLP; feature reduction; cluster analysis; and creating and running simulations. They must be comfortable working with a wide range of stakeholders and functional teams.

Essential Duties and Functions
- Understand the business objective and directly participate in the delivery of data science solutions to solve business problems.
- Extract and analyze data: assess quality, profile, cleanse, perform exploratory data analysis, and transform large and complex datasets for use in developing statistical models.
- Design and develop statistical modeling techniques such as regression, classification, anomaly detection, clustering, deep learning, and feature reduction to derive actionable insights.
- Build, test, and validate models through relevant methodologies, error metrics, and calibration techniques.
- Validate post-production model performance and calculate the ROI.
- Work with cloud analytics platforms on AWS/Azure using PySpark, SageMaker, Snowflake, Azure Synapse Analytics, etc.
- Perform multiple tasks and deal with changing deadline requirements, including knowing when to escalate issues.
- Maintain a focused, flexible, organized, proactive, and positive approach.
- Monitor the projects of junior data scientists, and mentor and guide them when needed.
- Proactively provide recommendations to the business, based on insights derived from data science modeling, to resolve business problems.
- Communicate complex data science model results and insights to non-technical audiences.
- Interact with cross-functional technical teams and multiple business stakeholders to support integration of data science solutions into business processes.

Experience, Qualifications, Knowledge, and Skills
- Advanced degree in data science, statistics, computer science, or equivalent, with a background in statistics.
- Experience with healthcare and/or insurance data is a plus.
- Proficiency in SQL, Python/R, NLP, and LLMs.
- 3-5 years of relevant work experience, including 5 years of experience developing algorithms using data science technologies to evaluate data scenarios and future outcomes.
- Competence in machine learning principles and techniques.
- Experience with cloud-based data and AI solutions.
- Familiarity with collaboration environments (e.g., Jupyter notebooks, GitLab, GitHub).
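The model-validation duties this posting describes (error metrics and calibration techniques) can be illustrated with a dependency-free sketch. The data, threshold, and function names below are hypothetical, not from the posting; production work would typically use scikit-learn's metrics module for the same calculations.

```python
# Toy validation of a binary classifier: an error metric (precision/recall)
# plus a calibration check (Brier score), both in pure Python.

def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def brier_score(y_true, y_prob):
    """Mean squared error of predicted probabilities; lower = better calibrated."""
    return sum((p - t) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)

y_true = [1, 0, 1, 1, 0, 0]
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6]
y_pred = [int(p >= 0.5) for p in y_prob]  # hypothetical 0.5 decision threshold

print(precision_recall(y_true, y_pred))
print(round(brier_score(y_true, y_prob), 3))  # 0.145
```

A low Brier score alongside decent precision/recall is what "validated and calibrated" means in practice for a probabilistic classifier.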

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

Company Description
At MindWise, we are dedicated to revolutionizing the US healthcare industry through cutting-edge IT solutions. We provide tailored technology services that empower healthcare organizations to deliver better patient outcomes, enhance operational efficiency, and drive innovation. Our offerings include custom software development, healthcare IT consulting, and advanced healthcare analytics. Our team comprises experienced professionals who intimately understand the challenges and nuances of the US healthcare sector, and we believe in forging strong partnerships with our clients to deliver solutions that exceed expectations.

Role Description
This is a full-time remote role for an MLOps Engineer. The MLOps Engineer will be responsible for implementing and managing machine learning pipelines and infrastructure. Day-to-day tasks will include developing and maintaining scalable architecture for data processing and model deployment, collaborating with data scientists to optimize model performance, and ensuring the reliability and efficiency of machine learning solutions. The role also involves managing cloud-based resources and ensuring compliance with security and data protection standards.
Key Responsibilities

Cloud Infrastructure Management
- Design, deploy, and maintain AWS resources (EC2, ECS, Elastic Beanstalk, Lambda, VPC, VPN)
- Implement infrastructure-as-code using Terraform and Docker for consistent and reproducible deployments
- Optimize cost, performance, and security of compute and storage solutions

Database & Server Architecture
- Manage production-grade RDS MySQL instances with high availability, security, and backups
- Design scalable server-side infrastructure and ensure tight integration with Django-based services

Job Scheduling & Data Pipelines
- Build and monitor asynchronous task workflows with Celery, SQS, and SNS
- Manage data processing pipelines, ensuring timely and accurate job execution and messaging

Monitoring & Logging
- Set up and maintain CloudWatch dashboards, alarms, and centralized logging for proactive incident detection and resolution

Machine Learning & NLP Infrastructure
- Support deployment of NLP models on SageMaker and Bedrock, and manage interaction with vector databases and LLMs
- Assist in productionizing model endpoints, workflows, and monitoring pipelines

CI/CD & Automation
- Maintain and improve CI/CD pipelines using CircleCI
- Ensure automated testing, deployment, and rollback strategies are reliable and efficient

Healthcare Data Integration
- Support ingestion and transformation of clinical data using HL7 standards, Mirth Connect, and Java-based parsing tools
- Enforce data security and compliance best practices in handling PHI and other sensitive healthcare data

Qualifications
- 5+ years of experience in cloud infrastructure (preferably AWS)
- Strong command of Python/Django and container orchestration using Docker
- Proficiency with Terraform and infrastructure-as-code best practices
- Experience setting up and managing messaging systems (Celery, SQS, SNS)
- Understanding of NLP or ML model operations in production environments
- Familiarity with LLM frameworks, vector databases, and SageMaker workflows
- Strong CI/CD skills (CircleCI preferred)
- Ability to work independently and collaboratively across engineering and data science teams

Nice to Have
- Exposure to HIPAA compliance, SOC 2, or healthcare regulatory requirements
- Experience scaling systems in a startup or early-growth environment
- Contributions to open-source or community infrastructure projects
- Hands-on experience with HL7, Mirth Connect, and Java for healthcare interoperability is a big plus
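The asynchronous task workflows this role describes (Celery with SQS) rely on retry-with-backoff semantics. Celery itself needs a running broker, so as a stdlib-only illustration of the pattern, here is a hypothetical sketch; the function and task names are invented for the example.

```python
# Stdlib-only sketch of the retry-with-exponential-backoff pattern that
# Celery-style task queues provide. This only mimics the semantics; a real
# deployment would declare a Celery task with an SQS broker URL.
import time

def run_with_retries(task, max_retries=3, base_delay=0.01):
    """Run `task()`; on failure, retry with exponential backoff."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}
def flaky_job():
    """Simulated transient failure: succeeds on the third attempt."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky_job))  # "ok"
```

The same backoff schedule is what Celery's `retry_backoff` option applies per task, with the broker handling the delivery between attempts.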

Posted 1 day ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description

Role Purpose
The purpose of the role is to design and architect VLSI and hardware-based products and enable delivery teams to provide exceptional client engagement and satisfaction.

Title: Data Science Architect
Location: Gurgaon / Noida

Machine Learning, Data Science, Model Customization [4+ years]: experience performing the above on cloud services, e.g., AWS SageMaker and other tools.
AI / Gen AI skills [1-2 years]: MCP, RAG pipelines, A2A, agentic/AI agent frameworks (AutoGen, LangGraph, LangChain, codeless workflow builders, etc.)

Role & Responsibilities
- Build working POCs and prototypes rapidly.
- Build and integrate AI-driven solutions to solve the identified opportunities and challenges.
- Lead cross-functional teams in identifying and prioritizing key business areas in which AI solutions can deliver benefits.
- Present proposals to executives and business leaders on a broad range of technology, strategy, standards, and governance for AI.
- Work on functional design, process design (flow mapping), prototyping, testing, and defining the support model in collaboration with engineering and business leaders.
- Articulate and document the solution architecture and lessons learned for each exploration and accelerated incubation.

Relevant IT Experience: 10+ years of relevant IT experience in the given technology.

Performance Parameters and Measures
1. Product design, engineering, and implementation: CSAT; quality of design/architecture; FTR; delivery as per cost, quality, and timeline; POC review and standards.
2. Capability development: % of trainings and certifications completed; mentoring technical teams; thought-leadership content developed (white papers, Wipro PoVs).

Competencies: Client Centricity, Passion for Results, Learning Agility, Problem Solving & Decision Making, Effective Communication.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 day ago

Apply

50.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

About The Opportunity
Job Type: Permanent
Application Deadline: 05 August 2025
Title: Senior Analyst - Data Science
Department: Enterprise Data & Analytics
Location: Gurgaon
Reports To: Gaurav Shekhar
Level: Data Scientist 4

We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you’re part of something bigger.

About Your Team
Join the Enterprise Data & Analytics team — collaborating across Fidelity’s global functions to empower the business with data-driven insights that unlock business opportunities, enhance client experiences, and drive strategic decision-making.

About Your Role
As a key contributor within the Enterprise Data & Analytics team, you will lead the development of machine learning and data science solutions for Fidelity Canada. This role is designed to turn advanced analytics into real-world impact: driving growth, enhancing client experiences, and informing high-stakes decisions. You’ll design, build, and deploy ML models on cloud and on-prem platforms, leveraging tools like AWS SageMaker, Snowflake, Adobe, Salesforce, etc. Collaborating closely with business stakeholders, data engineers, and technology teams, you’ll translate complex challenges into scalable AI solutions. You’ll also champion the adoption of cloud-based analytics, contribute to MLOps best practices, and support the team through mentorship and knowledge sharing. This is a high-impact role for a hands-on problem solver who thrives on ownership, innovation, and seeing their work directly influence strategic outcomes.

About You
You have 4-7 years of experience in the data science domain, with a strong track record of delivering advanced machine learning solutions for business. You’re skilled in developing models for classification, forecasting, and recommender systems, and hands-on with frameworks like Scikit-learn, TensorFlow, or PyTorch. You bring deep expertise in developing and deploying models on AWS SageMaker, strong business problem-solving abilities, and familiarity with emerging GenAI trends. A background in engineering, mathematics, or economics from a Tier 1 institution is preferred.

Feel Rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
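Forecasting is one of the model families this role names; any ML forecaster is typically compared against a simple baseline first. Below is a hypothetical, dependency-free moving-average baseline; the series and window size are invented for the example.

```python
# Hypothetical forecasting baseline: forecast each future point as the mean
# of the last `window` values, rolling each forecast into the history.
# Real forecasting models (ARIMA, gradient boosting, deep nets) are judged
# against baselines like this.

def moving_average_forecast(series, window=3, horizon=2):
    """Forecast `horizon` future points from the trailing `window` mean."""
    history = list(series)
    forecasts = []
    for _ in range(horizon):
        avg = sum(history[-window:]) / window
        forecasts.append(avg)
        history.append(avg)  # roll the forecast forward
    return forecasts

print(moving_average_forecast([10, 12, 11, 13, 12], window=3, horizon=2))
```

If a trained model cannot beat this on a held-out window, the extra complexity is not paying for itself.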

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Note: before you apply, make sure your resume link is accessible (shared with view permissions); otherwise your resume will NOT be considered.

About the Role
We are seeking a Senior AI/ML Engineer to design, fine-tune, and deploy large-scale AI/ML systems. You will work on QLoRA-based fine-tuning, vLLM inference, distributed training, and cloud-native deployments on platforms like AWS SageMaker and GCP Vertex AI Agent Builder. The role also involves developing applied AI systems, including computer vision solutions using YOLO and OpenCV, and building scalable event-driven pipelines leveraging Cloud Pub/Sub.

Key Responsibilities
- Fine-tune and optimize LLMs using QLoRA, PEFT, and Hugging Face Accelerate.
- Implement vLLM for efficient large-scale inference.
- Build distributed and parallel training systems (DeepSpeed, Ray, PyTorch DDP).
- Develop computer vision models using YOLO and OpenCV for real-world applications.
- Deploy and manage AI/ML models on AWS SageMaker, GCP Vertex AI, and other cloud MLOps platforms.
- Design and implement event-driven AI pipelines with Cloud Pub/Sub and other messaging systems.
- Define and track LLM and CV evaluation metrics (accuracy, factuality, hallucination rate, object detection performance).
- Integrate graph-based LLM tools for knowledge reasoning and multi-agent AI systems.

Requirements
- 6+ years in AI/ML, with 3+ years in LLM and applied computer vision systems.
- Expertise in Python, PyTorch, Hugging Face, QLoRA/LoRA, vLLM, YOLO, and OpenCV.
- Experience with distributed systems, AWS/GCP cloud MLOps, and vector databases (FAISS/Milvus).
- Familiarity with LangChain/LlamaIndex, agent-based AI frameworks, and event-driven architectures (Cloud Pub/Sub).
- Strong track record of deploying AI/ML models into production at scale.
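Two of the LLM evaluation metrics this posting tracks, accuracy and hallucination rate, can be sketched without any ML dependencies. The matching here (exact string match, token overlap against a source context) is deliberately crude and the data is invented; real evaluations use curated benchmarks and stronger semantic matching.

```python
# Hypothetical sketch of LLM evaluation metrics: exact-match accuracy
# against references, and a crude "hallucination rate" counting answers
# that contain tokens absent from their source context.

def exact_match_accuracy(preds, refs):
    """Fraction of predictions that exactly match the reference answer."""
    return sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(preds, refs)) / len(refs)

def hallucination_rate(answers, contexts):
    """Fraction of answers containing a token not present in their context."""
    hallucinated = 0
    for ans, ctx in zip(answers, contexts):
        ctx_tokens = set(ctx.lower().split())
        if any(tok not in ctx_tokens for tok in ans.lower().split()):
            hallucinated += 1
    return hallucinated / len(answers)

preds = ["Paris", "Berlin", "Madrid"]
refs = ["Paris", "Berlin", "Lisbon"]
print(exact_match_accuracy(preds, refs))  # 2 of 3 match

contexts = ["the capital of france is paris", "berlin is in germany"]
answers = ["paris", "berlin munich"]  # second answer invents "munich"
print(hallucination_rate(answers, contexts))
```

Tracking both together matters: a model can score well on accuracy while still inventing details the grounding context never contained.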

Posted 1 day ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

About Kanerika: Who we are
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI/ML, GenAI/LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Awards and Recognitions
Kanerika has won several awards over the years, including:
- CMMI Level 3 Appraised in 2024
- Best Place to Work 2022 & 2023 by Great Place to Work®
- Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
- NASSCOM Emerge 50 Award in 2014
- Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
- Recognized for ISO 27701, ISO 27001, SOC 2, and GDPR compliance
- Featured as a Top Data Analytics Services Provider by GoodFirms

Working for us
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees. Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance.
Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.

Locations
We are located in Austin (USA), Singapore, and Hyderabad, Indore, and Ahmedabad (India).
Job Location: Hyderabad, Indore, and Ahmedabad (India)

Requirements

Key Responsibilities:
- Lead the design and development of AI-driven applications, particularly focusing on RAG-based chatbot solutions.
- Architect robust solutions leveraging Python and Java to ensure scalability, reliability, and maintainability.
- Deploy, manage, and scale AI applications using AWS cloud infrastructure, optimizing performance and resource utilization.
- Collaborate closely with cross-functional teams to understand requirements, define project scopes, and deliver solutions effectively.
- Mentor team members, providing guidance on best practices in software development, AI methodologies, and cloud deployments.
- Ensure solutions meet quality standards, including thorough testing, debugging, performance tuning, and documentation.
- Continuously research emerging AI technologies and methodologies to incorporate best practices and innovation into our products.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or related fields.
- At least 5 years of professional experience in AI/machine learning engineering.
- Strong programming skills in Python and Java.
- Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications.
- Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway.
- Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations.
- Experience leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes.
- AWS certifications (e.g., AWS Solutions Architect, AWS Machine Learning Specialty).
- Familiarity with popular AI/ML frameworks and libraries such as Hugging Face Transformers, TensorFlow, PyTorch, LangChain, or similar.
- Experience in Agile development methodologies.
- Excellent communication skills, capable of conveying complex technical concepts clearly and effectively.
- Strong analytical and problem-solving capabilities, with the ability to navigate ambiguous technical challenges.

Benefits

Employee Benefits

1. Culture:
- Open Door Policy: encourages open communication and accessibility to management.
- Open Office Floor Plan: fosters a collaborative and interactive work environment.
- Flexible Working Hours: allows employees flexibility in their work schedules.
- Employee Referral Bonus: rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: provides regular performance evaluations and feedback.

2. Inclusivity and Diversity:
- Hiring practices that promote diversity: ensures a diverse and inclusive workforce.
- Mandatory POSH training: promotes a safe and respectful work environment.

3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: offers medical coverage and financial protection.
- Health Insurance: provides coverage for medical expenses.
- Disability Insurance: offers financial support in case of disability.

4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: creates opportunities for employees and their families to bond.
- Generous Parental Leave: allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: offers leave for employees to take care of family members' medical needs.

5. Perks and Time-Off Benefits:
- Company-sponsored outings: organizes recreational activities for employees.
- Gratuity: provides a monetary benefit as a token of appreciation.
- Provident Fund: helps employees save for retirement.
- Generous PTO: offers more than the industry standard for paid time off.
- Paid sick days: allows employees to take paid time off when they are unwell.
- Paid holidays: gives employees paid time off for designated holidays.
- Bereavement Leave: provides time off for employees to grieve the loss of a loved one.

6. Professional Development Benefits:
- L&D with FLEX, an enterprise learning repository: provides access to a learning repository for professional development.
- Mentorship Program: offers guidance and support from experienced professionals.
- Job Training: provides training to enhance job-related skills.
- Professional Certification Reimbursements: assists employees in obtaining professional certifications.
- Promote from Within: encourages internal growth and advancement opportunities.
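The retrieval step at the heart of the RAG-based chatbots this posting centers on can be sketched without any dependencies. Real systems embed text with a model and store vectors in a vector database; here the "embeddings" are hand-made toy vectors so the ranking logic is visible, and all names and passages are invented.

```python
# Hypothetical, dependency-free sketch of RAG retrieval: score stored
# passages by cosine similarity to a query vector, return the top k,
# and pass them to the LLM as grounding context.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """Return the top-k (score, text) passages most similar to the query."""
    scored = [(cosine(query_vec, vec), text) for text, vec in corpus]
    return sorted(scored, reverse=True)[:k]

corpus = [
    ("Claims are processed within 30 days.", [0.9, 0.1, 0.0]),
    ("Our office is closed on holidays.",    [0.0, 0.2, 0.9]),
]
top = retrieve([1.0, 0.0, 0.1], corpus, k=1)
print(top[0][1])  # the claims passage scores highest
```

In the production stack the posting describes, the corpus would live in a vector database, the vectors would come from an embedding model, and the retrieved passages would be stitched into the LLM prompt.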

Posted 1 day ago

Apply

4.0 years

2 - 7 Lacs

Hyderābād

On-site

DESCRIPTION
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

It’s truly Day 1 for our team in AWS. This is your opportunity to be a member of a team that’s building a suite of AWS Apps and Services to tackle a huge new problem space. You’ll be an integral part of testing the apps built by services that leverage AWS technologies like SageMaker, Forecast, Athena, QuickSight, Glue, Bedrock, ML, and more. As a QA member of the team, you’ll wear many hats. You’ll help design the overall test strategy and test plan, contribute to the product vision, and establish the technology processes and practices that will lay the groundwork for the organization as it grows. An ideal candidate is an experienced Software QA Engineer with a development and/or QA background who can direct the activities of a growing team.

The successful candidate should be able to apply QA processes, practices, and principles to software development and release processes; apply their experience with a variety of software QA tools to accomplish these processes; and describe requirements for new scripts, tools, and automation needed by their team. Responsibilities include defining test strategy and test plans, reviewing them with stakeholders, improving test coverage, reviewing and filling gaps in existing automation, representing the customer, understanding how customers use the system, and including the most relevant end-to-end user scenarios in test plans and automation.

Responsibilities:
- Understand how all elements of the system software ecosystem work together and develop QA approaches that fit the overall strategy
- Develop test strategies and create appropriate test harnesses
- Provide test infrastructure to enable engineering teams to test and own quality of the services
- Act as a stakeholder of the release to ensure defects are fixed per SLA and the end-customer experience is protected and improved
- Develop and execute test plans; monitor and report on test execution and quality metrics
- Coordinate with the offshore Quality Service team on test execution and sign-off

A Day in the Life
Diverse Experiences: Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.

Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

BASIC QUALIFICATIONS
- 4+ years of quality assurance engineering experience
- Experience in automation testing
- Experience scripting or coding
- Experience in manual testing
- Experience in at least one modern programming language such as Python, Java, or Perl

PREFERRED QUALIFICATIONS
- Deep hands-on technical expertise
- Experience with at least one automated test framework such as Selenium, Appium, or Cypress
- Experience gathering test requirements to create detailed test plans and defining quality metrics to measure product quality
- A deep understanding of automation testing, having led engineers who can write automation scripts/programs that aid automated testing
- Experience working in the supply chain domain

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
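The automated-testing work the QA role above describes bottoms out in test cases like the following hypothetical example: the function under test and its behavior are invented, but the structure (a test class per unit, one assertion-focused method per behavior) is the standard unittest shape that larger Selenium- or Cypress-driven suites are organized around.

```python
# Hypothetical unittest example: a tiny function under test plus two test
# methods, run programmatically so the result can be inspected.
import unittest

def claim_total(line_items):
    """Sum claim line amounts; negative entries are rejected."""
    if any(x < 0 for x in line_items):
        raise ValueError("negative line item")
    return sum(line_items)

class ClaimTotalTest(unittest.TestCase):
    def test_sums_line_items(self):
        self.assertEqual(claim_total([10, 20, 5]), 35)

    def test_rejects_negative_amounts(self):
        with self.assertRaises(ValueError):
            claim_total([10, -1])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ClaimTotalTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

A test plan of the kind the posting describes is largely a catalog of such cases, plus the end-to-end user scenarios layered on top of them.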

Posted 1 day ago

Apply

50.0 years

7 - 8 Lacs

Gurgaon

On-site

About the Opportunity
Job Type: Permanent
Application Deadline: 05 August 2025
Title: Senior Analyst - Data Science
Department: Enterprise Data & Analytics
Location: Gurgaon
Reports To: Gaurav Shekhar
Level: Data Scientist 4

We’re proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our team and feel like you’re part of something bigger.

About your team
Join the Enterprise Data & Analytics team — collaborating across Fidelity’s global functions to empower the business with data-driven insights that unlock business opportunities, enhance client experiences, and drive strategic decision-making.

About your role
As a key contributor within the Enterprise Data & Analytics team, you will lead the development of machine learning and data science solutions for Fidelity Canada. This role is designed to turn advanced analytics into real-world impact: driving growth, enhancing client experiences, and informing high-stakes decisions. You’ll design, build, and deploy ML models on cloud and on-prem platforms, leveraging tools like AWS SageMaker, Snowflake, Adobe, Salesforce, etc. Collaborating closely with business stakeholders, data engineers, and technology teams, you’ll translate complex challenges into scalable AI solutions. You’ll also champion the adoption of cloud-based analytics, contribute to MLOps best practices, and support the team through mentorship and knowledge sharing. This is a high-impact role for a hands-on problem solver who thrives on ownership, innovation, and seeing their work directly influence strategic outcomes.

About you
You have 4-7 years of experience in the data science domain, with a strong track record of delivering advanced machine learning solutions for business. You’re skilled in developing models for classification, forecasting, and recommender systems, and hands-on with frameworks like Scikit-learn, TensorFlow, or PyTorch. You bring deep expertise in developing and deploying models on AWS SageMaker, strong business problem-solving abilities, and familiarity with emerging GenAI trends. A background in engineering, mathematics, or economics from a Tier 1 institution is preferred.

Feel rewarded
For starters, we’ll offer you a comprehensive benefits package. We’ll value your wellbeing and support your development. And we’ll be as flexible as we can about where and when you work – finding a balance that works for all of us. It’s all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.

Posted 1 day ago

Apply

6.0 - 8.0 years

22 - 23 Lacs

Indore, Madhya Pradesh, India

On-site

Company Description Optimum Data Analytics is a strategic technology partner delivering reliable turn key AI solutions. Our streamlined approach to development ensures high-quality results and client satisfaction. We bring experience and clarity to organizations, powering every human decision with analytics & AI Our team consists of statisticians, computer science engineers, data scientists, and product managers. With expertise, flexibility, and cultural alignment, we understand the business, analytics, and data management imperatives of your organization. Our goal is to change how AI/ML is approached in the service sector and deliver outcomes that matter. We provide best-in-class services that increase profit for businesses and deliver improved value for customers, helping businesses grow, transform, and achieve their objectives. Job Details Position : ML Engineer Experience : 6-8 years Location : Pune/Indore office Work Mode : Onsite Notice Period : Immediate Joiner – 15 days Job Summary We are looking for highly motivated and experienced Machine Learning Engineers to join our advanced analytics and AI team. The ideal candidates will have strong proficiency in building, training, and deploying machine learning models at scale using modern ML tools and frameworks. Experience with LLMs (Large Language Models) such as OpenAI and Hugging Face Transformers is highly desirable. Key Responsibilities Design, develop, and deploy machine learning models for real-world applications. Implement and optimize end-to-end ML pipelines using PySpark and MLflow. Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries. Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch. Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc. Collaborate with cross-functional teams including data engineers, product managers, and software developers. 
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficiency with MLflow for experiment tracking and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience with cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Skills: MLflow, large language models, Python, MLOps, PyTorch, Pandas, scikit-learn, TensorFlow, PySpark, NumPy, LLMs, machine learning
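The "evaluation techniques" the listing asks for boil down to comparing predictions against ground truth. A minimal, stdlib-only sketch of the three metrics most postings name (accuracy, precision, recall); in practice scikit-learn's metrics module provides these, and the labels here are illustrative:

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, precision, and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    # Guard against division by zero when a class is never predicted.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

acc, prec, rec = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

With one false positive and one false negative on six samples, both precision and recall land at 2/3.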

Posted 1 day ago

Apply

4.0 years

2 - 6 Lacs

Chennai

On-site

DESCRIPTION
As part of the AWS Solutions organization, we have a vision to provide business applications, leveraging Amazon’s unique experience and expertise, that are used by millions of companies worldwide to manage day-to-day operations. We will accomplish this by accelerating our customers’ businesses through delivery of intuitive and differentiated technology solutions that solve enduring business challenges. We blend vision with curiosity and Amazon’s real-world experience to build opinionated, turnkey solutions. Where customers prefer to buy over build, we become their trusted partner with solutions that are no-brainers to buy and easy to use.

Inclusive Team Culture
AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

It’s truly Day 1 for our team in AWS. This is your opportunity to be a member of a team that’s building a suite of AWS Apps and Services to tackle a huge new problem space. You’ll be an integral part of testing the apps built on services that leverage AWS technologies like SageMaker, Forecast, Athena, QuickSight, Glue, Bedrock, ML and more. As a QA member of the team, you’ll wear many hats. You’ll help design the overall test strategy and test plan, contribute to the product vision, and establish the technology processes and practices that will lay the groundwork for the organization as it grows. An ideal candidate is an experienced Software QA Engineer with a development and/or QA background who can direct the activities of a growing team.
The successful candidate should be able to apply QA processes, practices, and principles to software development and release processes, should apply their experience with a variety of software QA tools to accomplish these processes, and should be able to describe requirements for new scripts, tools, and automation needed by their team. Responsibilities include defining test strategy and test plans, reviewing them with stakeholders, improving test coverage, reviewing and filling gaps in existing automation, representing the customer, understanding how customers use the system, and including the most relevant end-to-end user scenarios in test plans and automation.

Responsibilities:
- Understanding how all elements of the system software ecosystem work together and developing QA approaches that fit the overall strategy
- Development of test strategies and creation of appropriate test harnesses
- Providing test infrastructure to enable engineering teams to test and own the quality of their services
- Being a stakeholder of the release, ensuring defects are fixed per SLA and the end-customer experience is protected and improved
- Development and execution of test plans, monitoring and reporting on test execution and quality metrics
- Coordinating with the offshore Quality Service team on test execution and sign-off

A day in the life
Diverse Experiences
Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.

Mentorship and Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

BASIC QUALIFICATIONS
- 4+ years of quality assurance engineering experience
- Experience in automation testing
- Experience scripting or coding
- Experience in manual testing
- Experience in at least one modern programming language such as Python, Java, or Perl

PREFERRED QUALIFICATIONS
- Deep hands-on technical expertise
- Experience with at least one automated test framework such as Selenium, Appium, or Cypress
- Experience gathering test requirements to create detailed test plans and defining quality metrics to measure product quality
- A deep understanding of automation testing and the ability to lead engineers who write automation scripts/programs that aid in automated testing
- Experience working in the supply chain domain

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
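The test-harness responsibilities above can be sketched with Python's stdlib unittest. Everything here is a hypothetical stand-in: `forecast_horizon` and its one-day-per-week rule are invented for illustration, standing in for whatever service behavior the real suite would exercise:

```python
import unittest

def forecast_horizon(history_days):
    # Toy rule standing in for a real service call: one forecast day per
    # week of history, never less than one day.
    return max(1, history_days // 7)

class ForecastHorizonTest(unittest.TestCase):
    def test_minimum_horizon(self):
        # Even with no history, the service should forecast one day.
        self.assertEqual(forecast_horizon(0), 1)

    def test_scales_with_history(self):
        # Four weeks of history should yield a four-day horizon.
        self.assertEqual(forecast_horizon(28), 4)

# Build and run the suite programmatically, as a CI harness might.
suite = unittest.TestLoader().loadTestsFromTestCase(ForecastHorizonTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A real harness would wrap deployed endpoints rather than a local function, but the structure (loader, suite, runner, machine-readable result) is the same one CI pipelines report on.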

Posted 1 day ago

Apply

0.0 - 2.0 years

1 - 5 Lacs

Chennai

Remote

Junior Machine Learning Engineer Role

Key Responsibilities:
- Machine Learning Development Support: Assist in designing, developing and deploying ML models and algorithms under the guidance of senior engineers, to tackle client challenges across banking, legal and related sectors.
- Cloud & MLOps Support: Help implement ML solutions on AWS (with emphasis on Amazon SageMaker). Contribute to building and maintaining CI/CD pipelines using infrastructure-as-code tools such as CloudFormation and Terraform to automate model training and deployment.
- Algorithm Implementation & Testing: Write clean, efficient Python code to implement ML algorithms and data pipelines. Conduct experiments, evaluate model performance (e.g. accuracy, precision, recall) and document results.
- Collaboration & Communication: Work closely with data scientists, ML engineers and DevOps teams to integrate models into production. Participate in sprint meetings and client calls, conveying technical updates in clear, concise terms.
- Quality, Documentation & Compliance: Maintain thorough documentation of data preprocessing steps, model parameters and deployment workflows. Follow data security best practices and ensure compliance with confidentiality requirements for financial and legal data.

Required Qualifications & Experience:
- Education: Bachelor’s degree in Computer Science, Engineering, Data Science or a closely related discipline.
- Experience: 0–2 years of practical exposure to machine learning or software development; this may include internships, academic projects or early professional roles.
- Programming & ML Skills: Proficiency in Python (including pandas, NumPy, scikit-learn). Basic understanding of ML concepts and model evaluation techniques.
- Cloud & DevOps Familiarity: Hands-on coursework or project experience with AWS (preferably SageMaker). Awareness of CI/CD principles and infrastructure-as-code tools (CloudFormation, Terraform).
- Hybrid Work Skills: Comfortable operating in a hybrid environment; able to collaborate effectively onsite in Chennai and maintain productivity when working remotely.
- Soft Skills: Strong analytical thinking, problem-solving aptitude and clear written/verbal communication. Demonstrated ability to learn quickly and work in a client-focused setting.
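The "documentation of data preprocessing steps" duty above has a concrete shape: record the parameters a transform was fitted with, so the identical transform can be re-applied at deployment. A stdlib-only sketch (scikit-learn's MinMaxScaler is the production equivalent; the numbers are illustrative):

```python
def fit_min_max(values):
    # Record the fitted parameters; in practice these would be persisted
    # alongside the model artifact so the transform is reproducible.
    return {"min": min(values), "max": max(values)}

def transform(values, params):
    span = (params["max"] - params["min"]) or 1.0  # guard constant columns
    return [(v - params["min"]) / span for v in values]

params = fit_min_max([10.0, 20.0, 30.0])
scaled = transform([10.0, 20.0, 30.0], params)  # maps into [0, 1]
```

Separating fit from transform is the point: the serving path calls only `transform` with the stored `params`, never re-fitting on production data.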

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Company Description
AppTestify is a leading provider of On-Demand Testing and Digital Engineering Services, delivering scalable solutions for businesses of all sizes. As a renowned software engineering and QA services provider, AppTestify enables faster deployment of superior software. With over 100 engineers, the company has served more than 120 clients globally. Their expertise includes DevOps, automation testing, API testing, functional testing, mobile testing, Salesforce application testing, and comprehensive digital engineering solutions.

Job Title: Machine Learning Engineer
Experience: 5 to 8 Years
Location: Pune

Job Description:
We are looking for highly motivated and experienced Machine Learning Engineers to join our advanced analytics and AI team. The ideal candidates will have strong proficiency in building, training, and deploying machine learning models at scale using modern ML tools and frameworks. Experience with LLMs (Large Language Models) such as OpenAI and Hugging Face Transformers is highly desirable.

Key Responsibilities:
- Design, develop, and deploy machine learning models for real-world applications.
- Implement and optimize end-to-end ML pipelines using PySpark and MLflow.
- Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries.
- Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch.
- Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc.
- Collaborate with cross-functional teams including data engineers, product managers, and software developers.
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills:
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficient with MLflow for tracking experiments and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience in cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Apply directly: https://hrms.apptestify.com/apply/688c7ed095f9ce582a3a224a

Posted 1 day ago

Apply

6.0 - 8.0 years

22 - 23 Lacs

Pune, Maharashtra, India

On-site

Company Description
Optimum Data Analytics is a strategic technology partner delivering reliable, turnkey AI solutions. Our streamlined approach to development ensures high-quality results and client satisfaction. We bring experience and clarity to organizations, powering every human decision with analytics and AI. Our team consists of statisticians, computer science engineers, data scientists, and product managers. With expertise, flexibility, and cultural alignment, we understand the business, analytics, and data management imperatives of your organization. Our goal is to change how AI/ML is approached in the service sector and deliver outcomes that matter. We provide best-in-class services that increase profit for businesses and deliver improved value for customers, helping businesses grow, transform, and achieve their objectives.

Job Details
Position: ML Engineer
Experience: 6-8 years
Location: Pune/Indore office
Work Mode: Onsite
Notice Period: Immediate joiner – 15 days

Job Summary
We are looking for highly motivated and experienced Machine Learning Engineers to join our advanced analytics and AI team. The ideal candidates will have strong proficiency in building, training, and deploying machine learning models at scale using modern ML tools and frameworks. Experience with Large Language Models (LLMs) such as OpenAI and Hugging Face Transformers is highly desirable.

Key Responsibilities
- Design, develop, and deploy machine learning models for real-world applications.
- Implement and optimize end-to-end ML pipelines using PySpark and MLflow.
- Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries.
- Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch.
- Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc.
- Collaborate with cross-functional teams including data engineers, product managers, and software developers.
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficiency with MLflow for experiment tracking and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience with cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Skills: MLflow, large language models, Python, MLOps, PyTorch, Pandas, scikit-learn, TensorFlow, PySpark, NumPy, LLMs, machine learning

Posted 1 day ago

Apply

0.0 - 5.0 years

0 Lacs

Pune, Maharashtra

On-site

Position: ML Engineer
Experience: 6-8 years
Location: Pune/Indore
Work Mode: Onsite

Key Responsibilities:
- Design, develop, and deploy machine learning models for real-world applications.
- Implement and optimize end-to-end ML pipelines using PySpark and MLflow.
- Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries.
- Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch.
- Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc.
- Collaborate with cross-functional teams including data engineers, product managers, and software developers.
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills:
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficient with MLflow for tracking experiments and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience in cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Job Type: Full-time
Pay: ₹2,200,000.00 per year
Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Preferred)
Experience:
- Python: 5 years (Required)
- MLflow: 5 years (Required)
- LLM: 5 years (Required)
Work Location: In person

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Machine Learning Engineer
Experience: 5-8 yrs
Work Mode: Onsite (Pune/Indore)
Employment Type: Contract
Duration: 6 months (can be extended based on performance)

Job Description:
We are looking for highly motivated and experienced Machine Learning Engineers to join our advanced analytics and AI team. The ideal candidates will have strong proficiency in building, training, and deploying machine learning models at scale using modern ML tools and frameworks. Experience with LLMs (Large Language Models) such as OpenAI and Hugging Face Transformers is highly desirable.

Key Responsibilities:
- Design, develop, and deploy machine learning models for real-world applications.
- Implement and optimize end-to-end ML pipelines using PySpark and MLflow.
- Work with structured and unstructured data using Pandas, NumPy, and other data processing libraries.
- Train and fine-tune models using scikit-learn, TensorFlow, or PyTorch.
- Integrate and experiment with Large Language Models (LLMs) such as OpenAI GPT, Hugging Face Transformers, etc.
- Collaborate with cross-functional teams including data engineers, product managers, and software developers.
- Monitor model performance and continuously improve model accuracy and reliability.
- Maintain proper versioning and reproducibility of ML experiments using MLflow.

Required Skills:
- Strong programming experience in Python.
- Solid understanding of machine learning algorithms, model development, and evaluation techniques.
- Experience with PySpark for large-scale data processing.
- Proficient with MLflow for tracking experiments and model lifecycle management.
- Hands-on experience with Pandas, NumPy, and scikit-learn.
- Familiarity or hands-on experience with LLMs (e.g., OpenAI, Hugging Face Transformers).
- Understanding of MLOps principles and deployment best practices.

Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, AI/ML, Data Science, or a related field.
- Experience in cloud ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI) is a plus.
- Strong analytical and problem-solving abilities.
- Excellent communication and teamwork skills.

Posted 1 day ago

Apply

50.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Client:
Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services.

Job Details:
Position: Data Analyst - AI & Bedrock
Experience Required: 6-10 yrs
Notice: Immediate
Work Location: Pune
Mode of Work: Hybrid
Type of Hiring: Contract to Hire

Job Description: FAS - Data Analyst - AI & Bedrock Specialization

About Us:
We are seeking a highly experienced and visionary Data Analyst with a deep understanding of artificial intelligence (AI) principles and hands-on expertise with cutting-edge tools like Amazon Bedrock. This role is pivotal in transforming complex datasets into actionable insights, enabling data-driven innovation across our organization.

Role Summary:
The Lead Data Analyst, AI & Bedrock Specialization, will be responsible for spearheading advanced data analytics initiatives, leveraging AI and generative AI capabilities, particularly with Amazon Bedrock. With 5+ years of experience, you will lead the design, development, and implementation of sophisticated analytical models, provide strategic insights to stakeholders, and mentor a team of data professionals. This role requires a blend of strong technical skills, business acumen, and a passion for pushing the boundaries of data analysis with AI.
Key Responsibilities:
- Strategic Data Analysis & Insight Generation:
  - End-to-end data analysis projects, from defining business problems to delivering actionable insights that influence strategic decisions.
  - Utilize advanced statistical methods, machine learning techniques, and AI-driven approaches to uncover complex patterns and trends in large, diverse datasets.
  - Develop and maintain comprehensive dashboards and reports, translating complex data into clear, compelling visualizations and narratives for executive and functional teams.
- AI/ML & Generative AI Implementation (Bedrock Focus):
  - Implement data analytical solutions leveraging Amazon Bedrock, including selecting appropriate foundation models (e.g., Amazon Titan, Anthropic Claude) for specific use cases (text generation, summarization, complex data analysis).
  - Design and optimize prompts for Large Language Models (LLMs) to extract meaningful insights from unstructured and semi-structured data within Bedrock.
  - Explore and integrate other AI/ML services (e.g., Amazon SageMaker, Amazon Q) to enhance data processing, analysis, and automation workflows.
  - Contribute to the development of AI-powered agents and intelligent systems for automated data analysis and anomaly detection.
- Data Governance & Quality Assurance:
  - Ensure the accuracy, integrity, and reliability of data used for analysis.
  - Develop and implement robust data cleaning, validation, and transformation processes.
  - Establish best practices for data management, security, and governance in collaboration with data engineering teams.
- Technical Leadership & Mentorship:
  - Evaluate and recommend new data tools, technologies, and methodologies to enhance analytical capabilities.
  - Collaborate with cross-functional teams, including product, engineering, and business units, to understand requirements and deliver data-driven solutions.
- Research & Innovation:
  - Stay abreast of the latest advancements in AI, machine learning, and data analytics trends, particularly concerning generative AI and cloud-based AI services.
  - Proactively identify opportunities to apply emerging technologies to solve complex business challenges.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field.
- 5+ years of progressive experience as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong portfolio of successful data-driven projects.
- Proven hands-on experience with AI/ML concepts and tools, with a specific focus on Generative AI and Large Language Models (LLMs).
- Demonstrable experience with Amazon Bedrock is essential, including knowledge of its foundation models, prompt engineering, and the ability to build AI-powered applications.
- Expert-level proficiency in SQL for data extraction and manipulation from various databases (relational, NoSQL).
- Advanced proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) or R for data analysis, statistical modeling, and scripting.
- Strong experience with data visualization tools such as Tableau, Power BI, Qlik Sense, or similar, with a focus on creating insightful and interactive dashboards.
- Experience with cloud platforms (AWS preferred) and related data services (e.g., S3, Redshift, Glue, Athena).
- Excellent analytical, problem-solving, and critical thinking skills.
- Strong communication and presentation skills, with the ability to convey complex technical findings to non-technical stakeholders.
- Ability to work independently and collaboratively in a fast-paced, evolving environment.

Preferred Qualifications:
- Experience with other generative AI frameworks or platforms (e.g., OpenAI, Google Cloud AI).
- Familiarity with data warehousing concepts and ETL/ELT processes.
- Knowledge of big data technologies (e.g., Spark, Hadoop).
- Experience with MLOps practices for deploying and managing AI/ML models.

Learn about building AI agents with Bedrock and Knowledge Bases to understand how these tools revolutionize data analysis and customer service.
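The prompt-engineering duty described above usually starts with composing a grounded prompt from the data itself before any model call. A minimal sketch of that step; the template wording, metric name, and segment values are all invented for illustration, and in production the resulting string would be sent through the Bedrock runtime API rather than printed:

```python
def build_summary_prompt(metric, rows):
    # One bullet per segment keeps the model's input grounded in the
    # actual figures instead of a vague description of them.
    lines = [f"- {row['segment']}: {row[metric]}" for row in rows]
    return (
        f"You are a data analyst. Summarize the {metric} figures below "
        "in two sentences and flag any outlier.\n" + "\n".join(lines)
    )

prompt = build_summary_prompt(
    "churn_rate",
    [{"segment": "retail", "churn_rate": 0.04},
     {"segment": "enterprise", "churn_rate": 0.19}],
)
```

Keeping the template in code (rather than hand-written per query) is what makes prompt variants testable and comparable across foundation models.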

Posted 1 day ago

Apply

1.0 - 7.0 years

0 Lacs

maharashtra

On-site

We are seeking an experienced AI Data Analyst with over 7 years of professional experience, showcasing leadership in tech projects. The ideal candidate will possess strong proficiency in Python, Machine Learning, AI APIs, and Large Language Models (LLMs). You will have the opportunity to work on cutting-edge AI solutions, including vector-based search and data-driven business insights.

Your experience should include:
- At least 2 years of hands-on experience as a Data Analyst.
- Practical experience of at least 1 year with AI systems such as LLMs, AI APIs, or vector-based search.
- 2+ years of experience working with Machine Learning models and solutions.
- A strong background of 5+ years in Python programming.
- Exposure to vector databases like pgvector and ChromaDB is considered a plus.

Key Responsibilities:
- Conduct data exploration, profiling, and cleaning on large datasets.
- Design, implement, and evaluate machine learning and AI models to address business problems.
- Utilize LLM APIs, foundation models, and vector databases to support AI-driven analysis.
- Construct end-to-end ML workflows, from data preprocessing to deployment.
- Develop visualizations and dashboards for internal reports and presentations.
- Analyze and interpret model outputs, providing actionable insights to stakeholders.
- Collaborate with engineering and product teams to implement AI solutions across business processes.

Required Skills:
Data Analysis:
- At least 1 year of hands-on work with real-world datasets.
- Proficiency in Exploratory Data Analysis (EDA), data wrangling, and visualization using tools like Pandas, Seaborn, or Plotly.

Machine Learning & AI:
- At least 2 years applying machine learning techniques (classification, regression, clustering, etc.).
- Hands-on experience with AI technologies such as Generative AI, LLMs, AI APIs (e.g., OpenAI, Hugging Face), and vector-based search systems.
- Knowledge of model evaluation, hyperparameter tuning, and model selection.
- Exposure to AI-driven analysis, including RAG (Retrieval-Augmented Generation) and other AI solution architectures.

Programming:
- At least 3 years of Python programming, with expertise in libraries like scikit-learn, NumPy, Pandas, etc.
- Strong understanding of data structures and algorithms relevant to AI and ML.

Tools & Technologies:
- Proficiency in SQL/PostgreSQL.
- Familiarity with vector databases like pgvector and ChromaDB.
- Exposure to LLMs, foundation models, RAG systems, and embedding techniques.
- Familiarity with cloud platforms such as AWS, SageMaker, or similar.
- Knowledge of version control systems (e.g., Git), REST APIs, and Linux.

Good to Have:
- Experience with tools like Scrapy, SpaCy, or OpenCV.
- Knowledge of MLOps, model deployment, and CI/CD pipelines.
- Familiarity with deep learning frameworks like PyTorch or TensorFlow.

Soft Skills:
- A strong problem-solving mindset and analytical thinking.
- Excellent communication skills, with the ability to convey technical information clearly to non-technical stakeholders.
- Collaborative, proactive, and self-driven in a fast-paced, dynamic environment.

If you meet the above requirements and are eager to contribute to a dynamic team, share your resume with kajal.uklekar@arrkgroup.com. We look forward to welcoming you to our team in Mahape, Navi Mumbai for a hybrid work arrangement. Immediate joiners are preferred.
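The vector-based search the listing keeps mentioning reduces to one idea: rank stored embeddings by cosine similarity to a query vector. A stdlib-only sketch with tiny made-up 3-dimensional "embeddings" (real ones have hundreds of dimensions, and the ranking would be pushed into pgvector or ChromaDB rather than done in Python):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    # Rank documents by similarity to the query vector, best first.
    scored = sorted(docs, key=lambda d: cosine(query, d["vec"]), reverse=True)
    return [d["id"] for d in scored[:k]]

docs = [
    {"id": "loan-faq", "vec": [0.9, 0.1, 0.0]},
    {"id": "contract-clause", "vec": [0.0, 0.2, 0.98]},
    {"id": "rate-table", "vec": [0.8, 0.3, 0.1]},
]
hits = top_k([1.0, 0.0, 0.0], docs)
```

This nearest-neighbor lookup is also the retrieval half of the RAG architectures the listing names: the top-k documents are what gets stuffed into the LLM prompt.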

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Please Read Carefully Before Applying
Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below. Do NOT apply if you are not in Delhi or the NCR, or are unwilling to relocate. This is NOT a WFH opportunity. We work 5 days from office, so please do NOT apply if you are looking for hybrid or WFH.

About Gigaforce
Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience.

Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California.

At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js. We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued.
Our employees enjoy comprehensive medical benefits, equity participation, meal cards and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive. We're seeking a NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market. Key Responsibilities Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval. Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering. Work on scalable, cloud-based pipelines for training, serving, and monitoring models. Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts. Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers. Utilize and contribute to open-source tools and frameworks in the ML ecosystem. Deploy production-ready solutions using MLOps practices : Docker, Kubernetes, Airflow, MLflow, etc. Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows. 
Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, HuggingFace). Champion best practices in model validation, reproducibility, and responsible AI. Required Skills & Qualifications 2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer. Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification). Proven expertise in modern NLP & GenAI models, including: Transformer architectures (e.g., BERT, GPT, T5) Generative tasks: summarization, QA, chatbots, etc. Fine-tuning & prompt engineering for LLMs Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML). Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, Scikit-learn. Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred. Experience in document processing pipelines and understanding structured/unstructured insurance documents is a big plus. Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, KubeFlow, Airflow. Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks). Preferred Qualifications Experience deploying GenAI models in production environments. Contributions to open-source projects in ML/NLP/LLM space. Background in insurance, legal, or financial domain involving text-heavy workflows. Strong understanding of data privacy, ethical AI, and responsible model usage. (ref:hirist.tech)
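The prompt-engineering work this role describes can be sketched minimally in plain Python. The template, field names, and word limit below are illustrative assumptions, not Gigaforce's actual prompts:

```python
def build_summarization_prompt(document: str, doc_type: str = "insurance claim",
                               max_words: int = 120) -> str:
    """Compose an instruction-style prompt for an LLM summarization call.

    The wording is a hypothetical example; production prompts are tuned
    per model (GPT, T5, etc.) and per document type.
    """
    return (
        f"You are an assistant that summarizes {doc_type} documents.\n"
        f"Summarize the document below in at most {max_words} words, "
        "preserving claim numbers, dates, and monetary amounts.\n\n"
        f"Document:\n{document.strip()}\n\nSummary:"
    )

prompt = build_summarization_prompt("Claim #123: rear-end collision on 2024-01-05 ...")
```

The resulting string would then be passed to whichever LLM endpoint the pipeline targets.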

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

As the A.I. Lead at Airbus Innovation Centre India & South Asia (AIC), you will play a pivotal role in leading a team of data scientists and data analysts to develop cutting-edge AI products within the aviation industry. With a minimum of 10 years of hands-on experience in AI, you will guide the team in the creation of innovative solutions with a particular focus on Large Language Models (LLM), Computer Vision, and open-source machine learning models. Your expertise in system engineering, especially Model-Based Systems Engineering (MBSE), will be essential in ensuring the successful integration of AI technologies into aviation systems while adhering to safety and regulatory standards. Your responsibilities will include leading and managing the team to design and deploy AI-driven solutions for aviation, collaborating with cross-functional teams to define project goals, and providing technical leadership in selecting and optimizing machine learning models. You will drive the development of AI algorithms to address complex aviation challenges such as predictive maintenance and anomaly detection. Additionally, your role will involve mentoring team members, staying updated with the latest advancements in AI, and communicating project progress effectively to stakeholders. To excel in this role, you should have a proven track record of successfully leading teams in AI product development, a deep understanding of system engineering principles, and extensive experience in designing and optimizing AI models using Python and relevant libraries such as TensorFlow and PyTorch. Effective communication skills, problem-solving abilities, and a data-driven approach to decision-making are crucial for collaborating with cross-functional teams and conveying technical concepts to non-technical stakeholders. A master's or doctoral degree in a relevant field is preferred. 
This permanent position at Airbus India Private Limited offers a professional level of experience and the opportunity to work in a collaborative and innovative environment. By applying for this role, you consent to Airbus using and storing information about you for monitoring purposes. Airbus is committed to equal opportunities and does not engage in any monetary exchanges during the recruitment process. Flexibility and innovation are encouraged at Airbus, and flexible working arrangements are supported to facilitate a conducive work environment.

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Salem, Tamil Nadu

On-site

This is a key position that will play a pivotal role in creating data-driven technology solutions to establish our client as a leader in healthcare, financial, and clinical administration. As the Lead Data Scientist, you will be instrumental in building and implementing machine learning models and predictive analytics solutions that will spearhead the new era of AI-driven innovation in the healthcare industry. Your responsibilities will involve developing and implementing a variety of ML/AI products, from conceptualization to production, to help the organization gain a competitive edge in the market. Working closely with the Director of Data Science, you will operate at the crossroads of healthcare, finance, and cutting-edge data science to tackle some of the most intricate challenges faced by the industry. This role presents a unique opportunity within VHT's Product Transformation division to create pioneering machine learning capabilities from scratch. You will have the chance to shape the future of VHT's data science & analytics foundation, utilizing state-of-the-art tools and methodologies within a collaborative and innovation-focused environment. Key Responsibilities: - Lead the development of predictive machine learning models for Revenue Cycle Management analytics, focusing on areas such as: - Claim Denials Prediction: identifying high-risk claims before submission - Cash Flow Forecasting: predicting revenue timing and patterns - Patient-Related Models: enhancing patient financial experience and outcomes - Claim Processing Time Prediction: optimizing workflow and resource allocation - Explore emerging areas and integration opportunities, e.g., denial prediction + appeal success probability or prior authorization prediction + approval likelihood models. 
VHT Technical Environment: - Cloud Platform: AWS (SageMaker, S3, Redshift, EC2) - Development Tools: Jupyter Notebooks, Git, Docker - Programming: Python, SQL, R (optional) - ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow - Data Processing: Spark, Pandas, NumPy - Visualization: Matplotlib, Seaborn, Plotly, Tableau Required Qualifications: - Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field - 5+ years of hands-on data science experience with a proven track record of deploying ML models to production - Expert-level proficiency in SQL and Python, with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.) - Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable) - Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques - Experience with model deployment, monitoring, and MLOps practices - Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders Preferred Qualifications: - US Healthcare industry experience, particularly in Health Insurance and/or Medical Revenue Cycle Management - Experience with healthcare data standards (HL7, FHIR, X12 EDI) - Knowledge of healthcare regulations (HIPAA, compliance requirements) - Experience with deep learning frameworks (TensorFlow, PyTorch) - Familiarity with real-time streaming data processing - Previous leadership or mentoring experience.
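At their simplest, the denial-prediction models listed above reduce to a calibrated score over claim features. A toy logistic scorer, with made-up coefficients standing in for what scikit-learn or SageMaker would learn from real data:

```python
import math

# Hypothetical coefficients -- in practice these would be learned from
# historical claims with scikit-learn or SageMaker, as the posting describes.
WEIGHTS = {"missing_auth": 1.8, "coding_mismatch": 1.2, "days_to_submit": 0.05}
BIAS = -2.0

def denial_risk(claim: dict) -> float:
    """Logistic score in (0, 1): probability-like risk that a claim is denied."""
    z = BIAS + sum(WEIGHTS[k] * claim.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low = denial_risk({"missing_auth": 0, "coding_mismatch": 0, "days_to_submit": 2})
high = denial_risk({"missing_auth": 1, "coding_mismatch": 1, "days_to_submit": 30})
```

A clean claim submitted promptly scores low; a claim missing authorization with a coding mismatch and a long submission delay scores high, which is the behavior a production model would be validated for.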

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI & ML, you will be responsible for designing, developing, and maintaining intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or Scikit-learn, and utilize Langchain for GenAI pipelines. Experience with LangGraph will be advantageous in this role. Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will be working with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Additionally, managing CI/CD pipelines for backend and model deployments will be a key part of your responsibilities. Ensuring the performance, scalability, and security of applications in cloud environments will also fall under your purview. To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or Scikit-learn is required, as well as experience with Langchain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial. An understanding of microservices architecture, CI/CD workflows, and agile development practices is also desirable. Nice to have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases, as well as exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. 
Additionally, familiarity with MLOps tools and practices for model monitoring, versioning, and retraining will be advantageous. This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, and the schedule involves morning day shifts, Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.
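The backend/model integration this role centers on boils down to a validate-predict-respond contract. A framework-agnostic sketch: in a real deployment a FastAPI or Django route would wrap a handler like this behind a POST endpoint, and the field names here are hypothetical:

```python
import json

def predict_handler(body: str) -> dict:
    """Validate a JSON request and return a model response envelope."""
    payload = json.loads(body)
    features = payload.get("features")
    if not isinstance(features, list) or not features:
        return {"status": 422, "error": "features must be a non-empty list"}
    # Stand-in for model.predict(); a real service would call a loaded
    # scikit-learn/PyTorch model or a SageMaker endpoint here.
    score = sum(features) / len(features)
    return {"status": 200, "prediction": round(score, 4)}

ok = predict_handler('{"features": [0.2, 0.4, 0.6]}')
bad = predict_handler('{"features": []}')
```

Keeping the handler pure like this makes it easy to unit-test independently of the web framework and the deployed model.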

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism SAP Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: We are looking for a seasoned AWS Data Engineer. 
Responsibilities Design and implement AI/ML/GenAI models using AWS services such as AWS Bedrock, SageMaker, Comprehend, Rekognition, and others. Strong programming skills in Python, R, etc. Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn. Knowledge of data preprocessing, feature engineering, and model evaluation techniques. Develop and deploy generative AI solutions to solve complex business problems and improve operational efficiency. Collaborate with data scientists, engineers, and product teams to understand requirements and translate them into technical solutions. Optimize and fine-tune machine learning models for performance and scalability. Ensure the security, reliability, and scalability of AI/ML solutions by adhering to best practices. Maintain and update existing AI/ML models to ensure they meet evolving business needs. Stay up to date with the latest advancements in AI/ML and GenAI technologies and integrate relevant innovations into our solutions. Provide technical guidance and mentorship to junior developers and team members. Excellent problem-solving skills and ability to work in a fast-paced, collaborative environment. Good to have: AWS Certified Machine Learning Specialty or other relevant AWS certifications. 
Mandatory Skill Sets Cloud (AWS, Azure, GCP) services such as GCP BigQuery, Dataform, AWS Redshift; Python Preferred Skill Sets DevOps Years of experience required: 4-8 Years Education Qualification BE/B.Tech/MBA/MCA/M.Tech Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Bachelor of Technology Degrees/Field Of Study Preferred Certifications (if blank, certifications not specified) Required Skills AWS DevOps, Data Engineering Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline + 27 more Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship No Government Clearance Required No Job Posting End Date
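For the AWS Bedrock work mentioned above, the invoke call takes a JSON-serialized, model-family-specific request body. A sketch for an Anthropic-style messages body; other families (Titan, Llama) expect different keys, so treat the schema as an assumption to verify against the Bedrock documentation:

```python
import json

def bedrock_request_body(prompt: str, max_tokens: int = 512) -> str:
    """Serialize a messages-style body for a bedrock-runtime invoke_model() call.

    The keys below follow the Anthropic messages format used on Bedrock;
    this is a sketch, not a universal contract across model families.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# With boto3 this string would be passed as the body= argument:
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(modelId=..., body=bedrock_request_body("Summarize ..."))
payload = bedrock_request_body("Summarize this claim note.")
```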

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Skills & Qualifications Experience - 4-6 years in real-world data analysis Qualification - Master’s/PhD in statistics, M.Tech, M.Pharma Strong foundation in large database analysis, biostatistics, clinical trials, observational research, and epidemiology Experience with handling large databases like administrative claims, electronic health records, and patient chart reviews Ability to manage multiple projects and deliver results under tight timelines Excellent interpersonal skills and analytical thinking Tool & Platform Expertise Healthcare coding systems: ICD-9 and ICD-10, HCPCS, CPT, NDC, etc. Use of programming languages and tools such as SAS, R, R-Shiny, SQL, Python, Power BI Familiarity with RWE platforms like AWS, SageMaker, Azure and data standards like CDISC Core Responsibilities Data Analysis & Interpretation Analyse large datasets from sources like electronic health records, claims databases, and registries Apply statistical methods to assess patient journey, treatment outcomes, and healthcare utilization, and provide key insights and takeaways Study Design & Execution Develop protocols, statistical analysis plans, and research proposals Conduct observational studies and retrospective analyses using real-world data Collaboration & Communication Work cross-functionally with medical affairs, epidemiology, health economics, and commercial teams Excellent presentation skills Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
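The claims-database analysis described here typically starts with per-patient utilization rollups. A stdlib-only sketch with toy records; real extracts would come from administrative claims or EHR tables, and the ICD-10 codes are purely illustrative:

```python
from collections import defaultdict

# Toy claim records; field names and codes are illustrative only.
claims = [
    {"patient_id": "P1", "icd10": "E11.9", "allowed_amount": 120.0},
    {"patient_id": "P1", "icd10": "I10",   "allowed_amount": 80.0},
    {"patient_id": "P2", "icd10": "E11.9", "allowed_amount": 200.0},
]

def utilization_by_patient(records):
    """Claim counts and total allowed amount per patient."""
    out = defaultdict(lambda: {"claims": 0, "allowed": 0.0})
    for r in records:
        out[r["patient_id"]]["claims"] += 1
        out[r["patient_id"]]["allowed"] += r["allowed_amount"]
    return dict(out)

usage = utilization_by_patient(claims)
```

In practice the same aggregation would be written in SQL or SAS over millions of rows, but the patient-journey logic is the same.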

Posted 2 days ago

Apply

14.0 - 18.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Role : The role involves creating innovative solutions, guiding development teams, ensuring technical excellence, and driving architectural decisions aligned with company policies. The Solution Designer/Tech Lead will be a key technical advisor, collaborating with onshore teams and leadership to deliver high-impact Data and AI/ML projects. Responsibilities : Design and architect Generative AI solutions leveraging AWS services such as Bedrock, S3, PG Vector, Kendra, and SageMaker. Collaborate closely with developers to implement solutions, providing technical guidance and support throughout the development lifecycle. Lead the resolution of complex technical issues and challenges in AI/ML projects. Conduct thorough solution reviews and ensure adherence to best practices and company standards. Navigate governance processes and obtain necessary approvals for initiatives. Make critical architectural and design decisions aligned with organizational policies and industry best practices. Liaise with onshore technical teams, presenting solutions and providing expert analysis on proposed approaches. Conduct technical sessions and knowledge-sharing workshops on AI/ML technologies and AWS services. Evaluate and integrate emerging technologies and frameworks like LangChain into solution designs. Develop and maintain technical documentation, including architecture diagrams and design specifications. Mentor junior team members and foster a culture of innovation and continuous learning. Collaborate with data scientists and analysts to ensure optimal use of data in AI/ML solutions. Coordinate with clients, data users, and key stakeholders to achieve long-term objectives for data architecture. Stay updated on the latest trends and advancements in AI/ML and cloud and data technologies. Key Qualifications and experience: Extensive experience (14-18 years) in software development and architecture, with a focus on AI/ML solutions. 
Deep understanding of AWS services, particularly those related to AI/ML (Bedrock, SageMaker, Kendra, etc.). Proven track record in designing and implementing data, analytics, reporting and/or AI/ML solutions. Strong knowledge of data structures, algorithms, and software design patterns. Expertise in data management, analytics, and reporting tools. Proficiency in at least one programming language commonly used in AI/ML (e.g., Python, Java, Scala). Familiarity with DevOps practices and CI/CD pipelines. Understanding of AI ethics, bias mitigation, and responsible AI principles. Basic understanding of data pipelines and ETL processes, with the ability to design and implement efficient data flows for AI/ML models. Experience in working with diverse data types (structured, unstructured, and semi-structured) and ability to preprocess and transform data for use in generative AI applications.
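The vector-retrieval side of such designs (PG Vector, embeddings, Kendra) rests on similarity search. A stdlib sketch with toy vectors standing in for real model embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-d "embeddings"; a real design would store model embeddings in
# pgvector and rank with e.g. ORDER BY embedding <=> :query LIMIT k
# (the <=> operator is pgvector's cosine distance).
docs = {"policy_faq": [0.9, 0.1, 0.0], "claims_guide": [0.2, 0.8, 0.1]}
query = [0.85, 0.15, 0.05]

best = max(docs, key=lambda name: cosine(docs[name], query))
```

The highest-similarity document is what a RAG pipeline would feed to the LLM as context.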

Posted 2 days ago

Apply

0 years

0 Lacs

Andhra Pradesh, India

On-site

Role Summary We are seeking an experienced Senior Generative AI Architect to lead the design, development, and deployment of an AI Gateway that connects generative AI applications (developed in Python and C) with enterprise-grade LLM services such as Azure OpenAI and AWS Bedrock. This role is critical to building a robust, scalable infrastructure that enables seamless communication between Gen AI components and LLM endpoints. The ideal candidate is hands-on, with deep expertise in architecting and productionizing Gen AI applications, and a proven ability to guide engineering teams toward high-quality outcomes. Key Responsibilities Architectural Leadership Design and document scalable, reliable, and maintainable architectures for Gen AI applications. Ensure solutions meet production-grade standards and enterprise requirements. Technical Decision Making Evaluate trade-offs in technology choices, design patterns, and frameworks. Align decisions with Gen AI best practices and software engineering principles. Team Guidance Mentor and guide architects and engineers. Foster a collaborative, innovative, and high-performance development environment. Hands-On Development Actively contribute to prototyping and implementation using C and Python. Drive research and development of core AI Gateway components. Product Development Mindset Build a responsible and scalable AI Gateway, considering cost efficiency, security and compliance, upgradeability, and ease of use and integration. Required Qualifications Technical Expertise Extensive experience in API-based projects and full lifecycle deployment of Gen AI/LLM applications. Strong hands-on proficiency in C and practical experience with Python. Cloud & DevOps Expertise in Docker, Kubernetes, and OpenShift for containerization and orchestration. 
Working Knowledge Of Azure AI Services: OpenAI, AI Search, Document Intelligence AWS Services: EKS, SageMaker, Bedrock Security & Access Management Familiarity with Okta for secure identity and access management. LLM & Gen AI Tools Experience with LangChain, LlamaIndex, and OpenAI SDKs in C. Monitoring & Troubleshooting Proven ability to monitor, trace, and debug complex distributed AI systems. Personal Attributes Strong leadership and mentorship capabilities. Excellent communication skills for both technical and non-technical audiences. Problem-solving mindset with attention to detail. Passion for advancing AI technologies in production environments. Preferred Experience Prior leadership in large-scale, production-grade AI initiatives. Experience in enterprise technology projects involving Gen AI.
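The gateway's core routing concern (connecting applications to Azure OpenAI or Bedrock with graceful failover) can be sketched as an ordered-fallback dispatcher. Backend names and the callable interface are hypothetical stand-ins for real SDK clients:

```python
def make_gateway(backends):
    """backends: ordered mapping of name -> callable(prompt) -> str."""
    def invoke(prompt):
        errors = {}
        for name, call in backends.items():
            try:
                # First backend that succeeds wins.
                return {"backend": name, "output": call(prompt)}
            except Exception as exc:  # real code would narrow the exception types
                errors[name] = str(exc)
        return {"backend": None, "errors": errors}
    return invoke

def azure_stub(prompt):
    raise RuntimeError("quota exceeded")  # simulate a throttled endpoint

def bedrock_stub(prompt):
    return f"echo: {prompt}"

gateway = make_gateway({"azure_openai": azure_stub, "aws_bedrock": bedrock_stub})
result = gateway("hello")
```

A production gateway would add retries with backoff, per-backend cost and latency metrics, and identity checks (e.g., via Okta) around this same dispatch loop.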

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies