
1521 Sagemaker Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Greetings from HCL Software, the product development division of HCLTech! HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, Automation, Data & Analytics, Security and Cloud.

About the Unica product: The HCL Unica+ Marketing Platform enables our customers to deliver precision, high-performance marketing campaigns across multiple channels such as social media, AdTech platforms, mobile applications and websites. Unica+ is a data- and AI-first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention.

Note: Are you available for a face-to-face interview on 2nd August (Saturday) at Hinjewadi, Pune?

We are seeking a Senior/Lead Python Developer (Data Science, AI/ML) with 6+ years of strong data science and machine learning experience to deliver AI-driven marketing campaigns.

Qualifications & skills:
- 6-12 years of Python development experience, with at least 4 years in data science and machine learning.
- Experience with Customer Data Platforms (CDPs) such as Treasure Data, Epsilon, Tealium, Adobe or Salesforce is advantageous.
- Experience with AWS SageMaker is advantageous.
- Experience with LangChain and RAG for generative AI is advantageous.
- Expertise in integration tools and frameworks such as Postman, Swagger and API gateways.
- Knowledge of REST, JSON, XML and SOAP is a must.
- Ability to work well within an agile team and apply the related working methods.
- Excellent communication and interpersonal skills.
- A 4-year degree in Computer Science or IT is a must.

Responsibilities:
- Python programming & libraries: proficient in Python, with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization.
- Statistical analysis & modelling: strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis and time-series analysis.
- Data cleaning & preprocessing: expertise in handling messy real-world data, including missing values, outliers, normalization/standardization, feature engineering and data transformation.
- SQL & database management: ability to query and manage data efficiently from relational databases using SQL, ideally with some familiarity with NoSQL databases.
- Exploratory data analysis (EDA): skill in visually and numerically exploring datasets to understand their characteristics and identify patterns, anomalies and relationships.
- Machine learning algorithms: in-depth knowledge of and practical experience with a wide range of ML algorithms, such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality-reduction techniques (PCA).
- Deep learning frameworks: proficiency with at least one major framework such as TensorFlow or PyTorch, including understanding of neural-network architectures (CNNs, RNNs, Transformers) and their application to various problems.
- Model evaluation & optimization: ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model-performance issues (bias-variance trade-off), and apply optimization techniques.
- Deployment & MLOps concepts: understanding of how to deploy machine learning models into production, including API creation, containerization (Docker), model version control, and monitoring.

Travel: 30% +/- travel required
Location: India (Pune preferred)
Compensation: base salary plus bonus.
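The evaluation metrics this posting asks for (precision, recall, F1) reduce to a few ratios over the confusion matrix. A minimal, self-contained sketch with purely illustrative labels (not from the posting):

```python
# Precision, recall, and F1 for a binary classifier, computed from
# true-positive / false-positive / false-negative counts.
def precision_recall_f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Illustrative labels: 3 true positives, 1 false positive, 1 false negative.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r, f = precision_recall_f1(y_true, y_pred)
```

In practice the same numbers come from `sklearn.metrics`; the point is that metric selection (here, balancing precision against recall via F1) is a modelling decision, not a library call.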

Posted 4 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About the position: Persistent is scaling up its global Digital Trust practice. Digital Trust encompasses the domains of Data Privacy, Responsible AI (RAI), GRC (Governance, Risk & Compliance), and other related areas. This rapidly evolving domain sits at the intersection of technology, law, ethics and compliance, and members of the practice get to work on innovative, cutting-edge solutions. We are looking for a highly motivated and technically skilled Responsible AI Testing Analyst with 1-3 years of experience to join our Digital Trust team. In this role, you will conduct technical testing and validation of AI systems or agents against regulatory and ethical standards such as the EU AI Act, AI Verify (Singapore), the NIST AI RMF, and ISO 42001. This is a technical position requiring knowledge of AI/ML models, testing frameworks, fairness auditing, explainability techniques, and the regulatory landscape of Responsible AI.

Role: AI Testing Analyst
Location: All PSL locations
Experience: 1-3 years
Job type: full-time employment

What you'll do:
- Perform technical testing of AI systems and agents using pre-defined test cases aligned with regulatory and ethical standards.
- Conduct model testing for risks such as bias, robustness, explainability and data drift using AI assurance tools or libraries.
- Support the execution of AI impact assessments and document test results for internal and regulatory audits.
- Collaborate with stakeholders to define assurance metrics and ensure adherence to RAI principles.
- Assist in setting up automated pipelines for continuous testing and monitoring of AI/ML models.
- Prepare compliance-aligned reports and dashboards showcasing test results and conformance to RAI principles.

Expertise you'll bring:
- 1 to 3 years of hands-on experience in AI/ML model testing, validation, or AI assurance roles.
- Experience testing AI principles such as fairness, bias detection, robustness, accuracy, explainability, and human oversight.
- Practical experience with tools like AI Fairness 360, SHAP, LIME, the What-If Tool, or commercial RAI platforms.
- Ability to run basic model tests using Python libraries (e.g., scikit-learn, pandas, NumPy, TensorFlow/Keras, PyTorch).
- Understanding of the regulatory implications of high-risk AI systems and how to test for compliance.
- Strong documentation skills to communicate test findings in an auditable, regulation-compliant manner.

Preferred certifications (any one or more):
- AI Verify testing framework training (preferred)
- IBM AI Fairness 360 Toolkit Certification
- AI Certification (Google Cloud): Vertex AI + SHAP/LIME
- ModelOps/MLOps monitoring with bias detection: AWS SageMaker / Azure ML / GCP Vertex AI
- TensorFlow Developer / Python for Data Science and AI / Applied Machine Learning in Python

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our offices are equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven, people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
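One of the simplest bias tests this role describes is a demographic-parity check: compare positive-outcome rates across groups. A minimal sketch with illustrative group labels and outcomes (the real tools named above, such as AI Fairness 360, implement this and much more):

```python
# Demographic-parity gap: the spread between the highest and lowest
# positive-outcome rates across groups. Data below is illustrative.
def selection_rates(outcomes, groups):
    """Positive-outcome rate per group."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(outcomes[i] for i in idx) / len(idx)
    return rates

def demographic_parity_gap(outcomes, groups):
    rates = selection_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

outcomes = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = favorable decision
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(outcomes, groups)
```

A gap near zero suggests the model selects both groups at similar rates; an audit would also examine robustness, explainability, and drift, as the posting notes.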

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

Remote

Job title: Gen AI + Agentic AI Developer
Experience: 4 to 8 years
Location: Remote
Client: Persistent Systems

Job overview: We are hiring experienced and passionate professionals with strong expertise in generative AI and agentic AI frameworks. As a Gen AI + Agentic AI Developer, you will be responsible for designing, developing, and deploying intelligent, autonomous agent-based solutions using cutting-edge technologies. You'll work closely with cross-functional teams to build scalable AI systems powered by modern LLMs and deployed on cloud infrastructure.

Key responsibilities:
- Design, develop, and deploy agentic AI solutions using frameworks such as Microsoft AutoGen, CrewAI, or LangGraph.
- Build robust pipelines and workflows for LLM-based agent orchestration.
- Integrate third-party APIs, tools, and services to enhance autonomous agent behaviors.
- Deploy and manage solutions on AWS infrastructure (Lambda, S3, EC2, SageMaker, etc.).
- Collaborate with machine learning engineers, data scientists, and product teams to deliver and scale intelligent solutions.
- Optimize AI agents for performance, efficiency, and robustness.
- Conduct thorough code reviews, enforce coding best practices, and contribute to architectural decisions.

Required skills & qualifications:
- Strong Python development skills with a focus on modular, clean, maintainable code.
- Hands-on experience with AWS services: S3, EC2, Lambda, API Gateway, SageMaker.
- Practical implementation experience with at least one agentic AI framework: Microsoft AutoGen, CrewAI, or LangGraph.
- Familiarity with large language models (LLMs) such as OpenAI, Claude, or Mistral.
- Experience building multi-agent workflows and managing agent state transitions.
- Solid understanding of asynchronous programming in Python.
- Exposure to FastAPI or Flask for API/service deployment.

Preferred (good to have):
- Experience with LangChain, RAG architectures, or vector databases.
- Familiarity with CI/CD pipelines (e.g., GitHub Actions).
- Experience with Docker and Kubernetes for containerization and orchestration.
- Understanding of reinforcement learning or agent-based simulations.
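The asynchronous-programming skill this posting stresses underpins multi-agent orchestration: agents mostly wait on I/O (LLM calls, tools), so they can run concurrently. A toy `asyncio` sketch; the agent names and work are placeholders, not the AutoGen/CrewAI/LangGraph APIs:

```python
import asyncio

# Run several "agents" concurrently and gather their results in order.
async def agent(name, payload):
    await asyncio.sleep(0)  # yields control, as a real LLM/tool call would
    return f"{name}:{payload}"

async def orchestrate(task):
    # gather() schedules all three coroutines concurrently and returns
    # their results in the order they were passed.
    return await asyncio.gather(
        agent("research", task),
        agent("draft", task),
        agent("review", task),
    )

results = asyncio.run(orchestrate("brief"))
```

Real frameworks add state management, tool calling, and inter-agent messaging on top of this concurrency core.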

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Backend & MLOps Engineer - Integration, API, and Infrastructure Expert

1. Role objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel at deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure.

2. Key responsibilities:
2.1. Create RESTful and/or gRPC APIs for model services.
2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images.
2.3. Develop CI/CD pipelines for model training and deployment.
2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI.
2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines.
2.6. Build secured data ingestion and processing workflows (ETL/ELT).
2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage.

3. Educational qualifications
Essential requirements:
3.1. B.Tech/M.Tech in Computer Science, Information Technology, or Software Engineering.
3.2. Strong foundation in distributed systems, databases, and cloud computing.
3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines.
Professional certifications:
3.4. AWS Solutions Architect / DevOps Engineer Professional.
3.5. Google Cloud Professional ML Engineer or DevOps Engineer.
3.6. Azure AI Engineer or DevOps Engineer Expert.
3.7. Kubernetes Administrator (CKA) or Developer (CKAD).
3.8. Docker Certified Associate.

Core skills & tools
4. Backend development:
4.1. Languages: Python, Go, Java, Node.js, Rust (for performance-critical components).
4.2. Web frameworks: FastAPI, Django, Flask, Spring Boot, Express.js.
4.3. API development: RESTful APIs, GraphQL, gRPC, WebSocket connections.
4.4. Authentication & security: OAuth 2.0, JWT, API rate limiting, encryption protocols.

5. MLOps & model management:
5.1. ML platforms: MLflow, Kubeflow, Apache Airflow, Prefect.
5.2. Model serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML.
5.3. Experiment tracking: Weights & Biases, Neptune, ClearML.
5.4. Feature stores: Feast, Tecton, Amazon SageMaker Feature Store.
5.5. Model monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions.

6. Infrastructure & DevOps:
6.1. Containerization: Docker, Podman, container optimization.
6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift.
6.3. Cloud platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred).
6.4. Infrastructure as code: Terraform, CloudFormation, Pulumi, Ansible.
6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD.
6.6. Web serving: NGINX.

7. Database & storage:
7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications).
7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch.
7.3. Vector databases: Pinecone, Weaviate, Chroma, Milvus.
7.4. Data lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg.
7.5. Object storage: AWS S3, Google Cloud Storage, MinIO.

8. Secure deployment:
8.1. Military-grade security protocols and compliance.
8.2. Air-gapped deployment capabilities.
8.3. Encrypted data transmission and storage.
8.4. Role-based access control (RBAC) and IDAM integration.
8.5. Audit logging and compliance reporting.

9. Edge computing:
9.1. Deployment on naval vessels with air-gapped connectivity.
9.2. Optimization of applications for resource-constrained environments.

10. High-availability systems:
10.1. Mission-critical system design with 99.9% uptime.
10.2. Disaster recovery and backup strategies.
10.3. Load balancing and auto-scaling.
10.4. Failover mechanisms for critical operations.

11. Cross-compatibility requirements:
11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI).
11.2. Develop model loaders for the AI engineer's ONNX/serialized models.
11.3. Provide UI developers with test environments, mock data, and endpoints.
11.4. Support frontend debugging, edge deployment bundling, and user role enforcement.

12. Experience requirements:
12.1. Production experience with cloud platforms and containerization.
12.2. Experience building and maintaining APIs serving millions of requests.
12.3. Knowledge of database optimization and performance tuning.
12.4. Experience with monitoring and alerting systems.
12.5. Architected and deployed large-scale distributed systems.
12.6. Led infrastructure migration or modernization projects.
12.7. Experience with multi-region deployments and disaster recovery.
12.8. Track record of optimizing system performance and cost.
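API rate limiting (item 4.4 above) is commonly implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate. A minimal dependency-free sketch; capacity and refill numbers are illustrative:

```python
import time

# Token-bucket rate limiter: allows bursts up to `capacity`, then
# throttles to `refill_per_sec` requests per second on average.
class TokenBucket:
    def __init__(self, capacity, refill_per_sec, clock=time.monotonic):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Credit tokens accrued since the last call, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With zero refill, only the initial burst of `capacity` requests passes.
bucket = TokenBucket(capacity=2, refill_per_sec=0.0)
decisions = [bucket.allow() for _ in range(3)]
```

In production this logic typically lives in an API gateway or a shared store such as Redis so all service replicas enforce one limit.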

Posted 4 days ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary responsibilities:
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required qualifications:
- Bachelor's degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts, including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred qualifications:
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS-native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
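The ETL/ELT work this posting centers on follows one basic shape: extract raw records, transform (here, aggregate), and load into a target table. A minimal sketch using the standard-library `sqlite3` module; table and column names are illustrative:

```python
import sqlite3

# Extract: raw event rows land in a staging table (in-memory DB here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (member_id TEXT, cost REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("m1", 10.0), ("m1", 5.0), ("m2", 7.5)],
)

# Transform + load: aggregate cost per member into a reporting table.
conn.execute(
    """CREATE TABLE member_costs AS
       SELECT member_id, SUM(cost) AS total_cost
       FROM raw_events
       GROUP BY member_id"""
)
rows = conn.execute(
    "SELECT member_id, total_cost FROM member_costs ORDER BY member_id"
).fetchall()
```

At enterprise scale the same pattern runs on Spark/Flink with a warehouse as the target, plus the data-quality checks and orchestration the posting describes.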

Posted 4 days ago

Apply

0 years

0 Lacs

India

On-site

The candidate should have experience in AI development, including developing, deploying, and optimizing AI and generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end users.

Key prerequisites:
- Experience in AI and generative AI development
- Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP)
- Experience building and fine-tuning generative AI models for applications like chatbots, text summarization, content generation, and image synthesis
- Experience implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT)
- Experience in data ingestion and cleaning
- Feature engineering and data engineering
- Experience designing and implementing data pipelines for ingesting, processing, and storing large datasets
- Experience in model training and optimization
- Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face
- Exposure to optimizing models for performance, scalability, and cost efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI)
- Hands-on experience monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall

AI tools and platform expertise:
- OpenAI, Hugging Face
- MLOps tools
- Generative AI-specific tools and libraries for innovative applications

Technical skills:
1. Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
2. Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and Hugging Face.
3. Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
4. Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
5. Solid understa
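The data ingestion and cleaning step this posting lists typically means imputing missing values and scaling features before training. A minimal dependency-free sketch (mean imputation followed by min-max normalization; the values are illustrative):

```python
# Clean a numeric column: fill missing values with the column mean,
# then rescale into [0, 1] via min-max normalization.
def impute_mean(values):
    present = [v for v in values if v is not None]
    mean = sum(present) / len(present)
    return [mean if v is None else v for v in values]

def min_max(values):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant columns
    return [(v - lo) / span for v in values]

raw = [2.0, None, 4.0, 6.0]
clean = min_max(impute_mean(raw))
```

In practice pandas/scikit-learn (`SimpleImputer`, `MinMaxScaler`) do this at dataset scale, but the arithmetic is the same.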

Posted 4 days ago

Apply

8.0 - 13.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Job description, what you will do: As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member who assists in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with data architects, business SMEs, and data scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic qualifications: Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of Computer Science, IT, or related field experience.

Must-have skills:
- Hands-on experience with big data technologies and platforms such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
- Proficiency in SQL and Python for extracting, transforming, and analyzing complex datasets from relational data stores
- Strong experience in ETL tools such as Apache Spark and various data processing packages, supporting scalable data workflows and machine learning pipeline development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms
- Knowledge of data visualization and analytics tools like Spotfire and Power BI

Preferred qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong knowledge of Oracle / SQL Server, stored procedures, PL/SQL; knowledge of the Linux OS
- Experience implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models
- Skill in developing machine learning models using Python, with hands-on experience in deep learning frameworks including PyTorch and TensorFlow
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of vector databases, including implementation and optimization
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional certifications:
- Databricks certification preferred
- AWS Data Engineer/Architect

Soft skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
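The RAG pipelines mentioned in the preferred qualifications hinge on a retrieval step: rank documents by similarity between a query embedding and document embeddings, then feed the top hits to a language model. A toy sketch with hypothetical 3-dimensional embeddings (real systems use a vector database and learned embeddings):

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Illustrative document embeddings (in practice, from an embedding model).
docs = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.6, 0.8, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
query = [1.0, 0.0, 0.0]

# Retrieval: pick the document most similar to the query.
best = max(docs, key=lambda d: cosine(query, docs[d]))
```

The retrieved text would then be inserted into the LLM prompt; the vector-database knowledge listed above is about doing this ranking efficiently at scale.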

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What You Will Do As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives and, visualizing data to ensure data is accessible, reliable, and efficiently managed. 
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong knowledge of Oracle / SQL Server, stored procedures, PL/SQL; knowledge of the Linux OS
- Knowledge of data visualization and analytics tools like Spotfire and Power BI
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- Databricks Certification (preferred)
- AWS Data Engineer/Architect

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
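The ETL and data-quality responsibilities this role describes can be sketched in miniature. The posting names Spark and Databricks; the following is a framework-agnostic, standard-library Python sketch of the same extract → transform (quality filter) → load pattern. The table and column names are invented purely for illustration:

```python
import csv
import io
import sqlite3

# toy raw input; a real pipeline would read from S3, a landing zone, etc.
RAW = """patient_id,region,visits
1001,EU,5
1002,US,
1003,APAC,3
"""

def extract(text):
    """Parse raw CSV text into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast types and drop rows failing a basic quality rule (missing visits)."""
    clean = []
    for r in rows:
        if not r["visits"]:
            continue  # quality rule: visits must be present
        clean.append((int(r["patient_id"]), r["region"], int(r["visits"])))
    return clean

def load(rows, conn):
    """Load cleaned rows and return a row count as a simple quality check."""
    conn.execute("CREATE TABLE IF NOT EXISTS visits (patient_id INT, region TEXT, visits INT)")
    conn.executemany("INSERT INTO visits VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW)), conn)
print(loaded)  # 2 rows survive the quality filter
```

In Spark the same shape appears as read → filter/cast → write, with the row-count check feeding a data-quality dashboard.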

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What You Will Do As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed.
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong knowledge of Oracle / SQL Server, stored procedures, PL/SQL; knowledge of the Linux OS
- Knowledge of data visualization and analytics tools like Spotfire and Power BI
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- Databricks Certification (preferred)
- AWS Data Engineer/Architect

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 5 days ago

Apply

40.0 years

6 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what is known today.

ABOUT THE ROLE
Amgen is seeking a Sr. Associate IS Business Systems Analyst with strong data science and analytics expertise to join the Digital Workplace Experience (DWX) Automation & Analytics product team. In this role, you will develop, maintain, and optimize machine learning models, forecasting tools, and operational dashboards that support strategic and day-to-day decisions for global digital workplace services. This role is ideal for candidates with hands-on experience building predictive models and working with large operational datasets to uncover insights and deliver automation solutions. You will work alongside product owners, engineers, and service leads to deliver measurable business value using data-driven tools and techniques.

Roles and Responsibilities
- Design, develop, and maintain predictive models, decision support tools, and dashboards using Python, R, SQL, Power BI, or similar platforms.
- Partner with delivery teams to embed data science outputs into business operations, focusing on improving efficiency, reliability, and end-user experience in Digital Workplace services.
- Build and automate data pipelines for data ingestion, cleansing, transformation, and model training using structured and unstructured datasets.
- Monitor, maintain, and tune models to ensure accuracy, interpretability, and sustained business impact.
- Support efforts to operationalize ML models by working with data engineers and platform teams on integration and automation.
- Conduct data exploration, hypothesis testing, and statistical analysis to identify optimization opportunities across services like endpoint health, service desk operations, mobile technology, and collaboration platforms.
- Provide ad hoc and recurring data-driven recommendations to improve automation performance, service delivery, and capacity forecasting.
- Develop reusable components, templates, and frameworks that support analytics and automation scalability across DWX.
- Collaborate with other data scientists, analysts, and developers to implement best practices in model development and lifecycle management.

What we expect of you
We are all different, yet we all use our outstanding contributions to serve patients. We seek a professional with these qualifications.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years in Data Science, Computer Science, IT, or a related field

Must-Have Skills:
- Experience working with large-scale datasets in enterprise environments and with data visualization tools such as Power BI, Tableau, or equivalent
- Strong experience developing models in Python or R for regression, classification, clustering, forecasting, or anomaly detection
- Proficiency in SQL and working with relational and non-relational data sources

Nice-to-Have Skills:
- Familiarity with ML pipelines, version control (e.g., Git), and model lifecycle tools (MLflow, SageMaker, etc.)
- Understanding of statistics, data quality, and evaluation metrics for applied machine learning
- Ability to translate operational questions into structured analysis and model design
- Experience with cloud platforms (Azure, AWS, GCP) and tools like Databricks, Snowflake, or BigQuery
- Familiarity with automation tools or scripting (e.g., PowerShell, Bash, Airflow)
- Working knowledge of Agile/SAFe environments
- Exposure to ITIL practices or ITSM platforms such as ServiceNow

Soft Skills:
- Analytical mindset with attention to detail and data integrity
- Strong problem-solving and critical thinking skills
- Ability to work independently and drive tasks to completion
- Strong collaboration and teamwork skills
- Adaptability in a fast-paced, evolving environment
- Clear and concise documentation habits

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
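Since this role stresses evaluation metrics for applied machine learning, here is a minimal, standard-library sketch of how precision, recall, and F1 fall out of the confusion matrix for a binary classifier. The labels are toy data, not from the posting:

```python
def confusion_counts(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall_f1(y_true, y_pred):
    """Precision = tp/(tp+fp), recall = tp/(tp+fn), F1 = their harmonic mean."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy labels for illustration
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

In practice these come from `sklearn.metrics`; computing them once by hand makes the trade-off between false positives and false negatives concrete.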

Posted 5 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. What you will do As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed.
The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL)
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Strong knowledge of Oracle / SQL Server, stored procedures, PL/SQL; knowledge of the Linux OS
- Knowledge of data visualization and analytics tools like Spotfire and Power BI
- Strong understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
- Databricks Certification (preferred)
- AWS Data Engineer/Architect

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 5 days ago

Apply

3.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Applied AI Engineer – LLMs, LangChain, Agentic Systems
📍 Location: Kochi (first 6 months) → Bangalore (long-term)
🕑 Experience: 2–3 years (must be hands-on)
📡 Mode: Full-time, on-site (hybrid possible after Kochi phase)
🧭 Reports To: AI Innovation Leader

About the Role: We’re hiring a hands-on Applied AI Engineer to join a pioneering AI Innovation initiative at Geojit Financial Services, one of India’s most respected and long-standing financial services firms. As part of Geojit’s Vision 2030 strategy, we’re embedding AI across all layers of the business — from transforming customer experience to automating intelligence in operations, wealth, and capital market platforms. This role offers the unique opportunity to work directly with leadership, build real-world AI systems, and be part of the core team shaping the future of intelligent finance in India.

What You’ll Work On: As part of the AI Innovation Center of Excellence (CoE), your responsibilities include:

🚀 AI Solution Development
- Build and deploy LLM-powered agents for real business use cases across customer service, research automation, internal knowledge retrieval, and document intelligence
- Design agentic AI workflows using LangChain — integrating memory, tools, retrievers, custom functions, and chaining logic
- Evaluate and integrate Text-to-SQL models that translate natural queries into live database queries
- Leverage AWS Bedrock and SageMaker pipelines for experimentation, deployment, and orchestration

📊 Data Engineering & EDA:
- Work with structured and semi-structured data (CSV, SQL, JSON, APIs)
- Perform Exploratory Data Analysis (EDA) using Plotly, Dash, or Streamlit for internal tools and decision support
- Assist in creating reusable tools, dashboards, or data layers to support ongoing experimentation

🛠️ Architecture & MLOps:
- Collaborate with the AI Lead and Infra teams to design scalable MLOps setups
- Create or contribute to prompt optimization frameworks, RAG pipelines, evaluation frameworks, and agent monitoring tools
- Integrate APIs, vector stores, document retrievers, and internal knowledge bases

Required Skills: ✅ 2–3 years hands-on experience with:
- LLMs and GenAI APIs – OpenAI, Cohere, Anthropic, etc.
- LangChain (must-have) – agents, tools, memory, multi-hop chains, and integration experience
- Agentic AI – experience designing task-based autonomous or semi-autonomous workflows
- Text-to-SQL models – understanding of evaluation techniques and production concerns
- AWS ecosystem – especially SageMaker, Bedrock, Lambda, API Gateway
- Data wrangling & visualization – Pandas, SQL, Plotly/Dash/Streamlit

Nice to have:
- Fintech domain exposure
- Participation in AI/ML hackathons or open-source contributions
- Familiarity with vector search, embeddings (FAISS, Weaviate, etc.)
- Frontend integration knowledge for internal tools (basic Streamlit/Dash/Flask)

About the Project & Culture: You’ll be working in a startup-style team embedded inside a legacy enterprise—delivering quick iterations, visible impact, and cross-functional collaboration with leadership. The goal is to show business value in less than 100 days, and scale AI across:
- Capital Markets
- Wealth & Investment Advisory
- Mutual Funds & Insurance Distribution
- Internal Productivity Systems

About Geojit: Geojit Financial Services Ltd is a pioneer in India’s capital markets with a 38-year legacy, 1M+ clients, and presence across 500+ offices. With backing from BNP Paribas and KSIDC, Geojit has been a leader in digital innovation:
- India’s first online trading platform (2000)
- Early movers in mobile trading, Smartfolios, Funds Genie, and Portfolio Management
- Now entering a new AI-led chapter under the leadership of Jayakrishnan Sasidharan (ex-Adobe, Wipro, TCS)
They are committed to delivering personalized, intelligent, and frictionless experiences to millions of investors — powered by AI, data, and design.

Why Join Us?
- Work on real GenAI deployments with measurable ROI
- Direct access to decision-makers, not just product managers
- Be part of India’s next Fintech transformation story
- Hands-on exposure to enterprise-scale AI architecture
- Build something that millions will use — not just a lab prototype

📩 How to Apply: Send your resume via WhatsApp (preferred) or email:
📱 WhatsApp: +91 9008000020
📧 Email: sureshaisales@gmail.com
📝 Subject: “Applied AI Role – Geojit”
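One of the "production concerns" of Text-to-SQL named in this posting is guarding against unsafe generated SQL. Below is a minimal, hedged sketch: `fake_llm_text_to_sql` is a stand-in stub (a real system might call a Bedrock or OpenAI model), the schema is invented, and the guardrail simply refuses anything that is not a single read-only SELECT before executing against SQLite:

```python
import re
import sqlite3

def fake_llm_text_to_sql(question: str) -> str:
    """Stand-in for a real Text-to-SQL model; returns canned SQL for the demo."""
    return "SELECT symbol, SUM(qty) AS total FROM trades GROUP BY symbol ORDER BY symbol"

def is_safe_select(sql: str) -> bool:
    """Allow only a single read-only statement: no writes, DDL, or chained statements."""
    forbidden = re.search(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|ATTACH|PRAGMA)\b", sql, re.I)
    return sql.strip().upper().startswith("SELECT") and ";" not in sql and not forbidden

def answer(question, conn):
    """Generate SQL for a question, validate it, then execute it."""
    sql = fake_llm_text_to_sql(question)
    if not is_safe_select(sql):
        raise ValueError("generated SQL rejected by guardrail")
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INT)")
conn.executemany("INSERT INTO trades VALUES (?, ?)", [("INFY", 10), ("TCS", 5), ("INFY", 2)])
rows = answer("How many shares per symbol?", conn)  # [('INFY', 12), ('TCS', 5)]
```

Production systems layer more on top (read-only database roles, schema allow-lists, query timeouts), but a validation gate between the model and the database is the core idea.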

Posted 5 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview
We are looking for an experienced Solution Architect – AI/ML & Data Engineering to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.

Responsibilities:
- Collaborate with clients to understand business requirements and design robust data solutions.
- Lead the development of end-to-end data pipelines including ingestion, storage, processing, and visualization.
- Architect scalable, secure, and compliant data systems following industry best practices.
- Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
- Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
- Act as a technical liaison between clients and internal teams throughout the project lifecycle.
- Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
- Foster long-term client relationships and identify opportunities for business expansion.
- Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
- Provide hands-on guidance for containerization and deployment using Kubernetes.
- Ensure proper implementation of data governance, modeling, and warehousing.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect or similar role.
- Deep technical expertise in data architecture, engineering, and AI/ML systems.
- Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
- Proven pre-sales experience: technical presentations, solutioning, and RFP support.
- Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
- Exposure to Generative AI frameworks and LLMs like OpenAI and Hugging Face.
- Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE).
- Familiarity with data governance, data modeling, and large-scale data warehousing.
- Excellent problem-solving, communication, and client-facing skills.

Skills & Technology:
- Architecture & Engineering – Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie.
- ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue.
- Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica.
- Streaming: Apache Kafka, Azure Event Hubs, AWS.
- Cloud Platforms: Azure (preferred), AWS, GCP.
- Data Lakes: ADLS, AWS S3, Google Cloud.
- Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE.
- AI/ML & GenAI Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray.
- Inference: TensorFlow Serving, KServe, Seldon.
- Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.).
- DevOps & Deployment – Kubernetes: AKS, EKS, GKE, Open Source K8s, Helm.
- CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps.
(ref:hirist.tech)

Posted 5 days ago

Apply

7.0 - 11.0 years

0 Lacs

haryana

On-site

The ideal candidate for this position should have previous experience in building data science/algorithm-based products, which would be a significant advantage. Experience in handling healthcare data is also desired. An educational qualification of a Bachelor's/Master's in Computer Science, Data Science, or related subjects from a reputable institution is required. With a typical experience of 7-9 years in the industry, the candidate should have a strong background in developing data science models and solutions. The ability to quickly adapt to new programming languages, technologies, and frameworks is essential. A deep understanding of data structures and algorithms is necessary. The candidate should also have a proven track record of implementing end-to-end data science modeling projects and providing guidance and thought leadership to the team. Experience in a consulting environment with a hands-on attitude is preferred. As a Data Science Lead, the primary responsibility will be to lead a team of analysts, data scientists, and engineers to deliver end-to-end solutions for pharmaceutical clients. The candidate is expected to participate in client proposal discussions with senior stakeholders and provide technical thought leadership. Expertise in all phases of model development, including exploratory data analysis, hypothesis testing, feature creation, dimension reduction, model training, selection, validation, and deployment, is required. A deep understanding of statistical and machine learning methods such as logistic regression, SVM, decision tree, random forest, neural network, and regression is essential. Mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, NLP, and their practical implementation to solve business problems is necessary.
The candidate should also be able to implement ML models in an optimized and sustainable framework and gain business understanding in the healthcare domain to develop relevant analytics use cases. In terms of technical skills, the candidate should have expert-level proficiency in programming languages like Python/SQL, along with working knowledge of relational SQL and NoSQL databases such as Postgres and Redshift. Extensive knowledge of predictive and machine learning models, NLP techniques, deep learning, and unsupervised learning is required. Familiarity with data structures, pre-processing, feature engineering, sampling techniques, and statistical analysis is important. Exposure to open-source tools, cloud platforms like AWS and Azure, AI tools like LLMs, and visualization tools like Tableau and Power BI is preferred. If you do not meet every job requirement, the company encourages candidates to apply anyway, as they are dedicated to building a diverse, inclusive, and authentic workplace. Your excitement for the role and potential fit may make you the right candidate for this position or others within the company.
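Logistic regression, the first method this posting names, can be shown from first principles. This is a from-scratch, standard-library sketch fitted by gradient descent on toy, linearly separable data; a real project would use scikit-learn, and the data here is invented purely to show the mechanics:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b for 1-feature logistic regression by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # gradient of the log-loss w.r.t. the logit
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# toy separable data: label is 1 when x is above roughly 2
xs = [0.0, 1.0, 1.5, 2.5, 3.0, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

The decision boundary is the point where the logit w·x + b crosses zero; everything else (SVM, trees, neural networks) differs mainly in how that boundary is parameterized and fitted.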

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You are an experienced Python Backend Engineer with a strong background in AWS and AI/ML. Your primary responsibility will be to design, develop, and maintain Python-based backend systems and AI-powered services. You will be tasked with building and managing RESTful APIs using Django or FastAPI for AI/ML model integration. Additionally, you will develop and deploy machine learning and GenAI models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Your expertise in implementing GenAI pipelines using LangChain will be crucial, and experience with LangGraph is considered a strong advantage. You will leverage various AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment purposes. Collaborating with data scientists, DevOps, and architects to integrate models and workflows into production will be a key aspect of your role. Furthermore, you will be responsible for building and managing CI/CD pipelines for backend and model deployments. Ensuring the performance, scalability, and security of applications in cloud environments will be paramount. Monitoring production systems, troubleshooting issues, and optimizing model and API performance will also fall under your purview. To excel in this role, you must possess at least 5 years of hands-on experience in Python backend development. Your strong experience in building RESTful APIs using Django or FastAPI is essential. Proficiency in AWS cloud services, a solid understanding of ML/AI concepts, and experience with ML libraries are prerequisites. Hands-on experience with LangChain for building GenAI applications and familiarity with DevOps tools and microservices architecture will be beneficial. Additionally, having Agile development experience and exposure to tools like Docker, Kubernetes, Git, Jenkins, Terraform, and CI/CD workflows will be advantageous.
Experience with LangGraph, LLMs, embeddings, and vector databases, as well as knowledge of MLOps tools and practices, is considered a nice-to-have qualification. In summary, as a Python Backend Engineer with expertise in AWS and AI/ML, you will play a vital role in designing, developing, and maintaining intelligent backend systems and GenAI-driven applications. Your contributions will be instrumental in scaling backend systems and implementing AI/ML applications effectively.
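The model-serving pattern this posting describes (a REST endpoint wrapping model inference) can be sketched with only the standard library; in practice FastAPI or Django would replace the hand-rolled handler, and the `/predict` route, payload shape, and scoring function here are illustrative assumptions.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for a trained model; a real service would load a scikit-learn
# pickle or call out to a SageMaker endpoint here.
def predict(features):
    return {"score": sum(features) / max(len(features), 1)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("features", []))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Serve on an ephemeral port in a background thread and make one request.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/predict",
    data=json.dumps({"features": [1, 3]}).encode(),
    headers={"Content-Type": "application/json"},
)
resp = json.loads(urllib.request.urlopen(req).read())
print(resp)  # {'score': 2.0}
server.shutdown()
```

A framework like FastAPI adds request validation, async handling, and OpenAPI docs on top of this same request-in, prediction-out shape.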

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and Agentic enterprise space. In the position of Lead / Staff Machine Learning Engineer, you will be responsible for developing and deploying ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operations of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation. Key Responsibilities: - Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices - Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization - Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection - Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems - Establish engineering standards for model deployment, testing, version control, and code quality - Design and implement monitoring solutions for model performance, data quality, and system health - Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact - Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles Position Requirements: - 8+ years of experience in building and deploying ML model pipelines with a focus on marketing - Expertise in AWS 
services, particularly SageMaker and MLflow, for ML experiment tracking and lifecycle management - Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices - Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies - Track record of leading ML initiatives with measurable marketing impact and strong collaboration skills Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
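One MLOps responsibility listed above, drift detection, can be sketched with a Population Stability Index check over a feature's distribution; PSI is a common heuristic, and the thresholds and synthetic data below are illustrative, not Salesforce's.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # training-time feature distribution
fresh = rng.normal(0.0, 1.0, 5000)      # live data, same distribution
shifted = rng.normal(0.5, 1.0, 5000)    # live data after a mean shift

print(psi(baseline, fresh))    # small: no meaningful drift
print(psi(baseline, shifted))  # larger: would trigger a retraining alert
```

In a production pipeline this check would run on a schedule per feature, with alerts wired into the monitoring stack and a retraining workflow triggered past an agreed threshold.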

Posted 6 days ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking an experienced and visionary individual to play a pivotal role in internal software development at Amgen India. This role is critical in driving the strategy, development, and implementation of software solutions on the global commercial side. You will be responsible for setting strategic direction, clearly defining operations, delivering reusable software solutions for business and engineering teams, and ensuring the successful adoption of internal platforms across Amgen. The successful candidate will lead a team of engineers, product managers, and architects to deliver software applications that enhance our products and services. Roles & Responsibilities: The primary responsibilities of this key leadership position will include but are not limited to the following: Develop strategic vision for software platform services in alignment with the company’s overall strategy. Provide support to the Amgen Technology Executive Leadership and oversee the development of a Community of Practice for software Platforms. Foster a culture of innovation, identify and implement software solutions that drive value to our stakeholders Ensure the adoption of best practices and latest advancements in technologies across functions and business. Drive the design, development and deployment of scalable software platforms and reusable accelerators that enable and increase the value of application and product teams across the enterprise. 
Ensure the security and reliability of software platforms and seamless integration with existing systems. Drive the software platform capabilities implementation, ensuring timely delivery within scope and budget. Collaborate with cross-functional teams to understand demand and develop solutions to meet business needs. Develop and enforce governance frameworks to manage the usage and adoption of software platforms. Lead and mentor a team of Engineers and Architects and foster a culture of continuous development and learning. Monitor team performance and present updates to executive leadership and key stakeholders. Functional Skills: Must-Have Skills: 18 to 23 years of experience in full-stack software engineering and cloud computing with a robust blend of technical expertise, strategic thinking, and leadership abilities focusing on software development. Demonstrated experience in managing large-scale technology projects and teams with a track record of delivering innovative and impactful solutions. Hands-on experience with the latest frameworks and libraries, such as LangChain, LlamaIndex, agentic frameworks, vector databases, and LLMs; experienced with CI/CD and DevOps/MLOps. Hands-on experience with cloud computing services, such as AWS Lambda, container technology, SQL, NoSQL databases, API Gateway, SageMaker, Bedrock, etc. Good-to-Have Skills: Proficient in Python, JavaScript, SQL; Hands-on experience with full-stack software development, NoSQL databases, Docker containers, container orchestration systems, automated testing, and CI/CD DevOps Build a high-performing team of software development experts, foster a culture of innovation, and ensure employee growth and satisfaction to drive long-term organizational success Identify opportunities for process improvements and drive initiatives to enhance the efficiency of the development lifecycle.
Stay updated with the latest industry trends and advancements in software technology, provide strategic leadership, and explore new opportunities for innovation. Be an interdisciplinary team leader who is innovative, accountable, reliable, and able to thrive in a constantly evolving environment. Facilitate technical discussions and decision-making processes within the team. Preferred Professional Certifications Cloud Platform certification (AWS, Azure, GCP), specialized in solution architect, DevOps Platform certification (AWS, Azure, GCP, Databricks) Soft Skills: Exceptional communication and people skills to effectively manage stakeholder relationships and build new partnerships. Excellent verbal and written communication skills/writing skills; active listening skills; attention to detail. Strong process/business writing skills. Experience in people management and passion for mentorship, culture and fostering the development of talent. Ability to translate business and stakeholder feedback into accurate and efficient processes using clear language and format. Strong analytic/critical-thinking and decision-making abilities. Must be flexible and able to manage multiple activities and priorities with minimal direction in a rapidly changing and demanding environment. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 6 days ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role : AI Governance Director Department : Legal, AI Governance Job Location : Bangalore/Mumbai Experience Range : 12+ Years Job Summary The AI Governance Director at LTIMindtree will play a pivotal role in shaping and safeguarding the organization’s enterprise-wide AI Governance and Compliance Program. This position acts as a critical bridge between business, technology, cybersecurity, data privacy, and governance functions, ensuring that AI is deployed responsibly, ethically, and in alignment with global regulatory standards. This role will drive the development and continuous evolution of AI policies, conduct Responsible AI assessments, ensure regulatory compliance, and champion stakeholder education. By embedding Responsible AI (RAI) principles into the organization’s DNA, the Director will ensure LTIMindtree remains a trusted and forward-thinking leader in the IT services and consulting industry. This role owns the enterprise accountability framework for Responsible AI, with binding authority to enforce compliance. The role will mandate collaboration with various stakeholders and the drafting of standards, toolkits, and frameworks required for Responsible AI adoption. The role will be responsible for championing adoption of governance practices by embedding controls into business workflows, driving cultural change, and measuring policy uptake across teams. Key Responsibilities 1) AI Compliance Strategy & Governance Design and lead the enterprise-wide Responsible AI governance framework adoption. Develop compliance roadmaps for AI/ML initiatives in collaboration with business and technical stakeholders. Collaborate and coordinate with business and IT leadership for governing AI Risk & Ethics governance. Be a part of and provide inputs to the AI governance board Define and institutionalize “AI risk appetite” and “compliance thresholds” for AI/ML deployments.
As part of the AI governance office charter, manage the “enterprise-wide AI governance framework” aligned with the EU AI Act, NIST AI RMF, OECD AI Principles, and other emerging regulations. Implement, manage, and govern the AI assurance framework 2) Policy Development & Implementation Map and maintain the regulatory landscape in line with the Responsible AI framework Draft and maintain AI-related policies, procedures, and controls across the organization. Work with the AI governance office to maintain regulatory compliance Ensure AI governance aligns with internal policies and external standards like ISO, GDPR, HIPAA, AI regulations, and client-specific requirements. Build and manage standard operating procedures (SOPs) and toolkits for AI lifecycle management and risk controls. Collaborate with and assist InfoSec to integrate AI compliance into DevSecOps & MLOps pipelines 3) Responsible AI Framework Implementation, Governance & Oversight Manage and improve the Responsible AI assessment frameworks tailored for AI use cases (e.g., bias, security, explainability, and related risks). Collaborate with Technology teams to assess AI models and recommend mitigations. Collaborate with Technology and Quality Assurance teams to implement the Responsible AI testing framework Own and represent AI governance for internal and external audits Maintain the AI risk register, including use-case risk profiling and residual risk monitoring. Implement “AI audit mechanisms” (model monitoring, impact assessments) Institutionalize AI impact assessments spanning the AI inventory, risk categorization, and AI assurance assessments Ensure all AI systems adopt the AI impact assessment framework through the AI lifecycle Implement, institutionalize, and monitor the AI system approval process 4) Regulatory Monitoring and Engagement Track and analyze global regulatory developments (e.g., EU AI Act, NIST AI RMF, OECD Guidelines, India’s DPDP Act) along with the Privacy office and AI governance office.
Map and maintain the regulatory landscape in line with the Responsible AI framework Act as liaison to legal and government affairs teams to assess the impact of evolving laws. Engage with industry bodies (Partnership on AI, IEEE, ISO) to shape AI standards. Prepare compliance documentation and assist in regulatory or client audits involving AI. 5) Training and Culture Building Own the design and roll-out of Responsible AI training modules across technical, business, and executive audiences. Promote awareness of AI ethics and a responsible-innovation culture across the organization. Drive change management and an accountability culture through internal campaigns and workshops. Create “AI playbooks” and “AI toolkits” for AI development and deployment teams. 6) Client Engagement & Advisory Advise clients on the “Responsible AI framework” and “AI governance framework”. Support pre-sales & proposals with AI governance insights. Collaborate with the Delivery Excellence team and project teams to ensure AI solutions meet client contractual and regulatory obligations. 7) Accountability & Enforcement Own end-to-end accountability for implementing the Responsible AI framework, AI governance, AI assurance, AI literacy, Responsible AI toolkit adoption, AI risk management, and AI compliance breaches. Escalate AI deployments that fail risk/compliance thresholds to the AI governance office/AIGB. 8) Adoption & Change Management Drive enterprise-wide adoption of Responsible AI practices, AI policies, and Responsible AI impact assessments through: AI impact assessments; mandatory compliance gates in AI project lifecycles (e.g., ethics review before model deployment); and integration with existing workflows (e.g., SDLC, procurement, sales). Define and track adoption KPIs (e.g., “% of AI projects passing RAI audits”).
Key Competencies Domain: Strong understanding of Responsible AI framework and AI governance Domain: Understanding of AI regulations (EU AI Act, NIST RMF), AI ethics Technical: AI/ML lifecycle, MLOps, XAI, AI security, Agentic AI, GRC tools Technical: AI systems assessments and defining assessment parameters and standards Leadership: Stakeholder influence, compliance strategy, cross-functional collaboration Ability to adopt new technologies and have experience in putting together a compliance framework Ability to understand frameworks and translate them into process and enable the organization for effective adoption via frameworks, toolkits, guidelines etc. Excellent communication skills Excellent presentation skills Excellent collaborative skills Excellent research skills Ability to come up with frameworks for new tech adoption Proactively take on ownership of tasks and take them to closure Required Qualifications 12-18 years in Information Technology, Compliance, Technical governance, Risk management, with 3+ years in AI/ML-related domains. Strong knowledge of AI regulatory frameworks (EU AI Act, NIST AI RMF, OECD AI Principles). Experience working with cross-functional teams (Delivery, InfoSec, Legal, Data Privacy). Familiarity with AI/ML model lifecycle (training, validation, testing, deployment, monitoring). Preferred Qualifications (Optional) Background in Law, Public Policy, Data Governance, or AI Ethics. Certifications in AI Governance (AIGB, IAPP CIPM/CIPT, MIT RAII), Privacy (CIPP/E) Experience in Global IT services/consulting firms/product companies Exposure to data-centric AI product governance or AI MLOps platforms (e.g., Azure ML, SageMaker, DataRobot), Agentic AI implementation, etc.

Posted 6 days ago

Apply

5.0 years

0 Lacs

Greater Bengaluru Area

On-site

What if the work you did every day could impact the lives of people you know? Or all of humanity? At Illumina, we are expanding access to genomic technology to realize health equity for billions of people around the world. Our efforts enable life-changing discoveries that are transforming human health through the early detection and diagnosis of diseases and new treatment options for patients. Working at Illumina means being part of something bigger than yourself. Every person, in every role, has the opportunity to make a difference. Surrounded by extraordinary people, inspiring leaders, and world changing projects, you will do more and become more than you ever thought possible. Position Summary We are seeking a highly skilled Senior Data Engineer Developer with 5+ years of experience to join our talented team in Bangalore. In this role, you will be responsible for designing, implementing, and optimizing data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Additionally, you will bring strong domain expertise in operations organizations, with a focus on supply chain and manufacturing functions. If you're a seasoned data engineer with a proven track record of delivering impactful data solutions in operations contexts, we want to hear from you. Responsibilities Lead the design, development, and optimization of data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Apply strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing, to understand data requirements and deliver tailored solutions. Utilize big data processing frameworks such as Apache Spark to process and analyze large volumes of operational data efficiently. Implement data transformations, aggregations, and business logic to support analytics, reporting, and operational decision-making. 
Leverage cloud-based data platforms such as Snowflake to store and manage structured and semi-structured operational data at scale. Utilize dbt (Data Build Tool) for data modeling, transformation, and documentation to ensure data consistency, quality, and integrity. Monitor and optimize data pipelines and ETL processes for performance, scalability, and reliability in operations contexts. Conduct data profiling, cleansing, and validation to ensure data quality and integrity across different operational data sets. Collaborate closely with cross-functional teams, including operations stakeholders, data scientists, and business analysts, to understand operational challenges and deliver actionable insights. Stay updated on emerging technologies and best practices in data engineering and operations management, contributing to continuous improvement and innovation within the organization. All listed requirements are deemed as essential functions to this position; however, business conditions may require reasonable accommodations for additional task and responsibilities. Preferred Experience/Education/Skills Bachelor's degree in Computer Science, Engineering, Operations Management, or related field. 5+ years of experience in data engineering, with proficiency in Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing. Strong domain expertise in life sciences manufacturing equipment, with a deep understanding of industry-specific challenges, processes, and technologies. Experience with big data processing frameworks such as Apache Spark and cloud-based data platforms such as Snowflake. Hands-on experience with data modeling, ETL development, and data integration in operations contexts. Familiarity with dbt (Data Build Tool) for managing data transformation and modeling workflows. 
Familiarity with reporting and visualization tools like Tableau, Power BI, etc. Good understanding of advanced data engineering and data science practices and technologies like PySpark, SageMaker, Cloudera, MLflow, etc. Experience with SAP, SAP HANA and Teamcenter applications is a plus. Excellent problem-solving skills, analytical thinking, and attention to detail. Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and operations stakeholders. Eagerness to learn and adapt to new technologies and tools in a fast-paced environment. We are a company deeply rooted in belonging, promoting an inclusive environment where employees feel valued and empowered to contribute to our mission. Built on a strong foundation, Illumina has always prioritized openness, collaboration, and seeking alternative perspectives to propel innovation in genomics. We are proud to confirm a zero-net gap in pay, regardless of gender, ethnicity, or race. We also have several Employee Resource Groups (ERG) that deliver career development experiences, increase cultural awareness, and offer opportunities to engage in social responsibility. We are proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, color, gender, religion, marital status, domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition, sexual orientation, pregnancy, military or veteran status, citizenship status, and genetic information. Illumina conducts background checks on applicants for whom a conditional offer of employment has been made. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable local, state, and federal laws. Background check results may potentially result in the withdrawal of a conditional offer of employment.
The background check process and any decisions made as a result shall be made in accordance with all applicable local, state, and federal laws. Illumina prohibits the use of generative artificial intelligence (AI) in the application and interview process. If you require accommodation to complete the application or interview process, please contact accommodations@illumina.com. To learn more, visit: https://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf. The position will be posted until a final candidate is selected or the requisition has a sufficient number of qualified applicants. This role is not eligible for visa sponsorship.
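The transformation and aggregation work this posting describes (Spark, Snowflake, dbt) can be sketched in pandas, whose dataframe operations mirror the same shape; the plant/SKU data and the defect-rate metric below are hypothetical, and a real pipeline would read from Snowflake or a data lake rather than an in-memory frame.

```python
import pandas as pd

# Hypothetical manufacturing order data.
orders = pd.DataFrame({
    "plant": ["BLR", "BLR", "HYD", "HYD"],
    "sku": ["A1", "A2", "A1", "A1"],
    "qty": [100, 50, 200, 25],
    "defects": [2, 0, 10, 1],
})

# Aggregation plus a derived metric: the kind of business logic a dbt model
# would encode as SQL, or a Spark job would run at scale.
summary = (orders.groupby("plant", as_index=False)
                 .agg(total_qty=("qty", "sum"),
                      total_defects=("defects", "sum")))
summary["defect_rate"] = summary["total_defects"] / summary["total_qty"]
print(summary)
```

In dbt this would live as a versioned, documented model with tests on `defect_rate`, which is how the posting's "data consistency, quality, and integrity" requirement is usually enforced.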

Posted 6 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🧠 Job Title: Senior Machine Learning Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 4–8 years Education : B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields 🚀 About Darwix AI Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI, and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+, analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast. As a Senior Machine Learning Engineer, you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights. 🎯 Role Overview This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value. 🧪 Key Responsibilities🔬 1.
Model Design, Training, and Optimization Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources. Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems. Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models. 📊 2. Data Engineering & Feature Extraction Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks. Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks. Automate data collection from APIs, CRMs, sales transcripts, and call logs. ⚙️ 3. Productionizing ML Pipelines Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks). Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines. Ensure production readiness: logging, monitoring, rollback, and fail-safes. 📈 4. Experimentation & Evaluation Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops. Continuously optimize model performance (latency, accuracy, precision-recall trade-offs). Implement drift detection and re-training pipelines for models in production. 🔁 5. Collaboration with Product & Engineering Translate business problems into ML problems and align modeling goals with user outcomes. Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features. Contribute to the product roadmap with ML-driven ideas and prototypes. 🛠️ 6. Innovation & Technical Leadership Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques. Drive innovation in voice-to-insight systems (ASR + Diarization + NLP). 
Mentor junior engineers and contribute to best practices in ML development and deployment. 🧰 Tech Stack🔧 Languages & Frameworks Python (core), SQL, Bash PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM 🧠 ML & AI Ecosystem Transformers, RNNs, CNNs, CRFs BERT, RoBERTa, GPT-style models OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude FAISS, Pinecone, Qdrant, LlamaIndex ☁️ Deployment & Infrastructure Docker, Kubernetes, GitHub Actions, Jenkins AWS (EC2, Lambda, S3, SageMaker), GCP, Azure Redis, PostgreSQL, MongoDB 📊 Monitoring & Experimentation MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana 👨‍💼 Qualifications🎓 Education Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines. Certifications in advanced ML, data science, or AI are a plus. 🧑‍💻 Experience 4–8 years of hands-on experience in applied machine learning. Demonstrated success in deploying models to production at scale. Deep familiarity with transformer-based architectures and model evaluation. ✅ You’ll Excel In This Role If You… Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration. Obsess over clean, maintainable, reusable code and pipelines. Think from first principles and challenge model assumptions when they don’t work. Are deeply curious and have built multiple projects just because you wanted to know how something works. Are comfortable working with ambiguity, fast timelines, and real-time data challenges. Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos. 💼 What You’ll Get at Darwix AI Work with some of the brightest minds in AI , product, and design. Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases. Direct mentorship from senior architects and AI scientists. Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory. 
Opportunity to shape the future of a global-first AI startup built from India. Hands-on experience with the most advanced tech stack in applied ML and production AI. Front-row seat to a generational company that is redefining enterprise AI. 📩 How to Apply Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?” Email: people@darwix.ai Subject: Senior ML Engineer – Application 🔐 Final Notes We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you’re looking for a role where you own real impact , push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you. Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
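The retrieval step behind the RAG pipelines and vector search this role centers on reduces to nearest-neighbour lookup over embeddings. A NumPy sketch of cosine-similarity top-k follows; FAISS or Pinecone (both in the listed stack) would replace this at scale, and the toy three-dimensional "embeddings" are illustrative, not real model output.

```python
import numpy as np

# Toy document "embeddings"; a real system would encode text with a
# sentence-embedding model into hundreds of dimensions.
docs = ["pricing objection handling", "refund policy", "product demo script"]
emb = np.array([[0.9, 0.1, 0.0],
                [0.1, 0.9, 0.1],
                [0.0, 0.2, 0.9]])

def top_k(query_vec, k=2):
    # Cosine similarity against every document, then take the k best.
    sims = emb @ query_vec / (np.linalg.norm(emb, axis=1)
                              * np.linalg.norm(query_vec))
    idx = np.argsort(-sims)[:k]
    return [(docs[i], float(sims[i])) for i in idx]

# A query vector close to the "pricing" direction retrieves that doc first.
results = top_k(np.array([0.8, 0.2, 0.1]))
print(results[0][0])  # pricing objection handling
```

In a full RAG pipeline the retrieved passages are then stuffed into the LLM prompt; approximate-index libraries exist precisely because this brute-force scan stops scaling past a few hundred thousand documents.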

Posted 6 days ago

Apply

12.0 years

3 - 7 Lacs

Hyderābād

On-site

Enterprise Architect (LLMs, GenAI, AI/ML) Experience: 12+ years Location: India Note: This role requires flexibility to travel or relocate to Abu Dhabi (UAE) for onsite client requirements (typically 2 to 3 months at a time). About the Role NorthBay Solutions seeks an Enterprise Architect with proven GenAI leadership, a strong AI/ML foundation, and a solid enterprise architecture background. We want seasoned professionals with deployment experience across on-premises, cloud, and hybrid environments. Core Responsibilities Design end-to-end AI-powered enterprise solutions integrating traditional systems with AI/ML Lead 8–12 engineers across full-stack, DevOps, and AI/ML specializations in Agile environments Drive technical decisions spanning infrastructure, databases, APIs, microservices, and AI components Technical Requirements GenAI Leadership (3+ Years) – PRIMARY Expert-level LLM experience (LLaMA, Mistral, GPT) including fine-tuning and deployment Agentic AI, multi-agent systems, Agentic RAG implementations Vector databases, LangChain, LangGraph, and modern GenAI toolchains Advanced prompt engineering and Chain-of-Thought techniques 3+ production GenAI applications successfully implemented AI/ML Foundation (4–5 Years) Hands-on AI/ML experience building production systems with Python, TensorFlow, PyTorch ML model lifecycle management from development to deployment and monitoring Integration of AI/ML models with existing enterprise architectures Enterprise Architecture & Deployment (8+ Years) Full-stack development with the MERN stack or equivalent Deployment experience across on-premises, cloud, and hybrid environments Kubernetes expertise for container orchestration and deployment Database proficiency: SQL (PostgreSQL/MySQL) and NoSQL (MongoDB/DynamoDB) API development: RESTful services, GraphQL, microservices architecture DevOps experience: Docker, CI/CD pipelines, infrastructure automation Cloud & Infrastructure Strong cloud experience (AWS/Azure/GCP) with ML services
AWS: SageMaker, Bedrock, Lambda, API Gateway or equivalent Hybrid architecture design combining on-prem, cloud, and AI/ML systems Proven Delivery 7+ complex projects delivered across enterprise systems and AI/ML solutions Team leadership experience managing diverse technical teams Requirements (Good to have) AWS/Azure/GCP certifications (Solutions Architect + ML Specialty preferred) Strong communication skills bridging business and technical requirements Agile/Scrum leadership experience with measurable team performance improvements Ideal Candidate Priority expertise: GenAI Leadership, AI/ML Foundation, and Enterprise Architecture with deployment experience across on-premises, cloud, and hybrid environments.
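Of the techniques listed, Chain-of-Thought prompting is the most mechanical to illustrate: a few-shot prompt interleaves worked reasoning with the new question before calling the LLM. The template and example text below are illustrative assumptions, not NorthBay's material.

```python
def cot_prompt(question, examples):
    """Build a few-shot chain-of-thought prompt (template is illustrative)."""
    parts = []
    for q, reasoning, answer in examples:
        # Each example shows the reasoning chain the model should imitate.
        parts.append(f"Q: {q}\nLet's think step by step. {reasoning}\nA: {answer}")
    # The final question ends mid-chain so the model continues the reasoning.
    parts.append(f"Q: {question}\nLet's think step by step.")
    return "\n\n".join(parts)

prompt = cot_prompt(
    "A plant ships 40 units per day. How many units in 3 weeks?",
    [("A team closes 5 deals per week. How many deals in 4 weeks?",
      "5 deals per week times 4 weeks is 20 deals.", "20")],
)
print(prompt)
```

This string would then be sent to whichever model endpoint (Bedrock, a hosted LLaMA, etc.) the architecture uses; frameworks like LangChain wrap exactly this templating step.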

Posted 6 days ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site

Tezo is a new-generation Digital & AI solutions provider with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence.

We are seeking a highly experienced and dynamic Practice Head – Data Science & AI to lead our data practice in Hyderabad. This role is ideal for a technology leader with a strong foundation in Data Science, Artificial Intelligence (AI), and Machine Learning (ML), along with proven experience in building and scaling data practices. The ideal candidate will also have strong business acumen, with experience in solution selling and pre-sales.

Key Responsibilities:

Leadership & Strategy: Define and drive the strategic vision for the Data Science and AI practice. Build, lead, and mentor a high-performing team of data scientists, ML engineers, and AI experts. Collaborate with cross-functional teams to integrate data-driven solutions into broader business strategies.

Technical Expertise: Lead the design and delivery of advanced AI/ML solutions across various domains. Stay abreast of industry trends, emerging technologies, and best practices in AI, ML, and data science. Provide technical guidance and hands-on support as needed for key initiatives.

Practice Development: Establish frameworks, methodologies, and best practices to scale the data science practice. Define and implement reusable components, accelerators, and IP for efficient solution delivery.

Client Engagement & Pre-Sales: Support business development by working closely with sales teams to identify opportunities, create proposals, and deliver presentations. Engage in solution selling by understanding client needs and proposing tailored AI/ML-based solutions. Build strong relationships with clients and act as a trusted advisor on their data journey.

Required Skills & Experience:
12+ years of overall experience, with at least 5 years leading data science/AI teams.
Proven experience in setting up or leading a data science or AI practice.
Strong hands-on technical background in AI, ML, NLP, predictive analytics, and data engineering.
Experience with tools and platforms such as Python, R, TensorFlow, PyTorch, Azure ML, AWS SageMaker, etc.
Strong understanding of data strategy, governance, and architecture.
Demonstrated success in solutioning and pre-sales engagements.
Excellent communication, leadership, and stakeholder management skills.

Posted 6 days ago

Apply

0.0 years

2 - 9 Lacs

Gurgaon

On-site

About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role: Gartner is seeking a talented and passionate MLOps Engineer to join our growing team. In this role, you will be responsible for building Python- and Spark-based ML solutions that ensure the reliability and efficiency of our machine learning systems in production. You will collaborate closely with data scientists to operationalize existing models and optimize our ML workflows. Your expertise in Python, Spark, model inferencing, and AWS services will be crucial in driving our data-driven initiatives.

What you’ll do:
Develop ML inferencing and data pipelines with AWS tools (S3, EMR, Glue, Athena).
Develop Python APIs using frameworks like FastAPI and Django.
Deploy and optimize ML models on SageMaker and EKS.
Implement IaC with Terraform and CI/CD for seamless deployments.
Ensure the quality, scalability, and performance of APIs.
Collaborate with product managers, data scientists, and other engineers for smooth operations.
Communicate technical insights clearly and support production troubleshooting when needed.

What you’ll need: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

Must have:
0-2 years of experience building data and MLOps pipelines using Python and Spark.
Strong proficiency in Python; exposure to Spark is good to have.
Hands-on experience with RESTful development using Python frameworks like FastAPI and Django.
Experience with Docker and Kubernetes (EKS) or SageMaker.
Experience with CloudFormation or Terraform for deploying and managing AWS resources.
Strong problem-solving and analytical skills.
Ability to work effectively within an agile environment.

Who you are:
Bachelor’s degree or foreign equivalent degree in Computer Science or a related field required.
Excellent communication and prioritization skills.
Able to work independently or within a team, proactively, in a fast-paced Agile-Scrum environment.
Owns success – takes responsibility for successful delivery of solutions.
Strong desire to improve their skills in software testing and technologies.

Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles. #LI-AJ4

Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations.
We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive — working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101728

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence.
Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
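The inference-pipeline duties listed in the posting above (validate a request, run the model, return predictions) have a common shape regardless of whether the model sits behind FastAPI, SageMaker, or EKS. A pure-Python sketch of that shape, using a hypothetical linear model in place of a real one (the weights, field names, and handler are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PredictRequest:
    features: list[float]

# Hypothetical stand-in for a model artifact loaded from S3 / SageMaker.
WEIGHTS = [0.5, -0.25, 1.0]
BIAS = 0.1

def validate(payload: dict) -> PredictRequest:
    """Reject malformed requests before they reach the model."""
    features = payload.get("features")
    if not isinstance(features, list) or len(features) != len(WEIGHTS):
        raise ValueError(f"expected {len(WEIGHTS)} numeric features")
    return PredictRequest(features=[float(x) for x in features])

def predict(req: PredictRequest) -> float:
    """Tiny linear model standing in for the real inference call."""
    return sum(w * x for w, x in zip(WEIGHTS, req.features)) + BIAS

def handler(payload: dict) -> dict:
    """The body of a FastAPI route, sketched without the framework."""
    req = validate(payload)
    return {"prediction": predict(req)}

result = handler({"features": [1.0, 2.0, 3.0]})
```

In a real service, `validate` would typically be a Pydantic model bound to a FastAPI route, and `predict` a call into a loaded model or a SageMaker endpoint.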

Posted 6 days ago

Apply

8.0 years

4 - 5 Lacs

Chennai

On-site

Gen AI Engineer

Work Mode: Hybrid
Work Location: Chennai / Hyderabad
Work Timing: 2 PM to 11 PM
Primary skills: Gen AI (Python, AWS Bedrock, Claude, SageMaker, machine learning experience)

8+ years of full-stack development experience
5+ years of AI/Gen AI development
Strong proficiency in JavaScript/TypeScript, Python, or similar languages
Experience with modern frontend frameworks (React, Vue.js, Angular)
Backend development experience with REST APIs and microservices
Knowledge of AWS services, specifically AWS Bedrock and SageMaker
Experience with generative AI models, LLM integration, and machine learning
Understanding of prompt engineering and model optimization
Hands-on experience with foundation models (Claude, GPT, LLaMA, etc.)
Experience with retrieval-augmented generation (RAG)
Knowledge of vector databases and semantic search
AWS cloud platform expertise (Lambda, API Gateway, S3, RDS, etc.)
Knowledge of financial regulatory requirements and risk frameworks
Experience integrating AI solutions into financial workflows or trading systems
Published work or patents in financial AI or applied machine learning

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
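The retrieval-augmented generation requirement in the posting above ultimately comes down to feeding retrieved passages into a prompt before invoking a foundation model such as Claude via Bedrock. A minimal, model-agnostic sketch of that prompt-assembly step (the template wording and example passages are illustrative only):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Assemble a grounded prompt from retrieved passages."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the numbered context below. "
        "Cite passage numbers in your answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What margin rules apply to equity trades?",
    ["Reg T sets initial margin at 50%.", "Maintenance margin is 25%."],
)
```

The resulting string would then be sent to the model (e.g., through the Bedrock runtime API); numbering the passages makes the model's citations checkable against the retrieved sources.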

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

General Summary: The Senior AI Engineer (2–5 years' experience) is responsible for designing and implementing intelligent, scalable AI solutions with a focus on Retrieval-Augmented Generation (RAG), Agentic AI, and Modular Cognitive Processes (MCP). This role is ideal for individuals who are passionate about the latest AI advancements and eager to apply them in real-world applications. The engineer will collaborate with cross-functional teams to deliver high-quality, production-ready AI systems aligned with business goals and technical standards.

Essential Duties & Responsibilities:
Design, develop, and deploy AI-driven applications using RAG and Agentic AI frameworks.
Build and maintain scalable data pipelines and services to support AI workflows.
Implement RESTful APIs using Python frameworks (e.g., FastAPI, Flask) for AI model integration.
Collaborate with product and engineering teams to translate business needs into AI solutions.
Debug and optimize AI systems across the stack to ensure performance and reliability.
Stay current with emerging AI tools, libraries, and research, and integrate them into projects.
Contribute to the development of internal AI standards, reusable components, and best practices.
Apply MCP principles to design modular, intelligent agents capable of autonomous decision-making.
Work with vector databases, embeddings, and LLMs (e.g., GPT-4, Claude, Mistral) for intelligent retrieval and reasoning.
Participate in code reviews, testing, and validation of AI components using frameworks like pytest or unittest.
Document technical designs, workflows, and research findings for internal knowledge sharing.
Adapt quickly to evolving technologies and business requirements in a fast-paced environment.

Knowledge, Skills, and/or Abilities Required:
2–5 years of experience in AI/ML engineering, with at least 2 years in RAG and Agentic AI.
Strong Python programming skills with a solid foundation in OOP and software engineering principles.
Hands-on experience with AI frameworks such as LangChain, LlamaIndex, Haystack, or Hugging Face.
Familiarity with MCP (Modular Cognitive Processes) and its application in agent-based systems.
Experience with REST API development and deployment.
Proficiency in CI/CD tools and workflows (e.g., Git, Docker, Jenkins, Airflow).
Exposure to cloud platforms (AWS, Azure, or GCP) and services like S3, SageMaker, or Vertex AI.
Understanding of vector databases (e.g., OpenSearch, Pinecone, Weaviate) and embedding techniques.
Strong problem-solving skills and the ability to work independently or in a team.
Interest in exploring and implementing cutting-edge AI tools and technologies.
Experience with SQL/NoSQL databases and data manipulation.
Ability to communicate technical concepts clearly to both technical and non-technical audiences.

Educational/Vocational/Previous Experience Recommendations:
Bachelor's or Master's degree in a related field.
2+ years of relevant experience.

Working Conditions: Hybrid – Pune location
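The agentic pattern referenced above (modular agents capable of autonomous decision-making) typically follows a loop: choose an action, call a tool, observe the result, and repeat until done. A minimal sketch with a scripted policy standing in for an LLM (the tool names and the policy function are invented for illustration):

```python
from typing import Callable

# Tool registry: each tool maps a string argument to a string observation.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup": lambda q: {"capital of france": "Paris"}.get(q.lower(), "unknown"),
}

def scripted_policy(question: str, observations: list[str]) -> tuple[str, str]:
    """Stand-in for an LLM policy: retrieve once, then finish."""
    if not observations:
        return ("lookup", question)
    return ("finish", observations[-1])

def run_agent(question: str, max_steps: int = 5) -> str:
    """Act/observe loop with a step budget to guarantee termination."""
    observations: list[str] = []
    for _ in range(max_steps):
        action, arg = scripted_policy(question, observations)
        if action == "finish":
            return arg
        observations.append(TOOLS[action](arg))
    return "gave up"

answer = run_agent("capital of France")
```

Frameworks such as LangChain implement this same loop, with a model call choosing the next action and real tools (retrievers, APIs, databases) in the registry.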

Posted 6 days ago

Apply