
1521 SageMaker Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title: Data Scientist
Experience: 6 to 10 Years
Location: Noida, Bangalore, Pune
Employment Type: Full-time

Job Summary
We are seeking a highly skilled and experienced Data Scientist with a strong background in Natural Language Processing (NLP), Generative AI, and Large Language Models (LLMs). The ideal candidate will be proficient in Python and have hands-on experience working with both Google Cloud Platform (GCP) and Amazon Web Services (AWS). You will play a key role in designing, developing, and deploying AI-driven solutions to solve complex business problems.

Key Responsibilities
- Design and implement NLP and Generative AI models for use cases such as chatbots, text summarization, question answering, and information extraction.
- Fine-tune and deploy Large Language Models (LLMs) using frameworks such as Hugging Face Transformers or LangChain.
- Conduct experiments, evaluate model performance, and implement improvements for production-scale solutions.
- Collaborate with cross-functional teams including product managers, data engineers, and ML engineers.
- Deploy and manage ML models on cloud platforms (GCP and AWS), using services such as Vertex AI, SageMaker, Lambda, and Cloud Functions.
- Build and maintain ML pipelines for training, validation, and deployment using CI/CD practices.
- Communicate complex technical findings in a clear and concise manner to both technical and non-technical stakeholders.

Required Skills
- Strong proficiency in Python and common data science/ML libraries (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Proven experience in Natural Language Processing (NLP) techniques (NER, sentiment analysis, embeddings, topic modeling, etc.).
- Hands-on experience with Generative AI and LLMs (e.g., GPT, BERT, T5, LLaMA, Claude, Gemini).
- Experience with LLMOps, prompt engineering, and fine-tuning pre-trained language models.
- Experience with GCP (BigQuery, Vertex AI, Cloud Functions, etc.) and/or AWS (SageMaker, S3, Lambda, etc.).
- Familiarity with containerization (Docker), orchestration (Kubernetes), and model deployment best practices.

(ref:hirist.tech)
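
For context, this is roughly what the text-summarization use case named above looks like with Hugging Face Transformers: a minimal sketch, assuming `transformers` is installed; the checkpoint is an illustrative public model, not one mandated by the posting.

```python
# Minimal summarization sketch with a pre-trained Hugging Face model.
# The model name is an illustrative public checkpoint (assumption).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = (
    "Large Language Models are increasingly used for chatbots, question "
    "answering, and information extraction across enterprise workloads. "
    "Deploying them reliably requires evaluation, monitoring, and MLOps."
)

# max_length / min_length bound the generated summary length in tokens.
summary = summarizer(document, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```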

Posted 3 days ago

Apply

8.0 - 12.0 years

0 Lacs

Punjab

On-site

As a Senior AI/ML Engineer/Lead at our company in Mohali, you will play a crucial role in leading the design, architecture, and delivery of complex AI/ML solutions. With over 8 years of experience, you will develop and implement machine learning models across domains such as NLP, computer vision, recommendation systems, classification, and regression. Your expertise will be used to integrate Large Language Models (LLMs) such as GPT, BERT, LLaMA, and other state-of-the-art transformer architectures into our enterprise-grade applications to drive innovation.

Your key responsibilities will include selecting and implementing ML frameworks, tools, and cloud technologies that align with our business goals. You will lead AI/ML experimentation, PoCs, benchmarking, and model optimization initiatives while collaborating with data engineering, software development, and product teams to integrate ML capabilities seamlessly into production systems. Additionally, you will establish and enforce robust MLOps pipelines covering CI/CD for ML, model versioning, reproducibility, and monitoring to ensure reliability at scale.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field. Hands-on experience in the AI/ML domain and a proven track record of delivering production-grade ML systems will be highly valued. Proficiency in machine learning algorithms, deep learning architectures, and advanced neural network design is essential. You should also possess expertise in LLMs, transformer-based models, prompt engineering, and embeddings, with strong programming skills in Python and familiarity with cloud platforms for scalable ML workloads.

Experience with MLOps tools and practices, such as MLflow, Kubeflow, SageMaker, or equivalent, will be beneficial. Exposure to vector databases, real-time AI systems, edge AI, and streaming data environments, as well as active contributions to open-source projects, research publications, or thought leadership in AI/ML, will be advantageous. A certification in AI/ML would be a significant plus.

If you are a self-driven individual with excellent leadership, analytical thinking, and communication skills, and you are passionate about staying current with AI advancements and championing their adoption in business use cases, we encourage you to apply for this exciting opportunity.

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

We are seeking a driven individual with strong financial knowledge and an analytical mindset. As a motivated team player, you will maintain efficiency and accuracy while multitasking. Experience in financial services and a proven understanding of its products are crucial for this role, and you should be a strong written and verbal communicator to interact effectively with CSU/Field RPs.

In this role, you will work with Surveillance internal teams and business partners to define and document business requirements, and engage with business counterparts to ensure solutions align with business requirements and readiness levels. You will translate business requirements into actionable solutions and deliver on complex ad-hoc business analysis requests. Furthermore, you will coordinate and prioritize business needs in a matrix management environment, documenting and communicating results and recommendations to both external and internal teams.

The ideal candidate should possess 4-6 years of experience in the analytics industry with a strong background in financial services, along with excellent quantitative, analytical, programming, and problem-solving skills. Proficiency in MS Excel, PowerPoint, and Word is essential. A highly motivated self-starter with exceptional communication skills is desired, along with the ability to work effectively in a team environment on multiple projects. Candidates should be willing to learn tools like Python, SQL, PowerApps, and Power BI. Series 7 or SIE certification is preferred. Experience with AWS infrastructure and knowledge of tools like SageMaker and Athena are advantageous.

Ameriprise India LLP has been a trusted provider of client-based financial solutions for 125 years. A U.S.-based financial planning company headquartered in Minneapolis with a global presence, our focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Join our inclusive and collaborative culture that values your contributions and offers opportunities for growth and development. If you are talented, driven, and seeking to work for an ethical company that cares, take the next step and build your career at Ameriprise India LLP.

This is a full-time position with working hours from 2:00 pm to 10:30 pm. The role is part of the AWMPO AWMP&S President's Office within the Legal Affairs job family group.

Posted 3 days ago

Apply

7.0 - 11.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Skill required: Delivery - Advanced Analytics
Designation: I&F Decision Sci Practitioner Specialist
Qualifications: Master of Engineering/Masters in Business Economics
Years of Experience: 7 to 11 years

About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities. Visit us at www.accenture.com

What would you do?
You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility and high-impact team, innovating and transforming the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.

What are we looking for?
- Extensive experience in leading Data Science and Advanced Analytics delivery teams
- Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker is preferred, as is Azure/GCP experience
- Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc.
- Solid knowledge of more than two of the following: Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc.)
- Experience working with pricing models is a plus
- Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain
- Extensive experience in client engagement and business development
- Ability to work in a global, collaborative team environment
- Quick learner who delivers results independently

Qualifications: Masters/Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics or related disciplines.

Roles and Responsibilities:
- Build data science models to uncover deeper insights, predict future outcomes, and optimize business processes for clients.
- Utilize advanced statistical and machine learning techniques to develop models that assist in decision-making and strategic planning.
- Refine and improve data science models based on feedback, new data, and evolving business needs.
- Data Scientists in Operations follow multiple approaches to project execution, from adapting existing assets to Operations use cases, to exploring third-party and open-source solutions for speed of execution and specific use cases, to engaging in fundamental research to develop novel solutions.
- Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
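
As a point of reference for the ensemble-modelling skills this posting lists, here is a minimal random forest sketch in scikit-learn; the synthetic dataset and hyperparameters are placeholders, not part of the job description.

```python
# Random forest classification on a synthetic dataset (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# An ensemble of 200 decision trees, each trained on a bootstrap sample.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```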

Posted 4 days ago

Apply

6.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description: Senior MLOps Engineer

Position: Senior MLOps Engineer
Location: Gurugram
Relevant Experience Required: 6+ years
Employment Type: Full-time

About The Role
We are seeking a Senior MLOps Engineer with deep expertise in Machine Learning Operations, Data Engineering, and Cloud-Native Deployments. This role requires building and maintaining scalable ML pipelines, ensuring robust data integration and orchestration, and enabling real-time and batch AI systems in production. The ideal candidate will be skilled in state-of-the-art MLOps tools, data clustering, big data frameworks, and DevOps best practices, ensuring high reliability, performance, and security for enterprise AI workloads.

Key Responsibilities

MLOps & Machine Learning Deployment
- Design, implement, and maintain end-to-end ML pipelines from experimentation to production.
- Automate model training, evaluation, versioning, deployment, and monitoring using MLOps frameworks.
- Implement CI/CD pipelines for ML models (GitHub Actions, GitLab CI, Jenkins, ArgoCD).
- Monitor ML systems in production for drift detection, bias, performance degradation, and anomaly detection.
- Integrate feature stores (Feast, Tecton, Vertex AI Feature Store) for standardized model inputs.

Data Engineering & Integration
- Design and implement data ingestion pipelines for structured, semi-structured, and unstructured data.
- Handle batch and streaming pipelines with Apache Kafka, Apache Spark, Apache Flink, Airflow, or Dagster.
- Build ETL/ELT pipelines for data preprocessing, cleaning, and transformation.
- Implement data clustering, partitioning, and sharding strategies for high availability and scalability.
- Work with data warehouses (Snowflake, BigQuery, Redshift) and data lakes (Delta Lake, Lakehouse architectures).
- Ensure data lineage, governance, and compliance with modern tools (DataHub, Amundsen, Great Expectations).

Cloud & Infrastructure
- Deploy ML workloads on AWS, Azure, or GCP using Kubernetes (K8s) and serverless computing (AWS Lambda, GCP Cloud Run).
- Manage containerized ML environments with Docker, Helm, Kubeflow, MLflow, Metaflow.
- Optimize for cost, latency, and scalability across distributed environments.
- Implement infrastructure as code (IaC) with Terraform or Pulumi.

Real-Time ML & Advanced Capabilities
- Build real-time inference pipelines with low latency using gRPC, Triton Inference Server, or Ray Serve.
- Work on vector database integrations (Pinecone, Milvus, Weaviate, Chroma) for AI-powered semantic search.
- Enable retrieval-augmented generation (RAG) pipelines for LLMs.
- Optimize ML serving with GPU/TPU acceleration and ONNX/TensorRT model optimization.

Security, Monitoring & Observability
- Implement robust access control, encryption, and compliance with SOC2/GDPR/ISO27001.
- Monitor system health with Prometheus, Grafana, ELK/EFK, and OpenTelemetry.
- Ensure zero-downtime deployments with blue-green/canary release strategies.
- Manage audit trails and explainability for ML models.

Preferred Skills & Qualifications

Core Technical Skills
- Programming: Python (Pandas, PySpark, FastAPI), SQL, Bash; familiarity with Go or Scala a plus.
- MLOps Frameworks: MLflow, Kubeflow, Metaflow, TFX, BentoML, DVC.
- Data Engineering Tools: Apache Spark, Flink, Kafka, Airflow, Dagster, dbt.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Vector Databases: Pinecone, Weaviate, Milvus, Chroma.
- Visualization: Plotly Dash, Superset, Grafana.

Tech Stack
- Orchestration: Kubernetes, Helm, Argo Workflows, Prefect.
- Infrastructure as Code: Terraform, Pulumi, Ansible.
- Cloud Platforms: AWS (SageMaker, S3, EKS), GCP (Vertex AI, BigQuery, GKE), Azure (ML Studio, AKS).
- Model Optimization: ONNX, TensorRT, Hugging Face Optimum.
- Streaming & Real-Time ML: Kafka, Flink, Ray, Redis Streams.
- Monitoring & Logging: Prometheus, Grafana, ELK, OpenTelemetry.
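
To ground the "model training, evaluation, versioning" responsibility above, here is a minimal experiment-tracking sketch with MLflow, one of the frameworks the posting names. The experiment name, model, and metric are illustrative assumptions.

```python
# Track a training run with MLflow: parameters, a metric, and the model
# artifact. Experiment name and model choice are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-ridge")  # hypothetical experiment name
with mlflow.start_run():
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```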

Posted 4 days ago

Apply

5.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Python Developer

We are looking for a talented Senior Software Developer with strong Python skills to join our AI Solutions team. We focus on creating advanced AI systems in the Electrification field. In this role, you will work with AI/ML engineers, data scientists, and cloud architects to develop robust, scalable software solutions using the latest AI technologies. This is a great chance to work where AI innovation meets high-performance backend systems and cloud-native development.

Responsibilities
- Write high-quality, testable, and maintainable Python code using object-oriented programming (OOP), SOLID principles, and design patterns.
- Develop RESTful APIs and backend services for AI/ML model serving using FastAPI.
- Collaborate with AI/ML engineers to integrate and deploy Machine Learning, Deep Learning, and Generative AI models into production environments.
- Contribute to software architecture and design discussions to ensure scalable and efficient solutions.
- Implement CI/CD pipelines and adhere to DevOps best practices for reliable and repeatable deployments.
- Design for observability, incorporating structured logging, performance monitoring, and alerting mechanisms.
- Optimize code and system performance, ensuring reliability and robustness at scale.
- Participate in code reviews, promote clean code practices, and mentor junior developers when needed.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- 5+ years of hands-on experience in software development, with a focus on Python.
- Deep understanding of OOP concepts, software architecture, and design patterns.
- Experience with backend web frameworks, preferably FastAPI.
- Familiarity with integrating ML/DL models into software solutions.
- Practical experience with CI/CD, containerization (Docker), and version control systems (Git).
- Exposure to MLOps practices and tools for model deployment and monitoring.
- Strong collaboration and communication skills in cross-functional engineering teams.
- Familiarity with cloud platforms like AWS (e.g., SageMaker, Bedrock) or Azure (e.g., ML Studio, OpenAI Service).

Preferred Qualifications
- Experience in Rust is a strong plus.
- Experience working on high-performance, scalable backend systems.
- Exposure to logging/monitoring stacks like Prometheus, Grafana, ELK, or OpenTelemetry.
- Understanding of data engineering concepts, ETL pipelines, and processing large datasets.
- Background or interest in the Power and Energy domain is a plus.

Mandatory Skills
- Python
- AI/ML model serving using FastAPI
- ML/DL models
- MLOps practices
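
For illustration of the "model serving using FastAPI" requirement above, here is a minimal sketch; the model is a stand-in trained at startup, where a real service would load a versioned artifact instead.

```python
# Serve a scikit-learn model behind a FastAPI POST endpoint (sketch).
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Stand-in model fitted at import time; a real service would load an artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    values: list[float]  # the four iris measurements, for this toy model

@app.post("/predict")
def predict(features: Features) -> dict:
    pred = model.predict([features.values])[0]
    return {"class_index": int(pred)}

# Run with: uvicorn app:app --reload   (assuming this file is app.py)
```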

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution and will collaborate with business partners and other IS service leads to deliver IS capability and a roadmap in support of business strategy and goals. Real-world data analytics, visualization, and advanced technology play a vital role in supporting Amgen’s industry-leading, innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member assisting in the design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimates for technical implementation
- Collaborate and communicate effectively with product teams

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT or a related field

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Strong understanding of data governance frameworks, tools, and best practices; knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Preferred Qualifications (Good-to-Have Skills):
- Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, OMOP

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
- Certified Data Scientist (preferred on Databricks or cloud environments)
- Machine Learning Certification (preferred on Databricks or cloud environments)
- SAFe for Teams certification (preferred)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
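
As a reference point for the PySpark/ETL skills this posting emphasizes, here is a minimal sketch of a curation step: read raw records, clean and transform them, and write a curated table. Paths and column names are hypothetical.

```python
# Minimal PySpark ETL step (illustrative paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-visits").getOrCreate()

# Read raw CSV records from a landing zone (hypothetical bucket/path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/visits/")

# Deduplicate, parse dates, and drop rows that fail parsing.
curated = (
    raw.dropDuplicates(["visit_id"])
       .withColumn("visit_date", F.to_date("visit_date", "yyyy-MM-dd"))
       .filter(F.col("visit_date").isNotNull())
)

# Write the curated table in a columnar format for downstream analytics.
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/visits/")
```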

Posted 4 days ago

Apply

5.0 years

0 Lacs

Andaman and Nicobar Islands, India

On-site

Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers who take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us!

Job Description

AI Architect
Pune/Noida/Bangalore, India

As an AI Architect, you will lead the design, development, and deployment of enterprise-grade artificial intelligence solutions that drive innovation and deliver measurable business value. You will report to the Director of Enterprise Architecture and work in a hybrid capacity from our Pune, India office. In this role, you will collaborate with cross-functional teams—including data scientists, engineers, and business leaders—to ensure AI systems are scalable, secure, and ethically aligned with organizational goals.

Your Responsibilities
- Architect and implement AI solutions: Design and oversee end-to-end AI architectures that integrate seamlessly with existing IT and data infrastructure, ensuring scalability, performance, and maintainability.
- Lead cross-functional delivery teams: Guide data scientists, engineers, and business stakeholders through the full AI solution lifecycle, from ideation and prototyping to production deployment and monitoring.
- Evaluate and recommend technologies: Assess AI/ML platforms, frameworks, and tools (e.g., TensorFlow, PyTorch, cloud AI services) to ensure alignment with business needs and technical feasibility.
- Establish best practices: Define standards for model development, testing, deployment, and lifecycle management, ensuring compliance with ethical AI principles and data privacy regulations.
- Mentor and evangelize: Provide technical leadership and mentorship to junior architects and data professionals, while promoting AI adoption and architectural vision across the organization.

The Essentials - You Will Have
- A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5+ years of experience designing and implementing AI architectures in production environments.
- Proficiency with AI/ML frameworks and tools such as TensorFlow, PyTorch, Keras, Scikit-learn, and cloud-based AI services (e.g., AWS SageMaker, Azure ML).
- Strong programming skills in Python, R, Java, or similar languages.
- Deep understanding of data structures, algorithms, and software engineering best practices.
- Demonstrated experience leading complex AI projects with cross-functional teams.
- Proven track record of delivering AI solutions that drive business outcomes.
- Experience with ethical AI practices and compliance with data protection regulations.

The Preferred - You Might Also Have
- Experience deploying AI solutions on cloud platforms (AWS, Azure, GCP) in hybrid or multi-cloud environments.
- Familiarity with MLOps tools and practices for continuous integration, deployment, and monitoring of AI models.
- Strong problem-solving skills and the ability to translate business requirements into scalable technical solutions.
- Experience mentoring and developing talent within AI or data science teams.

What We Offer
Our benefits package includes:
- Comprehensive mindfulness programs with a premium membership to Calm
- Volunteer paid time off available after 6 months of employment for eligible employees
- Company volunteer and donation matching program - your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation
- Employee Assistance Program
- Personalized wellbeing programs through our OnTrack program
- On-demand digital course library for professional development
- and other local benefits!

At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.

Under Rockwell Automation's hybrid policy, employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site

Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers who take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us!

Job Description

AI Architect
Pune/Noida/Bangalore, India

As an AI Architect, you will lead the design, development, and deployment of enterprise-grade artificial intelligence solutions that drive innovation and deliver measurable business value. You will report to the Director of Enterprise Architecture and work in a hybrid capacity from our Pune, India office. In this role, you will collaborate with cross-functional teams—including data scientists, engineers, and business leaders—to ensure AI systems are scalable, secure, and ethically aligned with organizational goals.

Your Responsibilities
- Architect and implement AI solutions: Design and oversee end-to-end AI architectures that integrate seamlessly with existing IT and data infrastructure, ensuring scalability, performance, and maintainability.
- Lead cross-functional delivery teams: Guide data scientists, engineers, and business stakeholders through the full AI solution lifecycle, from ideation and prototyping to production deployment and monitoring.
- Evaluate and recommend technologies: Assess AI/ML platforms, frameworks, and tools (e.g., TensorFlow, PyTorch, cloud AI services) to ensure alignment with business needs and technical feasibility.
- Establish best practices: Define standards for model development, testing, deployment, and lifecycle management, ensuring compliance with ethical AI principles and data privacy regulations.
- Mentor and evangelize: Provide technical leadership and mentorship to junior architects and data professionals, while promoting AI adoption and architectural vision across the organization.

The Essentials - You Will Have
- A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5+ years of experience designing and implementing AI architectures in production environments.
- Proficiency with AI/ML frameworks and tools such as TensorFlow, PyTorch, Keras, Scikit-learn, and cloud-based AI services (e.g., AWS SageMaker, Azure ML).
- Strong programming skills in Python, R, Java, or similar languages.
- Deep understanding of data structures, algorithms, and software engineering best practices.
- Demonstrated experience leading complex AI projects with cross-functional teams.
- Proven track record of delivering AI solutions that drive business outcomes.
- Experience with ethical AI practices and compliance with data protection regulations.

The Preferred - You Might Also Have
- Experience deploying AI solutions on cloud platforms (AWS, Azure, GCP) in hybrid or multi-cloud environments.
- Familiarity with MLOps tools and practices for continuous integration, deployment, and monitoring of AI models.
- Strong problem-solving skills and the ability to translate business requirements into scalable technical solutions.
- Experience mentoring and developing talent within AI or data science teams.

What We Offer
Our benefits package includes:
- Comprehensive mindfulness programs with a premium membership to Calm
- Volunteer paid time off available after 6 months of employment for eligible employees
- Company volunteer and donation matching program - your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation
- Employee Assistance Program
- Personalized wellbeing programs through our OnTrack program
- On-demand digital course library for professional development
- and other local benefits!

At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.

Under Rockwell Automation's hybrid policy, employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers who take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us!

Job Description

AI Architect
Pune/Noida/Bangalore, India

As an AI Architect, you will lead the design, development, and deployment of enterprise-grade artificial intelligence solutions that drive innovation and deliver measurable business value. You will report to the Director of Enterprise Architecture and work in a hybrid capacity from our Pune, India office. In this role, you will collaborate with cross-functional teams—including data scientists, engineers, and business leaders—to ensure AI systems are scalable, secure, and ethically aligned with organizational goals.

Your Responsibilities
- Architect and implement AI solutions: Design and oversee end-to-end AI architectures that integrate seamlessly with existing IT and data infrastructure, ensuring scalability, performance, and maintainability.
- Lead cross-functional delivery teams: Guide data scientists, engineers, and business stakeholders through the full AI solution lifecycle, from ideation and prototyping to production deployment and monitoring.
- Evaluate and recommend technologies: Assess AI/ML platforms, frameworks, and tools (e.g., TensorFlow, PyTorch, cloud AI services) to ensure alignment with business needs and technical feasibility.
- Establish best practices: Define standards for model development, testing, deployment, and lifecycle management, ensuring compliance with ethical AI principles and data privacy regulations.
- Mentor and evangelize: Provide technical leadership and mentorship to junior architects and data professionals, while promoting AI adoption and architectural vision across the organization.

The Essentials - You Will Have
- A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5+ years of experience designing and implementing AI architectures in production environments.
- Proficiency with AI/ML frameworks and tools such as TensorFlow, PyTorch, Keras, Scikit-learn, and cloud-based AI services (e.g., AWS SageMaker, Azure ML).
- Strong programming skills in Python, R, Java, or similar languages.
- Deep understanding of data structures, algorithms, and software engineering best practices.
- Demonstrated experience leading complex AI projects with cross-functional teams.
- Proven track record of delivering AI solutions that drive business outcomes.
- Experience with ethical AI practices and compliance with data protection regulations.

The Preferred - You Might Also Have
- Experience deploying AI solutions on cloud platforms (AWS, Azure, GCP) in hybrid or multi-cloud environments.
- Familiarity with MLOps tools and practices for continuous integration, deployment, and monitoring of AI models.
- Strong problem-solving skills and the ability to translate business requirements into scalable technical solutions.
- Experience mentoring and developing talent within AI or data science teams.

What We Offer
Our benefits package includes:
- Comprehensive mindfulness programs with a premium membership to Calm
- Volunteer paid time off available after 6 months of employment for eligible employees
- Company volunteer and donation matching program - your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation
- Employee Assistance Program
- Personalized wellbeing programs through our OnTrack program
- On-demand digital course library for professional development
- and other local benefits!

At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.

Under Rockwell Automation's hybrid policy, employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.

Posted 4 days ago

Apply

5.0 years

7 - 20 Lacs

Noida, Uttar Pradesh, India

Remote

Location: Hybrid/Remote
Type: Contract / Full-Time
Experience: 5+ Years
Qualification: Bachelor's or Master's in Computer Science or a related technical field

Responsibilities
- Architect and implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python-based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per-user fine-tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real-time chat.
- Manage the vector index lifecycle and similarity metrics (cosine/dot-product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), with containerization (Docker) and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.

Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI's APIs (embeddings, chat completions).
- Hands-on experience with parameter-efficient fine-tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).

Nice-to-Have
- Experience with Llama-family models or other open-source LLMs.
- Familiarity with the MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Skills: Artificial Intelligence (AI), Generative AI, Python, NodeJS (Node.js), Vector database, Amazon Web Services (AWS), Docker, Retrieval Augmented Generation (RAG) and CI/CD
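
To make the retrieval step of the RAG pipeline described above concrete, here is a minimal sketch: embed a query with OpenAI's embeddings API and rank stored chunks by cosine similarity. A production system would delegate this to a vector store such as MongoDB Atlas Vector Search; the chunk texts and model choice are illustrative.

```python
# RAG retrieval sketch: embed texts, then rank chunks by cosine similarity.
# Assumes OPENAI_API_KEY is set in the environment; chunks are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

chunks = ["User prefers morning workouts.", "User logged 8 hours of sleep."]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

chunk_vecs = embed(chunks)
query_vec = embed(["When does the user like to exercise?"])[0]

# Cosine similarity is the dot product of L2-normalized vectors.
def norm(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = norm(chunk_vecs) @ norm(query_vec)
print(chunks[int(np.argmax(scores))])  # best-matching context for the LLM
```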

Posted 4 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Overview

Working at Atlassian
Atlassians can choose where they work - whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities

About the JSM Team
Jira Service Management (JSM) is one of the marquee products of Atlassian. Through this solution, we help technical and non-technical teams centralize and streamline service requests, respond to incidents, collect and maintain knowledge, manage assets and configuration items, and more. Specifically, this team within JSM uses assistive AI to automate IT operational tasks, troubleshoot problems, and reduce mental overload for on-call engineers and others. By weaving these capabilities into our product, we revolutionise AIOps by moving from a traditional reactive troubleshooting-based system to a proactive problem-solving approach.

What You Will Do
As a Principal Engineer on the JSM team, you will get the opportunity to work on cutting-edge AI and ML algorithms that help modernize IT Operations by reducing MTTR (mean time to resolve) and MTTI (mean time to identify). You will use your software development expertise to solve difficult problems, tackling complex infrastructure and architecture challenges.

In this role, you'll get the chance to:
- Shape the future of AIOps: Be at the forefront of innovation, shaping the next generation of AI-powered operations tools that predict, prevent, and resolve IT issues before they impact our customers.
- Master Generative AI: Delve into the world of generative models, exploring their potential to detect anomalies, automate responses, and personalize remediation plans.
- Become a machine learning maestro: Hone your skills in both supervised and unsupervised learning, building algorithms that analyze mountains of data to uncover hidden patterns and optimize system performance.
- Collaborate with diverse minds: Partner with a brilliant team of engineers, data scientists, and researchers, cross-pollinating ideas and learning from each other's expertise.
- Make a tangible impact: Your work will directly influence the reliability and performance of Atlassian's critical software, driving customer satisfaction and propelling our business forward.
- Routinely tackle complex architectural challenges, sparring with principal engineers to build ML pipelines and models that scale for thousands of customers.
- Lead code reviews and documentation, and take on complex bug fixes, especially on high-risk problems.

Our tech stack is primarily Python/Java/Kotlin built on AWS.

On your first day, we'll expect you to have:
- Fluency in Python
- A solid understanding of machine learning concepts and algorithms, including supervised and unsupervised learning, deep learning, and NLP
- Familiarity with popular ML libraries like scikit-learn, Keras/TensorFlow/PyTorch, NumPy, pandas
- A good understanding of the machine learning project lifecycle
- Experience architecting and implementing high-performance RESTful microservices (API development for ML models)
- Familiarity with MLOps and experience scaling and deploying machine learning models

It would be great, but not required, if you have:
- Experience with cloud-based machine learning platforms (e.g., AWS SageMaker, Azure ML Service, Databricks)
- Experience with MLOps tools (MLflow, Tecton, Pinecone, feature stores)
- Experience with AIOps or related fields like IT automation or incident management
- Experience building and operating large-scale distributed systems using Amazon Web Services (S3, Kinesis, CloudFormation, EKS, AWS Security and Networking)
- Experience using OpenAI LLMs

Compensation
At Atlassian, we strive to design equitable, explainable, and competitive compensation programs. To support this goal, the baseline of our range is higher than that of the typical market range, but in turn we expect to hire most candidates near this baseline. Base pay within the range is ultimately determined by a candidate's skills, expertise, or experience. In the United States, we have three geographic pay zones. For this role, our current base pay ranges for new hires in each zone are:

Zone A: $199,400 - $265,800
Zone B: $179,400 - $239,200
Zone C: $165,500 - $220,600

This role may also be eligible for benefits, bonuses, commissions, and equity. Please visit go.atlassian.com/payzones for more information on which locations are included in each of our geographic pay zones. However, please confirm the zone for your specific location with your recruiter.

Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
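
As a reference for the unsupervised anomaly detection this posting mentions, here is a minimal sketch using scikit-learn's IsolationForest on synthetic latency data; the data, features, and contamination rate are illustrative, not Atlassian's actual pipeline.

```python
# Flag latency anomalies with an Isolation Forest (synthetic data).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
latencies = rng.normal(loc=120, scale=15, size=(500, 1))  # normal traffic (ms)
spikes = rng.normal(loc=450, scale=30, size=(5, 1))       # incident-like spikes
data = np.vstack([latencies, spikes])

detector = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = detector.predict(data)  # -1 marks suspected anomalies, 1 is normal
print("anomalies flagged:", int((flags == -1).sum()))
```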

Posted 4 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Entity: Customers & Products
Job Family Group: Project Management Group

Job Description:
As bp transitions to a coordinated energy company, we must adapt to a changing world and maintain driven performance. bp’s customers & products (C&P) business area is setting up a business and technology centre (BTC) in CITY, COUNTRY. This will support the delivery of an enhanced customer experience and drive innovation by building global capabilities at scale, using technology, and developing deep expertise. The BTC will be a core and connected part of our business, bringing together colleagues who report into their respective part of C&P, working together with other functions across bp. This is an exciting time to join bp and the customers & products BTC!

Job Title: Data Modeller SME Lead

About the role:
As the Data Modeller Senior SME for Castrol, you will collaborate with business partners across Digital Operational Excellence, Technology, and Castrol's PUs, HUBs, functions, and markets to model and sustain curated datasets within the Castrol data ecosystem. The role ensures agile, continuous improvement of curated datasets aligned with the Data Modelling Framework and supports analytics, data science, operational MI, and the broader Digital Business Strategy.

On top of the data lake we have now enabled the MLOps environment (PySpark Pro) and Gurobi, with direct connections to run the advanced analytics and data science queries and algorithms written in Python. This enables the data analyst and data science teams to incubate insights in an agile way. The Data Modeller will contribute to and help grow data science skills and capabilities within the role, the team, and the wider Castrol data analyst/science community; data science experience is a plus, but basic skills would suffice to start.

Experience & Education:
- Education: Degree in an analytical field (preferably IT or engineering) or 5+ years of relevant experience
- Experience: Proven track record in delivering data models and curated datasets for major transformation projects. Broad understanding of multiple data domains and their integration points. Strong problem-solving and collaborative skills with a strategic approach.

Skills & Competencies:
- Expertise in data modeling and data wrangling of highly complex, high-dimensional data (ER Studio, Gurobi, SageMaker Pro).
- Proficiency in translating analytical insights from high-dimensional data.
- Skilled in Power BI data modeling and proof-of-concept design for data and analytics dashboarding.
- Proficiency in data science tools such as Python, Amazon SageMaker, GAMS, AMPL, ILOG, AIMMS, or similar.
- Ability to work across multiple levels of detail, including analytics, MI, statistics, data, process design principles, operating model intent, and systems design.
- Strong influencing skills to use expertise and experience to shape value delivery.
- Demonstrated success in multi-functional deployments and performance optimization.

bp Behaviours for Successful Delivery:
- Respect: Build trust through clear relationships
- Excellence: Apply standard processes and strive for executional completion
- One Team: Collaborate to improve team efficiency

You will work with:
You will be part of a 20-member Global Data & Analytics team, operating peer-to-peer in a team of seasoned global experts on process, data, advanced analytics, and data science.

The Global Data & Analytics team reports into the Castrol Digital Enablement team, which manages the digital estate for Castrol and enhances scalability and process and data integration. This D&A team is the driving force behind the Data & Analytics strategy, managing the harmonized data lake and the business intelligence derived from it in support of the business strategy, and is a key pillar of value enablement through fast and accurate insights. As the Data Modeller SME Lead you will be exposed to a wide variety of collaborators in all layers of the Castrol leadership and our partners in GBS and Technology. Through data governance at the value centre you have great exposure to operations and the ability to influence and inspire change through value proposition engagements.

Travel Requirement: Negligible travel should be expected with this role.

Relocation Assistance: This role is eligible for relocation within country.

Remote Type: This position is not available for remote working.

Skills: Change control, Commissioning, start-up and handover, Conflict Management, Construction, Cost estimating and cost control (Inactive), Design development and delivery, Frameworks and methodologies, Governance arrangements, Performance management, Portfolio Management, Project and construction safety, Project execution planning, Project HSSE, Project Leadership, Project Team Management, Quality, Requirements Management, Reviews, Risk Management, Schedule and resources, Sourcing Management, Stakeholder Management, Strategy and business case, Supplier Relationship Management

Legal Disclaimer:
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp's recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We're on the lookout for an experienced MLOps Engineer to support our growing AI/ML initiatives, including GenAI platforms, agentic AI systems, and large-scale model deployments.

Experience: 7+ years
Location: Pune
Notice Period: short joiners preferred

Primary Skills
- Cloud (Google Cloud preferred, but any cloud will do)
- ML deployment using Jenkins/Harness
- Python
- Kubernetes
- Terraform

Key Responsibilities
- Build and manage CI/CD pipelines for ML model training, RAG systems, and LLM workflows
- Optimize GPU-powered Kubernetes environments for distributed compute
- Manage cloud-native infrastructure across AWS or GCP using Terraform
- Deploy vector databases, feature stores, and observability tools
- Ensure security, scalability, and high availability of AI workloads
- Collaborate cross-functionally with AI/ML engineers and data scientists
- Enable agentic AI workflows using tools like LangChain, LangGraph, CrewAI, etc.

What We're Looking For
- 4+ years in DevOps/MLOps/Infra Engineering, including 2+ years in AI/ML setups
- Hands-on with AWS SageMaker, GCP Vertex AI, or Azure ML
- Proficient in Python, Bash, and CI/CD tools (GitHub Actions, ArgoCD, Jenkins)
- Deep Kubernetes expertise and experience managing GPU infrastructure
- Strong grasp of monitoring, logging, and secure deployment practices

💡 Bonus Points For
🔸 Familiarity with MLflow, Kubeflow, or similar
🔸 Experience with RAG, prompt engineering, or model fine-tuning
🔸 Knowledge of model drift detection and rollback strategies (see the sketch after this posting)

Ready to take your MLOps career to the next level? Apply now or email amruta.bu@peoplefy.com for more details.
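
A minimal sketch of the drift detection mentioned in the bonus points: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against live traffic. The significance threshold is a common convention, not a value from the posting.

```python
# Detect distribution drift on one feature with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training data
live_feature = rng.normal(loc=0.3, scale=1.1, size=5000)   # shifted: drift

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # conventional cutoff (assumption)
    print(f"drift detected (KS={stat:.3f}); consider retraining or rollback")
```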

Posted 4 days ago

Apply

0 years

6 - 10 Lacs

Gurgaon

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing Design and implement performance and operational enhancements for scalable data systems Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions Collaborate with software engineers, data analysts, and business stakeholders across Agile teams Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs Partner with architecture teams to drive forward-thinking data platform solutions Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership Mentor junior engineers and collaborate on solution design with team members and product owners Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor’s degree or equivalent experience Hands-on experience with cloud data services (AWS, Azure, or GCP) Experience building and maintaining ETL/ELT pipelines in enterprise environments Experience integrating with RESTful APIs Experience with Agile methodologies (Scrum, Kanban) Knowledge of data governance, security, privacy, and vulnerability management Understanding of authorization protocols (OAuth) and API integration Solid proficiency in SQL, NoSQL, and data modeling Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark Advanced Python skills for data engineering and data science (beyond Jupyter notebooks) Familiarity with big data technologies such as Spark, Hadoop, and Databricks Ability to build modular, testable, and reusable data solutions Solid grasp of data engineering concepts including: Data Catalogs Data Warehouses Data Lakes (especially Iceberg) Data Dictionaries Preferred Qualifications: Experience with GitHub, Terraform, and GitHub Actions Experience with real-time data streaming (Kafka, Kinesis) Experience with feature engineering and machine learning pipelines (MLOps) Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery) Familiarity with AWS native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams) Glue (Data Catalog, ETL, Streaming) SageMaker, Athena, Redshift (including Spectrum) Demonstrated ability to mentor and guide junior engineers At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
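As an illustration of the ETL/ELT pipeline work this posting describes, the following is a minimal PySpark sketch, assuming a JSON landing zone and a partitioned Parquet target; the S3 paths, columns, and quality rule are placeholders.

```python
# Minimal PySpark ETL sketch: read raw JSON, apply simple transformations,
# and write partitioned Parquet. Paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/landing/claims/")  # hypothetical path

cleaned = (
    raw.dropDuplicates(["claim_id"])                       # basic de-duplication
       .withColumn("claim_date", F.to_date("claim_date"))  # normalize types
       .filter(F.col("amount") > 0)                        # simple quality rule
)

# Partition by date so downstream queries can prune efficiently.
cleaned.write.mode("overwrite").partitionBy("claim_date") \
       .parquet("s3://example-bucket/curated/claims/")
```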

Posted 4 days ago

Apply

40.0 years

6 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. What you will do Let’s do this. Let’s change the world. In this vital role you will serve as a Senior Associate IS Business Systems Analyst with strong data science and analytics expertise on the Digital Workplace Experience (DWX) Automation & Analytics product team. In this role, you will develop, maintain, and optimize machine learning models, forecasting tools, and operational dashboards that support strategic and day-to-day decisions for global digital workplace services. This role is ideal for candidates with hands-on experience building predictive models and working with large operational datasets to uncover insights and deliver automation solutions. You will work alongside product owners, engineers, and service leads to deliver measurable business value using data-driven tools and techniques. Roles and Responsibilities Design, develop, and maintain predictive models, decision support tools, and dashboards using Python, R, SQL, Power BI, or similar platforms. Partner with delivery teams to embed data science outputs into business operations, focusing on improving efficiency, reliability, and end-user experience in Digital Workplace services. Build and automate data pipelines for data ingestion, cleansing, transformation, and model training using structured and unstructured datasets. Monitor, maintain, and tune models to ensure accuracy, interpretability, and sustained business impact. Support efforts to operationalize ML models by working with data engineers and platform teams on integration and automation. Conduct data exploration, hypothesis testing, and statistical analysis to identify optimization opportunities across services like endpoint health, service desk operations, mobile technology, and collaboration platforms. Provide ad hoc and recurring data-driven recommendations to improve automation performance, service delivery, and capacity forecasting. Develop reusable components, templates, and frameworks that support analytics and automation scalability across DWX. Collaborate with other data scientists, analysts, and developers to implement best practices in model development and lifecycle management. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek has these qualifications. Basic Qualifications: Master's degree / Bachelor's degree and 5 to 9 years in Data Science, Computer Science, IT, or related field Must-Have Skills Experience working with large-scale datasets in enterprise environments and with data visualization tools such as Power BI, Tableau, or equivalent Strong experience developing models in Python or R for regression, classification, clustering, forecasting, or anomaly detection Proficiency in SQL and working with relational and non-relational data sources Nice-to-Have Skills Familiarity with ML pipelines, version control (e.g., Git), and model lifecycle tools (MLflow, SageMaker, etc.) 
Understanding of statistics, data quality, and evaluation metrics for applied machine learning Ability to translate operational questions into structured analysis and model design Experience with cloud platforms (Azure, AWS, GCP) and tools like Databricks, Snowflake, or BigQuery Familiarity with automation tools or scripting (e.g., PowerShell, Bash, Airflow) Working knowledge of Agile/SAFe environments Exposure to ITIL practices or ITSM platforms such as ServiceNow Soft Skills Analytical mindset with attention to detail and data integrity Strong problem-solving and critical thinking skills Ability to work independently and drive tasks to completion Strong collaboration and teamwork skills Adaptability in a fast-paced, evolving environment Clear and concise documentation habits EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
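For a sense of the model-development loop described above, here is a small, hedged scikit-learn sketch: train a classifier and report standard evaluation metrics. The synthetic dataset stands in for real operational data such as service-desk records.

```python
# Sketch of the develop-and-evaluate loop: train a classifier and report
# metrics. Synthetic data replaces real operational features here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Precision/recall/F1 per class, the kind of evaluation the role calls for.
print(classification_report(y_test, model.predict(X_test)))
```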

Posted 4 days ago

Apply

8.0 - 13.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Sr Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, collaborate with business partners and other IS service leads to deliver IS capability and roadmap in support of business strategy and goals. Real-world data analytics, visualization and advanced technology play a vital role in supporting Amgen’s industry-leading innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Collaborate and communicate effectively with product teams What we expect of you We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications: Master's degree / Bachelor's degree and 8 to 13 years of experience in Computer Science, IT or related field Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing Hands-on experience with various Python/R packages for EDA, feature engineering and machine learning model training Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools Excellent problem-solving skills and the ability to work with large, complex datasets Strong understanding of data governance frameworks, tools, and standard processes. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA) Preferred Qualifications: Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments) Certified Data Scientist (preferred on Databricks or Cloud environments) Machine Learning Certification (preferred on Databricks or Cloud environments) SAFe for Teams certification (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
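One concrete example of the "ensure data quality" responsibility above is a null-rate gate in a pipeline. The sketch below is a minimal PySpark version, assuming a Databricks-style catalog table; the table name and the 5% threshold are illustrative assumptions.

```python
# Hedged sketch of a simple data-quality gate in PySpark: compute null rates
# per column and fail the pipeline if any exceed a threshold.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()
df = spark.read.table("curated.patient_events")  # hypothetical table

total = df.count()
null_rates = {
    c: df.filter(F.col(c).isNull()).count() / total
    for c in df.columns
}

# Fail fast so bad data never reaches downstream consumers.
failures = {c: r for c, r in null_rates.items() if r > 0.05}
if failures:
    raise ValueError(f"Data-quality gate failed, null rate > 5%: {failures}")
```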

Posted 4 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. Data Engineer What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for "Run" and "Build" project portfolio execution, collaborate with business partners and other IS service leads to deliver IS capability and roadmap in support of business strategy and goals. Real-world data analytics, visualization and advanced technology play a vital role in supporting Amgen’s industry-leading innovative Real World Evidence approaches. The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation Collaborate and communicate effectively with product teams What we expect of you We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications: Master's degree / Bachelor's degree and 5 to 9 years of experience in Computer Science, IT or related field Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, performance tuning on big data processing Hands-on experience with various Python/R packages for EDA, feature engineering and machine learning model training Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools Excellent problem-solving skills and the ability to work with large, complex datasets Strong understanding of data governance frameworks, tools, and best practices. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA) Preferred Qualifications: Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing, machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python/R, Databricks, SageMaker, OMOP. Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments) Certified Data Scientist (preferred on Databricks or Cloud environments) Machine Learning Certification (preferred on Databricks or Cloud environments) SAFe for Teams certification (preferred) Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
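As a small illustration of the EDA and feature-engineering skills listed above, here is a hedged pandas sketch; the CSV file and column names are invented for the example.

```python
# Minimal EDA/feature-engineering sketch in pandas. The file and columns
# (events.csv, event_date, site_id, event_id) are placeholders.
import pandas as pd

df = pd.read_csv("events.csv", parse_dates=["event_date"])  # hypothetical file

# Quick profiling: shape and the most incomplete columns.
print(df.shape)
print(df.isna().mean().sort_values(ascending=False).head())

# Simple engineered features: day of week, plus per-site event counts.
df["dow"] = df["event_date"].dt.dayofweek
df["site_event_count"] = df.groupby("site_id")["event_id"].transform("count")
```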

Posted 4 days ago

Apply

5.0 years

20 - 30 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients Salary range: Rs 2000000 - Rs 3000000 (i.e., INR 20-30 LPA) Min Experience: 5 years Location: Kochi, Pune, Chennai JobType: full-time Requirements We are seeking an experienced and motivated Cloud DevOps Engineer to join our high-performing technology team. In this role, you will lead the design, development, and deployment of scalable, secure, and reliable cloud-native solutions on AWS. You will work closely with development, QA, and operations teams to manage containerized environments, implement CI/CD automation, and ensure high availability of cloud-based applications and infrastructure. This role also involves mentoring junior team members and fostering a DevOps culture across projects. Key Responsibilities: Cloud Architecture & Development Design and develop robust AWS-based serverless and containerized applications using services like Lambda, ECS, and EKS Develop and manage infrastructure as code (IaC) using Terraform or AWS CDK Create secure and cost-optimized solutions that adhere to cloud best practices. CI/CD Automation & Deployment Set up, maintain, and enhance CI/CD pipelines using GitLab CI, GitHub Actions, Jenkins, or ArgoCD Automate release workflows and infrastructure provisioning to accelerate development cycles. Monitoring & Observability Implement observability tools such as Datadog, New Relic, or Dynatrace to monitor application and infrastructure health Analyze system performance metrics and troubleshoot issues proactively. Collaboration & Mentorship Collaborate with cross-functional teams in an Agile environment to deliver production-grade solutions Participate in code reviews, architecture reviews, and mentor junior engineers. Required Skills & Qualifications: Technical Expertise Strong experience with modern programming languages such as TypeScript, Python, or Go Deep knowledge of AWS Serverless technologies and managed database services like Amazon RDS and DynamoDB Solid hands-on experience with Kubernetes, AWS EKS/ECS, or OpenShift. Tool Proficiency Expertise in developer tools such as Git, Jira, and Confluence Proficient in Terraform or AWS CDK for IaC Familiarity with Secrets Management tools like HashiCorp Vault. Soft Skills Strong problem-solving, analytical thinking, and communication skills Ability to take ownership, drive initiatives, and mentor team members. Preferred Qualifications: AWS certifications (e.g., Solutions Architect, DevOps Engineer) or equivalent Experience with multi-cloud environments (Azure, GCP) Exposure to AI/ML services such as Amazon SageMaker or Amazon Bedrock Understanding of SSO protocols like OAuth2, OpenID Connect, or SAML Familiarity with Kafka, Amazon MSK, and Contact Center solutions like Amazon Connect Knowledge of FinOps practices for cloud cost optimization
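To ground the serverless requirement, below is a minimal sketch of an AWS Lambda handler in Python that writes to DynamoDB via boto3; the table name and payload shape are assumptions rather than details from the role.

```python
# Hedged sketch of a serverless building block: a Lambda handler that
# records an order in DynamoDB. Table name and payload are placeholders,
# and the event is assumed to come from API Gateway proxy integration.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    body = json.loads(event.get("body", "{}"))
    table.put_item(Item={"order_id": body["order_id"], "status": "received"})
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

In practice the table and the function itself would be provisioned through Terraform or AWS CDK, matching the IaC responsibilities above.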

Posted 4 days ago

Apply

4.0 years

0 Lacs

Madhya Pradesh

On-site

About Alphanext Alphanext is a global talent solutions company with offices in London, Pune, and Indore. We connect top-tier technical talent with forward-thinking organizations to drive innovation and transformation through technology. Position Summary Alphanext is hiring an AI Data Scientist with deep expertise in machine learning, deep learning, and statistical modeling. This role requires end-to-end experience in designing, developing, and deploying AI solutions in production environments. Ideal candidates will bring a strong analytical foundation, hands-on technical capability in Python/SQL, and a passion for solving complex business problems using AI. Key Responsibilities Research, design, and implement advanced machine learning and deep learning models for predictive and generative AI use cases. Apply statistical methods to ensure model interpretability, robustness, and reproducibility. Conduct large-scale data analysis to uncover trends, patterns, and opportunities for AI-based automation. Collaborate with ML engineers to validate, train, optimize, and deploy AI models into production environments. Continuously improve model performance using techniques such as hyperparameter tuning, feature engineering, and ensemble methods. Keep up to date with the latest advancements in AI, deep learning, and statistical modeling to propose innovative solutions. Translate complex analytical outputs into clear insights for both technical and non-technical stakeholders. Required Skills 4+ years of experience in machine learning and deep learning with a focus on model development, optimization, and deployment. Strong programming skills in Python and SQL. Proven experience in developing and applying mathematical/statistical models for business applications. Proficiency in data visualization tools such as Power BI. Hands-on experience in at least one of the following domains: finance, trading, biomedical modeling, image-based AI, or recommender systems. Strong understanding of statistical theory and modern AI algorithms. Excellent communication skills and the ability to explain complex models in a simple, impactful manner. Nice to Have / Preferred Skills Experience deploying AI solutions in production environments using MLOps practices. Familiarity with cloud-based AI/ML services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI). Exposure to compliance, data privacy, or ethical AI practices in enterprise settings. Background in applied mathematics or statistics with a focus on AI model interpretability. Qualifications Master's or PhD in Statistics, Mathematics, Computer Science, or a related technical field. Demonstrated track record of AI model deployment in real-world business applications.
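As one concrete instance of the hyperparameter-tuning responsibility above, here is a short, hedged scikit-learn sketch using cross-validated grid search; the parameter grid and synthetic data are illustrative only.

```python
# Sketch of hyperparameter tuning: cross-validated grid search over a
# gradient-boosting regressor. Synthetic data stands in for real features.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=1000, n_features=15, random_state=0)

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={
        "n_estimators": [100, 300],
        "max_depth": [2, 3],
        "learning_rate": [0.05, 0.1],
    },
    cv=5,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)  # best config and its MAE
```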

Posted 4 days ago

Apply


0 years

0 Lacs

India

Remote

Job Title: Machine Learning Developer Company: Lead India Location: Remote Job Type: Full-Time Salary: ₹3.5 LPA About Lead India: Lead India is a forward-thinking organization focused on creating social impact through technology, innovation, and data-driven solutions. We believe in empowering individuals and building platforms that make governance more participatory and transparent. Job Summary: We are looking for a Machine Learning Developer to join our remote team. You will be responsible for building and deploying predictive models, working with large datasets, and delivering intelligent solutions that enhance our platform’s capabilities and user experience. Key Responsibilities: Design and implement machine learning models for classification, regression, and clustering tasks Collect, clean, and preprocess data from various sources Evaluate model performance using appropriate metrics Deploy machine learning models into production environments Collaborate with data engineers, analysts, and software developers Continuously research and implement state-of-the-art ML techniques Maintain documentation for models, experiments, and code Required Skills and Qualifications: Bachelor’s degree in Computer Science, Data Science, or a related field (or equivalent practical experience) Solid understanding of machine learning algorithms and statistical techniques Hands-on experience with Python libraries such as scikit-learn, pandas, NumPy, and matplotlib Familiarity with Jupyter notebooks and experimentation workflows Experience working with datasets using tools like SQL or Excel Strong problem-solving skills and attention to detail Ability to work independently in a remote environment Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch Exposure to cloud-based ML platforms (e.g., AWS SageMaker, Google Vertex AI) Understanding of model deployment using Flask, FastAPI, or Docker Knowledge of natural language processing or computer vision What We Offer: Fixed annual salary of ₹3.5 LPA 100% remote work and flexible hours Opportunity to work on impactful, mission-driven projects using real-world data Supportive and collaborative environment for continuous learning and innovation
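To illustrate the deployment responsibility the posting mentions (FastAPI is one of the tools it names), here is a minimal, hedged serving sketch; the model file and feature schema are placeholders.

```python
# Hedged sketch of serving a trained model with FastAPI. The model file
# (model.joblib) and the flat feature-vector schema are assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained model

class Features(BaseModel):
    values: list[float]  # one flat feature vector per request

@app.post("/predict")
def predict(features: Features):
    pred = model.predict(np.array([features.values]))
    return {"prediction": pred.tolist()}
```

Assuming the file is saved as main.py, it can be run locally with `uvicorn main:app` and exercised by POSTing feature vectors to /predict.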

Posted 4 days ago

Apply

12.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Qualification BTech degree in computer science, engineering or related field of study or 12+ years of related work experience 7+ years design & implementation experience with large scale data centric distributed applications Professional experience architecting, operating cloud-based solutions with good understanding of core disciplines like compute, networking, storage, security, databases etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc. Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc. Good understanding of Data Warehousing concepts, hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other relevant tools and technologies Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with SageMaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala AWS Professional/Specialty certification or relevant cloud expertise Role Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries. Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all assigned tasks. Ensure high-quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups) Conduct technical training(s)/session(s), write whitepapers/case studies/blogs etc. Experience 10 to 18 years Job Reference Number 12895
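As a small example of orchestrating one of the AWS services this role lists, the sketch below starts an AWS Glue job run from Python with boto3 and checks its state; the job name, region, and arguments are hypothetical.

```python
# Minimal sketch of kicking off an AWS Glue ETL job from Python.
# Job name, region, and arguments are placeholders, not from the posting.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

run = glue.start_job_run(
    JobName="curate-claims",  # hypothetical Glue job
    Arguments={"--target_path": "s3://example-bucket/curated/"},
)

# A real orchestrator would poll until a terminal state; one check shown here.
status = glue.get_job_run(JobName="curate-claims", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED
```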

Posted 4 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description: AI Developer (Agentic AI Frameworks, Computer Vision & LLMs) Location: Hybrid (Bangalore) About the Role We’re seeking an AI Developer who specializes in agentic AI frameworks (LangChain, LangGraph, CrewAI, or equivalents) and who can take both vision and language models from prototype to production. You will lead the design of multi-agent systems that coordinate perception (image classification & extraction), reasoning, and action, while owning the end-to-end deep-learning life-cycle (training, scaling, deployment, and monitoring). Key Responsibilities Agentic AI Frameworks (Primary Focus): Architect and implement multi-agent workflows using LangChain, LangGraph, CrewAI, or similar. Design role hierarchies, state graphs, and tool integrations that enable autonomous data processing, decision-making, and orchestration. Benchmark and optimize agent performance (cost, latency, reliability). Image Classification & Extraction: Build and fine-tune CNN/ViT models for classification, detection, OCR, and structured data extraction. Create scalable data-ingestion, labeling, and augmentation pipelines. LLM Fine-Tuning & Retrieval-Augmented Generation (RAG): Fine-tune open-weight LLMs with LoRA/QLoRA, PEFT; perform SFT, DPO, or RLHF as needed. Implement RAG pipelines using vector databases (FAISS, Weaviate, pgvector) and domain-specific adapters. Deep Learning at Scale: Develop reproducible training workflows in PyTorch/TensorFlow with experiment tracking (MLflow, W&B). Serve models via TorchServe/Triton/KServe on Kubernetes, SageMaker, or GCP Vertex AI. MLOps & Production Excellence: Build robust APIs/micro-services (FastAPI, gRPC). Establish CI/CD, monitoring (Prometheus, Grafana), and automated retraining triggers. Optimize inference on CPU/GPU/Edge with ONNX/TensorRT, quantization, and pruning. Collaboration & Mentorship: Translate product requirements into scalable AI services. Mentor junior engineers, conduct code and experiment reviews, and evangelize best practices. Minimum Qualifications B.S./M.S. in Computer Science, Electrical Engineering, Applied Math, or related discipline. 5+ years building production ML/DL systems with strong Python & Git. Demonstrable expertise in at least one agentic AI framework (LangChain, LangGraph, CrewAI, or comparable). Proven delivery of computer-vision models for image classification/extraction. Hands-on experience fine-tuning LLMs and deploying RAG solutions. Solid understanding of containerization (Docker) and cloud AI stacks (AWS/Azure). Knowledge of distributed training, GPU acceleration, and performance optimization. Job Type: Full-time Pay: Up to ₹1,200,000.00 per year Experience: AI, LLM, RAG: 4 years (Preferred) Vector database, Image classification: 4 years (Preferred) Containerization (Docker): 3 years (Preferred) ML/DL systems with strong Python & Git: 3 years (Preferred) LangChain, LangGraph, CrewAI: 3 years (Preferred) Location: Bangalore, Karnataka (Preferred) Work Location: In person
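To make the RAG retrieval step described above concrete, here is a hedged sketch using FAISS with sentence-transformers embeddings; the documents and embedding model are assumptions chosen for the example, not requirements from the posting.

```python
# Hedged sketch of RAG retrieval: embed documents, index them in FAISS,
# and fetch the nearest neighbour for a query. The embedding model name
# (all-MiniLM-L6-v2) and the documents are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Invoice totals are extracted nightly from scanned PDFs.",
    "Agents escalate low-confidence extractions for human review.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

emb = np.asarray(encoder.encode(docs), dtype="float32")
index = faiss.IndexFlatL2(emb.shape[1])  # exact L2 nearest-neighbour index
index.add(emb)

query = np.asarray(encoder.encode(["How are invoices processed?"]), dtype="float32")
_, ids = index.search(query, 1)
print(docs[ids[0][0]])  # retrieved context to feed into the LLM prompt
```

In a full pipeline, the retrieved passage would be inserted into the LLM prompt, which is the step the agentic frameworks named above orchestrate.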

Posted 4 days ago

Apply