
1552 SageMaker Jobs - Page 21

Set up a job alert
JobPe aggregates results for easy access to applications, but you apply directly on the original job portal.

8.0 years

0 Lacs

Ahmedabad

On-site

Job Description: We are looking for a highly skilled AI/ML Engineer who can design, implement, and optimize machine learning solutions, including traditional models, deep learning architectures, and generative AI systems. The ideal candidate will have strong hands-on experience with MLOps, data pipelines, and LLM optimization. You will collaborate with data engineers and cross-functional teams to develop scalable, ethical, and high-performance AI/ML solutions that drive business impact. Requirements: Key Responsibilities: Develop, implement, and optimize AI/ML models using both traditional machine learning and deep learning techniques. Design and deploy generative AI models for innovative business applications. Collaborate with data engineers to build and maintain high-quality data pipelines and preprocessing workflows. Integrate responsible AI practices to ensure ethical, explainable, and unbiased model behavior. Develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models. Optimize large language models (LLMs) for efficient inference, memory usage, and performance. Work closely with product managers, data scientists, and engineering teams to integrate AI/ML into core business processes. Conduct rigorous testing, validation, and benchmarking of models to ensure accuracy, reliability, and robustness. Stay abreast of the latest research and advancements in AI/ML, LLMs, MLOps, and generative models. Required Skills & Qualifications: Strong foundation in machine learning, deep learning, and statistical modeling techniques. Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks. Proficient in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker. Experience deploying generative AI solutions, including text, image, or audio generation. Understanding of responsible AI concepts, including fairness, accountability, and transparency in model building. Solid experience with MLOps pipelines and continuous delivery of ML models. Proficiency in optimizing transformer models or LLMs for production workloads. Familiarity with cloud services (AWS, GCP, Azure) and containerized deployments (Docker, Kubernetes). Excellent problem-solving and communication skills. Ability to work collaboratively with cross-functional teams. Preferred Qualifications: Experience with data versioning tools like DVC or LakeFS. Exposure to vector databases and retrieval-augmented generation (RAG) pipelines. Knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs. Familiarity with Agile workflows and sprint-based delivery. Contributions to open-source AI/ML projects or published papers in conferences/journals. About Company / Benefits: Lucent Innovation is an 8-year-old company and one of India's premier IT solutions providers, offering web and web application development services to global clients. We are a Shopify Expert and Shopify Plus Partner, with our registered office in India. Lucent Innovation has a highly skilled team of IT professionals. We ensure that our employees have a work-life balance: we follow a 5-day workweek with no night shifts, and employees are encouraged to report to the office on time and leave on time. The company organizes several indoor/outdoor activities throughout the year. Besides these, the company organizes trips for employees. Celebrations are an integral part of our work culture.
We celebrate all major festivals like Diwali, Holi, Lohri, Christmas Day, Navratri (Dandiya), Makar Sankranti, etc. We also enjoy several other important occasions like New Year, Independence Day, Republic Day, Women's Day, Mother's Day, etc. Perks: 5-day workweek, flexible working hours, no hidden policies, a friendly working environment, in-house training, and quarterly and yearly rewards & appreciation.
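
The MLOps tooling this posting names (MLflow, Kubeflow, SageMaker) centers on tracking experiments and versioning model artifacts. As an illustrative aside (not part of the job ad), the sketch below shows a minimal MLflow tracking loop; it assumes mlflow and scikit-learn are installed, uses a public toy dataset, and the run name and hyperparameters are made up.

```python
# Minimal MLflow experiment-tracking sketch (illustrative only, not from the posting).
# Assumes mlflow and scikit-learn; uses the default local tracking store.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):           # hypothetical run name
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                             # record hyperparameters
    mlflow.log_metric("accuracy", acc)                    # record the evaluation metric
    mlflow.sklearn.log_model(model, artifact_path="model")  # store a versioned artifact
```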

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

Role: Senior Data Scientist Work Type: Full Time Work Mode: Hybrid / Remote Work Location: Bangalore (if hybrid) Work Shift: 4PM-12AM IST We are hiring a skilled and innovative Data Scientist for an exciting Conversational AI Bot project. We’re seeking someone with deep knowledge of AI/ML fundamentals, strong hands-on experience with modern NLP tech stacks, and the ability to build and deploy Small Language Models (SLMs). Responsibilities and requirements: 8-10 years of experience in AI/ML along with strong proficiency in Python. Design, develop, and optimize conversational AI bots using advanced NLP techniques. Build and fine-tune Small Language Models (SLMs) and integrate them into conversational flows. Use frameworks like LangChain, LlamaIndex, or similar to develop memory-aware, context-rich dialogue systems. Implement Retrieval-Augmented Generation (RAG) and embedding techniques for intelligent responses. Deploy custom-trained models on AWS (e.g., SageMaker, Lambda) or Azure (e.g., AKS, ML Studio). Proficiency with libraries such as Hugging Face Transformers, PyTorch, and TensorFlow. Experience with MLOps, model monitoring, and continuous improvement of conversational performance.
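
Several requirements above (RAG, embeddings, SLM integration) boil down to retrieving relevant context and feeding it into a model's prompt. The sketch below is an editorial illustration of that retrieval step, not part of the listing; it assumes the sentence-transformers package and the public all-MiniLM-L6-v2 model, and the knowledge snippets and query are invented for the example.

```python
# Illustrative retrieval step for a RAG-style conversational bot (assumptions noted above).
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [  # invented knowledge-base snippets
    "Orders can be cancelled within 24 hours of purchase.",
    "Refunds are processed to the original payment method within 5-7 business days.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query (cosine similarity)."""
    q_vec = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec            # cosine similarity on normalized vectors
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to the SLM/LLM of choice
```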

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Ghaziabad, Uttar Pradesh, India

Remote

Experience: 3.00+ years Salary: INR 1600000-2000000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python. A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and a professional development budget. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
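
Since the role leans heavily on vector stores (Pinecone, Weaviate, FAISS) for agent retrieval, here is a minimal, hedged sketch of the in-memory FAISS variant; it is an editorial example rather than the client's stack, and the embeddings are random placeholders for real model outputs. It assumes the faiss-cpu and numpy packages.

```python
# Minimal FAISS similarity-search sketch (illustrative only; random placeholder embeddings).
import faiss
import numpy as np

dim = 384                                   # assumed embedding dimensionality
rng = np.random.default_rng(0)
doc_embeddings = rng.random((1000, dim), dtype=np.float32)

index = faiss.IndexFlatIP(dim)              # exact inner-product index
faiss.normalize_L2(doc_embeddings)          # normalize so inner product == cosine similarity
index.add(doc_embeddings)

query = rng.random((1, dim), dtype=np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)        # top-5 nearest documents
print(ids[0], scores[0])
```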

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Noida, Uttar Pradesh, India

Remote

Experience: 3.00+ years Salary: INR 1600000-2000000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python. A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and a professional development budget. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

Agra, Uttar Pradesh, India

Remote

Experience: 3.00+ years Salary: INR 1600000-2000000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: SenseCloud) (*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics) What do you need for this opportunity? Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python. A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for: Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who’ve built things that actually work at scale. We’re not just rethinking how procurement analytics is done — we’re redefining it. At SenseCloud, we envision a future where procurement data management and analytics are as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams’ attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you’re ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking. About The Role We’re looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems—think automated research assistants, data-driven copilots, and workflow optimizers. You’ll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers’ ecosystems. What you'll do: Architect & build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift & hallucinations, set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark & iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security. Must-Have Technical Skills 3–5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and a professional development budget. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Associate Project Manager – AI/ML Experience: 8+ years (including 3+ years in project management) Notice Period: Immediate to 15 days Location: Coimbatore / Chennai 🔍 Job Summary We are seeking experienced Associate Project Managers with a strong foundation in AI/ML project delivery. The ideal candidate will have a proven track record of managing cross-functional teams, delivering complex software projects, and driving AI/ML initiatives from conception to deployment. This role requires a blend of project management expertise and technical understanding of machine learning systems, data pipelines, and model lifecycle management. ✅ Required Experience & Skills 📌 Project Management Minimum 3+ years of project management experience, including planning, tracking, and delivering software projects. Strong experience in Agile, Scrum, and SDLC/Waterfall methodologies. Proven ability to manage multiple projects and stakeholders across business and technical teams. Experience in budgeting, vendor negotiation, and resource planning. Proficiency in tools like MS Project, Excel, PowerPoint, ServiceNow, SmartSheet, and Lucidchart. 🤖 AI/ML Technical Exposure (Must-Have) Exposure to AI/ML project lifecycle: data collection, model development, training, validation, deployment, and monitoring. Understanding of ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and data platforms (e.g., Azure ML, AWS SageMaker, Databricks). Familiarity with MLOps practices, model versioning, and CI/CD pipelines for ML. Experience working with data scientists, ML engineers, and DevOps teams to deliver AI/ML solutions. Ability to translate business problems into AI/ML use cases and manage delivery timelines. 🧩 Leadership & Communication Strong leadership, decision-making, and organizational skills. Excellent communication and stakeholder management abilities. Ability to influence and gain buy-in from executive sponsors and cross-functional teams. Experience in building and maintaining relationships with business leaders and technical teams. 🎯 Roles & Responsibilities Lead AI/ML and software development projects from initiation through delivery. Collaborate with data science and engineering teams to define project scope, milestones, and deliverables. Develop and maintain detailed project plans aligned with business goals and technical feasibility. Monitor progress, manage risks, and ensure timely delivery of AI/ML models and software components. Coordinate cross-functional teams and ensure alignment between business, data, and engineering stakeholders. Track project metrics, ROI, and model performance post-deployment. Ensure compliance with data governance, security, and ethical AI standards. Drive continuous improvement in project execution and delivery frameworks. Stay updated on AI/ML trends and contribute to strategic planning for future initiatives.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

THIS JOB ROLE IS ONLY FOR PEOPLE WITH DISABILITIES. Job Title : AM – Analytics Job Type : Permanent, Full-time Function: Information Technology Location : Mumbai, Maharashtra, India About the role As an Assistant Manager – Analytics, you will be responsible for delivering actionable, data-driven insights across multiple functions and geographies. You will manage and execute analytical projects, collaborate with internal stakeholders, and contribute to enhancing the organization’s data capabilities to support strategic and operational decision-making. Key Responsibilities 1. End-to-end project development: - Manage the conceptualization, development, and execution of data science projects across various functions and geographies, ensuring alignment with business objectives and strategies 2. Collaboration: -Foster effective collaboration with internal stakeholders such as marketing, sales, supply chain, and finance to identify data-driven opportunities, address business challenges, and deliver actionable insights 3. Vendor management - Engage with external vendors and partners to leverage specialized expertise, tools, and resources for advanced analytics projects, ensuring quality deliverables within established timelines and budgets 4. Performance monitoring: - Establish metrics and KPIs to assess the performance and impact of data science initiatives, tracking progress against goals and recommending adjustments as necessary to optimize outcomes 5. Continuous improvement: - Stay abreast of industry trends, emerging technologies, and best practices in data science, actively seeking opportunities to enhance the company's analytical capabilities and drive innovation Education: A Bachelor or Master's degree in a quantitative field such as Computer Science, Statistics, Mathematics, Economics, or related disciplines Experience: Minimum 4 to 6 years of experience in data science, preferably within the FMCG industry or related sectors Skills: Must have: Proficient in programming languages such as Python, R, or SQL, with hands-on experience in statistical analysis, machine learning, data visualization, and predictive modelling techniques Strong analytical and problem-solving skills, with the ability to interpret complex data sets, extract actionable insights, and translate findings into business recommendations Excellent verbal and written communication skills, with the ability to effectively convey technical concepts to non-technical stakeholders and influence decision-making at all levels of the organization Sound understanding of FMCG business dynamics, consumer behavior, market trends, and competitive landscape, coupled with a strategic mindset and commercial awareness Proven ability to thrive in a fast-paced and dynamic environment, managing multiple priorities and stakeholders while maintaining a focus on delivering high-quality results Team player with the capability to collaborate across departments, geographies, and cultures with a proactive attitude towards process improvement and the ability to drive change. 
Good to have: Familiarity with MLOps (Machine Learning Operations) principles and practices, including model deployment, monitoring, versioning, and automation, to ensure scalability, reliability, and performance of machine learning models in production environments. Experience using cloud platforms such as Microsoft Azure for machine learning, including familiarity with cloud-based ML services (e.g., Azure Machine Learning, AWS SageMaker, Google AI Platform) and proficiency in deploying and managing machine learning workflows in a cloud environment.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Purpose GMR seeks a visionary, technically skilled leader to transform our SSC into a Global Capability Centre (GCC). This role will align data and AI strategies across GMR’s businesses—airports, energy, infrastructure—to enable enterprise-wide intelligence and automation. ORGANISATION CHART Key Accountabilities Strategy & Vision: Create GMR’s data and AI roadmap. Develop models and governance frameworks to expand from SSC to GCC focusing on data and AI services. Data & AI Architecture: Lead cloud-native data platform development with IT, scaling and maintaining AI/ML applications. Design a unified platform for data engineers and analysts to process and analyse large-scale data. (Experience with Kafka, Snowflake, DBT, Google Pub/Sub or similar is beneficial.) Agentic AI & Automation: Deploy agentic AI systems for autonomous workflows and integrate RPA tools like UiPath and Automation Anywhere for intelligent automation. AI/ML Ops: Establish MLOps frameworks for lifecycle governance and real-time AI deployment globally. Governance & Security: Ensure data governance, compliance (GDPR, ISO 27001), lineage, access controls, and security with the group CISO. ROI: Achieve measurable business impact through insights, APIs, and cost-saving analytics. Talent & Ecosystem: Build a high-performing team and create innovation partnerships with vendors, academia, and AI communities. KEY ACCOUNTABILITIES - Additional Details EXTERNAL INTERACTIONS Consulting and Management Services providers, IT Service Providers / Analyst Firms, Vendors INTERNAL INTERACTIONS GCFO and Finance Council, Procurement Council, IT Council, HR Council (GHROC), GCMO/BCMO FINANCIAL DIMENSIONS None Other Dimensions None Education Qualifications Master's in engineering/computer science; preferred certifications include Databricks, PostgreSQL, or cloud platforms. Relevant Experience 10-15 years' experience in data architecture, cloud, AI/ML, and enterprise automation. Expert in GenAI, LLM orchestration (LangChain, AutoGPT, Haystack, LlamaIndex), MLOps, and RPA platforms. Proven success in scaling AI with clear ROI impact in complex environments. Strong business acumen to influence senior stakeholders. Core technical & ML tools knowledge (languages, frameworks, platforms, etc.): Python, SQL, Hugging Face Transformers, TensorFlow, PyTorch, XGBoost, and platforms like Google AutoML, Amazon SageMaker, Azure ML, and Google Vertex AI. Knowledge of iPaaS platforms, ERP and CRM integrations, data engineering and storage, Databricks, Snowflake, BigQuery, and similar. Bonus capabilities – LLMOps, Edge AI, simulation and synthetic data, privacy-aware AI with encryption, compliance frameworks. Effective team builder and leader for AI and ML engineers, with an aim to grow the AI division to match global standards within 3 years. COMPETENCIES Team Leadership Strategic Leadership Entrepreneurship Breakthrough Thinking Developing self & others Empowering others

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

As a highly experienced and motivated Backend Solution Architect, you will be responsible for leading the design and implementation of robust, scalable, and secure backend systems. Your expertise in Node.js and exposure to Python will be crucial in architecting end-to-end backend solutions using microservices and serverless frameworks. You will play a key role in ensuring scalability, maintainability, and security, while also driving innovation through the integration of emerging technologies like AI/ML. Your primary responsibilities will include designing and optimizing backend architecture, managing AWS-based cloud solutions, integrating AI/ML components, containerizing applications, setting up CI/CD pipelines, designing and optimizing databases, implementing security best practices, developing APIs, monitoring system performance, and providing technical leadership and collaboration with cross-functional teams. To be successful in this role, you should have at least 8 years of backend development experience with a minimum of 4 years as a Solution/Technical Architect. Your expertise in Node.js, AWS services, microservices, event-driven architectures, Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development will be critical. Additionally, hands-on experience with AI/ML workflows, React, Next.js, Angular, and an AWS Solution Architect Certification will be advantageous. At TechAhead, a global digital transformation company, you will have the opportunity to work on cutting-edge AI-first product design thinking and bespoke development solutions. By joining our team, you will contribute to shaping the future of digital innovation worldwide and driving impactful results with advanced AI tools and strategies.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience - 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience - Experience programming with at least one software programming language AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. At AWS AI, we want to make it easy for our customers to train their deep learning workload in the cloud. With Amazon SageMaker, AWS is building customer-facing services to empower data scientists and software engineers in their deep learning endeavors. As our customers rapidly adopt LLMs and Generative AI for their business, we’re building the next-generation AI platform to accelerate their development. We’re seeking a dedicated engineering team lead to drive building our next-generation AI compute platform that’s optimized for LLMs and distributed training. As an SDE, you will be responsible for designing, developing, testing, and deploying distributed machine learning systems and large-scale solutions for our world-wide customer base. In this, you will collaborate closely with a team of ML scientists and customers to influence our overall strategy and define the team’s roadmap. You'll assist in gathering and analyzing business and functional requirements, and translate requirements into technical specifications for robust, scalable, supportable solutions that work well within the overall system architecture. You will also drive the system architecture, spearhead best practices that enable a quality product, and help coach and develop junior engineers. A successful candidate will have an established background in engineering large scale software systems, a strong technical ability, great communication skills, and a motivation to achieve results in a fast paced environment. Key job responsibilities As a Software Development Engineer in the SageMaker team, you will be responsible for: - Developing innovative solutions for supporting Large Language Model training in a cluster of nodes; - Develop and maintain a performant, resilient and fully-managed service built to train large-scale foundation models. - Optimizing distributed training by profiling, identifying bottlenecks and addressing them by improving compute and network performance, as well as finding opportunities for better compute/communication overlap; - You will serve as a key technical resource in the full development cycle, from conception to delivery and maintenance. - You will own delivery of entire piece of the system and serve as technical lead on complex projects using best practice engineering standards - Hire/mentor junior development engineers A day in the life Every day will bring new and exciting challenges on the job while you: * Build and improve next-generation AI platform using Kubernetes as orchestration layer. 
* Collaborate with internal engineering teams, leading technology companies around the world and open source community - PyTorch, NVIDIA/GPU * Create innovative products to run at scale on the AI platform, and see them launched in high volume production About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. About AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
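
For context on what "training large-scale foundation models on SageMaker" involves at the API level, the sketch below shows how a training job is typically launched with the public SageMaker Python SDK. It is an editorial illustration, not Amazon's internal tooling: the role ARN, S3 path, instance settings, and train.py entry point are placeholders, and the distribution key should be checked against the SDK docs for your framework version.

```python
# Illustrative SageMaker training-job launch using the public Python SDK (assumptions noted above).
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"   # placeholder role ARN

estimator = PyTorch(
    entry_point="train.py",               # user-provided training script (not shown)
    source_dir="src",
    role=role,
    framework_version="2.1",
    py_version="py310",
    instance_count=2,                     # multi-node data-parallel training
    instance_type="ml.p4d.24xlarge",      # placeholder GPU instance type
    hyperparameters={"epochs": 3, "per_device_batch_size": 8},
    distribution={"pytorchddp": {"enabled": True}},  # assumption; verify the exact key
    sagemaker_session=session,
)

estimator.fit({"train": "s3://my-bucket/datasets/llm-corpus/"})   # placeholder S3 URI
```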

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Itanagar, Arunachal Pradesh, India

On-site

Roles And Responsibilities Develop scalable data pipelines for machine learning workflows. Work with large datasets, AWS Glue, Lambda, and S3 for ETL tasks. Integrate AI/ML tools and frameworks like SageMaker and PyTorch. Ensure secure, compliant, and optimized data operations. Collaborate with data scientists and DevOps teams. Improve model outputs using RAG and vector databases. Qualifications: 8+ years of experience in AWS data engineering. Proficient in Python/Scala and cloud-native data tools. Experience with LLMs, ML workflows, and re-ranking techniques. Deep understanding of AWS services and AI/ML pipeline integration. Familiarity with SageMaker, Comprehend, and Entity Resolution. Strong analytical, documentation, and presentation skills. Experience in core AWS infrastructure and IAM policies. (ref:hirist.tech)
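
As an illustration of the Glue/Lambda/S3-style ETL work described above (added editorially, not taken from the posting), the sketch below shows a Lambda-shaped handler that reads a CSV object from S3, applies a toy filter, and writes the result back; bucket names, the column, and the event shape assume a standard S3 trigger.

```python
# Minimal S3-triggered ETL sketch in the style of an AWS Lambda handler (placeholders noted above).
import csv
import io
import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-processed-bucket"   # placeholder bucket name

def handler(event, context):
    # Locate the object that triggered the event (standard S3 event shape).
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    # Toy transformation: keep only rows with a non-empty "customer_id" column.
    rows = [r for r in csv.DictReader(io.StringIO(body)) if r.get("customer_id")]
    if rows:
        out = io.StringIO()
        writer = csv.DictWriter(out, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"clean/{key}", Body=out.getvalue())

    return {"rows_kept": len(rows)}
```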

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary We are looking for a skilled and passionate AI/ML Engineer to join our team and contribute to designing, developing, and deploying scalable machine learning models and AI solutions. The ideal candidate will have hands-on experience with data preprocessing, model building, evaluation, and deployment, with a strong foundation in mathematics, statistics, and software development. Key Responsibilities Design and implement machine learning models to solve business problems. Collect, preprocess, and analyze large datasets from various sources. Build, test, and optimize models using frameworks like TensorFlow, PyTorch, or Scikit-learn. Deploy ML models using cloud services (AWS, Azure, GCP) or edge platforms. Collaborate with data engineers, data scientists, and product teams. Monitor model performance and retrain models as necessary. Stay up to date with the latest research and advancements in AI/ML. Create documentation and reports to communicate findings and model results. Skills & Qualifications: Bachelor's/Master's degree in Computer Science, Data Science, AI/ML, or a related field. 2+ years of hands-on experience in building and deploying ML models. Proficiency in Python (preferred), R, or similar languages. Experience with ML/DL frameworks such as TensorFlow, PyTorch, Scikit-learn, XGBoost. Strong grasp of statistics, probability, and algorithms. Familiarity with data engineering tools (e.g., Pandas, Spark, SQL). Experience in model deployment (Docker, Flask, FastAPI, MLflow, etc.). Knowledge of cloud-based ML services (AWS SageMaker, Azure ML, GCP AI Platform). (ref:hirist.tech)
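
To ground the "build, test, and deploy" loop this posting describes, here is a minimal editorial sketch of training, evaluating, and persisting a model with scikit-learn and joblib; the public iris dataset and file name are stand-ins for real business data.

```python
# Minimal train-evaluate-persist sketch (illustrative only; toy dataset and placeholder file name).
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)

model = GradientBoostingClassifier(random_state=7).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Persist the fitted model so it can be packaged (e.g. into a Docker image) and
# served later via Flask/FastAPI or a managed cloud endpoint.
joblib.dump(model, "model.joblib")
```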

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site

Job Title: AI/ML Engineer Job Summary We are seeking a talented and passionate AI/ML Engineer with at least 3 years of experience to join our growing data science and machine learning team. The ideal candidate will have hands-on experience in building and deploying machine learning models, data preprocessing, and working with real-world datasets. You will collaborate with cross-functional teams to develop intelligent systems that drive business value. Key Responsibilities Design, develop, and deploy machine learning models for various business use cases. Analyze large and complex datasets to extract meaningful insights. Implement data preprocessing, feature engineering, and model evaluation pipelines. Work with product and engineering teams to integrate ML models into production environments. Conduct research to stay up to date with the latest ML and AI trends and technologies. Monitor and improve model performance over time. Required Qualifications Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field. Minimum 3 years of hands-on experience in building and deploying machine learning models. Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost. Experience with training, fine-tuning, and evaluating ML models in real-world applications. Proficiency in Large Language Models (LLMs), including experience using or fine-tuning models like BERT, GPT, LLaMA, or open-source transformers. Experience with model deployment, serving ML models via REST APIs or microservices using frameworks like FastAPI, Flask, or TorchServe. Familiarity with model lifecycle management tools such as MLflow, Weights & Biases, or Kubeflow. Understanding of cloud-based ML infrastructure (AWS SageMaker, Google Vertex AI, Azure ML, etc.). Ability to work with large-scale datasets, perform feature engineering, and optimize model performance. Strong communication skills and the ability to work collaboratively in cross-functional teams. (ref:hirist.tech)
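
The posting asks for experience serving models via REST APIs with frameworks like FastAPI. As an editorial illustration (not the employer's code), the sketch below wraps a small public Hugging Face model in a FastAPI endpoint; distilgpt2 stands in for the larger LLMs named in the ad, and the route and file names are assumptions.

```python
# Illustrative FastAPI wrapper around a Hugging Face text-generation pipeline (assumptions noted above).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")   # small public stand-in model
app = FastAPI()

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(prompt: Prompt):
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens, num_return_sequences=1)
    return {"completion": out[0]["generated_text"]}

# Run with: uvicorn llm_service:app --reload   (assuming this file is saved as llm_service.py)
```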

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

Remote

Senior/Lead Data Scientist Location: Gurugram, Haryana, India Department: Technology Type: Full-time | Partially remote Apply by: No close date About Us We turn customer challenges into growth opportunities. Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation, and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners. Job Summary We are seeking an experienced Data Scientist III to join our Expert Tagging team. This role is responsible for developing, optimizing, and supporting machine learning (ML) models that power expert and expertise recommendations. The ideal candidate will have expertise in ML model development, NLP, recommendation systems, and cloud-based ML pipelines within an AWS-based environment. Key Responsibilities Design, develop, and optimize ML models for expert and expertise recommendation using state-of-the-art techniques. Implement Natural Language Processing (NLP) algorithms for entity recognition, topic modeling, and semantic search. Develop and maintain ML pipelines for model training, evaluation, deployment, and monitoring in AWS. Work with structured and unstructured data, performing feature engineering and data preprocessing to improve model accuracy. Deploy and manage ML models using AWS SageMaker, Lambda, Step Functions, and API Gateway. Collaborate with data engineers and software developers to integrate ML models into production applications. Implement A/B testing and model performance evaluation metrics to ensure optimal model effectiveness. Optimize ML models for scalability, performance, and cost-efficiency in a cloud environment. Ensure ML solutions adhere to security, privacy, and compliance best practices. Stay up to date with emerging trends in ML, AI, and data science, continuously improving models and methodologies. What We Offer Professional development and mentorship. Hybrid work mode with a remote-friendly workplace (Great Place To Work Certified 6 times in a row). Health and family insurance. 40 leaves per year along with maternity & paternity leaves. Wellness, meditation, and counseling sessions.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for a driven individual with financial knowledge and an analytical mindset. The candidate should be a motivated team player who can maintain efficiency and accuracy when multitasking. The key to being a strong candidate for this role is experience in financial services and a proven understanding of products, along with strong written and verbal communication skills to interact with CSU/Field RPs. Key Responsibilities Working with Surveillance internal teams and business partners to define and document business requirements. Engage business counterparts to ensure solutions are appropriate as per business requirements and level of readiness. Translate business requirements into solutions. Perform and deliver on complex ad-hoc business analysis requests. Translate analytic output into understandable and actionable business knowledge. Coordinate and prioritize business needs in a matrix management environment. Document and communicate results and recommendations to external and internal teams. Required Qualifications 4-6 years of experience in the analytics industry. Financial services experience required. Strong quantitative/analytical/programming and problem-solving skills. Excellent knowledge of MS Excel, PowerPoint, and Word. Highly motivated self-starter with excellent verbal and written communication skills. Ability to work effectively in a team environment on multiple projects and drive results through direct and indirect influence. The candidate should be willing to learn tools like Python, SQL, Power Apps, and Power BI. Series 7 or SIE preferred. Preferred Qualifications Experience with AWS infrastructure, with experience on and knowledge of tools like SageMaker and Athena. Python programming, SQL, and data manipulation skills. About Our Company Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm's focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You'll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law. Full-Time/Part-Time Full time Timings (2:00p-10:30p) India Business Unit AWMPO AWMP&S President's Office Job Family Group Legal Affairs
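
Since the preferred qualifications mention Athena alongside Python and SQL, the sketch below shows, purely as an editorial illustration, how an Athena query can be run from Python with boto3; the database, table, and S3 output location are placeholders, and production code would add pagination and timeouts.

```python
# Illustrative Athena query runner using boto3 (placeholder names; simplified polling).
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")   # placeholder region

def run_query(sql: str) -> list[dict]:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "surveillance_db"},              # placeholder database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder S3 path
    )["QueryExecutionId"]

    # Poll until the query finishes (simplified; add a timeout in real code).
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    header = [c["VarCharValue"] for c in rows[0]["Data"]]
    return [dict(zip(header, [c.get("VarCharValue") for c in r["Data"]])) for r in rows[1:]]

print(run_query("SELECT alert_type, COUNT(*) AS n FROM alerts GROUP BY alert_type LIMIT 10"))  # placeholder SQL
```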

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description Join us and drive the design and deployment of AI/ML frameworks revolutionizing telecom services. As a key member of our team, you will architect and build scalable, secure AI systems for service assurance, orchestration, and fulfillment, working directly with network experts to drive business impact. You will be responsible for defining architecture blueprints, selecting the right tools and platforms, and guiding cross-functional teams to deliver scalable AI systems. This role offers significant growth potential, mentorship opportunities, and the chance to shape the future of telecoms using the latest AI technologies and platforms. Key Responsibilities HOW YOU WILL CONTRIBUTE AND WHAT YOU WILL LEARN Design end-to-end AI architecture tailored to telecom services business functions (e.g., Service assurance, Orchestration and Fulfilment). Define data strategy and AI workflows including Inventory Model, ETL, model training, deployment, and monitoring. Evaluate and select AI platforms, tools, and frameworks suited for telecom-scale workloads for development and testing of Inventory services solutions Work closely with telecom network experts and Architects to align AI initiatives with business goals. Ensure scalability, performance, and security in AI systems across hybrid/multi-cloud environments. Mentor AI developers Key Skills And Experience You have: 10+ years' experience in AI/ML design and deployment with a Graduation or equivalent degree. Practical Experience on AI/ML techniques and scalable architecture design for telecom operations, inventory management, and ETL. Exposure to data platforms (Kafka, Spark, Hadoop), model orchestration (Kubeflow, MLflow), and cloud-native deployment (AWS Sagemaker, Azure ML). Proficient in programming (Python, Java) and DevOps/MLOps best practices. It will be nice if you had: Worked with any of the LLM models (llama family) and LLM agent frameworks like LangChain / CrewAI / AutoGen Familiarity with telecom protocols, OSS/BSS platforms, 5G architecture, and NFV/SDN concepts. Excellent communication and stakeholder management skills. About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere Gender-Equality Index by Bloomberg Workplace Pride Global Benchmark At Nokia, we act inclusively and respect the uniqueness of people. Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. 
We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of Senior Principal Consultant - Data Engineer. In this role, we are looking for candidates who have relevant years of experience in designing and developing machine learning and deep learning systems, who have professional software development experience, and who are hands-on in running machine learning tests and experiments and implementing appropriate ML algorithms. Responsibilities Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products. Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers. Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services. Build and implement machine learning models and prototype solutions for proof-of-concept. Scale existing ML models into production on a variety of cloud platforms. Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams. Design and develop data pipelines: Create efficient data pipelines to collect, process, and store large volumes of data from various sources. Implement data solutions: Develop and implement scalable data solutions using technologies like Hadoop, Spark, and SQL databases. Ensure data quality: Monitor and improve data quality by implementing validation processes and error handling. Collaborate with teams: Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions. Optimize performance: Continuously optimize data systems for performance, scalability, and cost-effectiveness. Experience in GenAI projects. Qualifications we seek in you!
Minimum Qualifications / Skills Bachelor%27s degree in computer science engineering, information technology or BSc in Computer Science, Mathematics or similar field Master&rsquos degree is a plus Integration - APIs, micro- services and ETL/ELT patterns DevOps (Good to have) - Ansible, Jenkins, ELK Containerization - Docker, Kubernetes etc Orchestration - Airflow, Step Functions, Ctrl M etc Languages and scripting: Python, Scala Java etc Cloud Services - AWS, GCP, Azure and Cloud Native Analytics and ML tooling - Sagemaker , ML Studio Execution Paradigm - low latency/Streaming, batch Preferred Qualifications/ Skills Data platforms - Big Data (Hadoop, Spark, Hive, Kafka etc.) and Data Warehouse (Teradata, Redshift, BigQuery , Snowflake etc.) Visualization Tools - PowerBI , Tableau Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let&rsquos build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a %27starter kit,%27 paying to apply, or purchasing equipment or training.
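The data-engineering listing above leans on orchestrated pipelines (Airflow, Step Functions, Control-M) feeding ML workloads. Below is a minimal, hypothetical Airflow DAG showing the extract-transform-train shape such a pipeline often takes; the DAG name, task logic, and data are placeholders, and the `schedule` argument assumes a recent Airflow 2.x release (older releases use `schedule_interval`).

```python
# Minimal sketch of a daily ETL + training pipeline in Apache Airflow (placeholder task logic).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull raw records from a source system (e.g., an API or a warehouse table).
    return [{"feature": 1.0, "label": 0}, {"feature": 2.0, "label": 1}]


def transform(**context):
    # Placeholder: read the upstream output via XCom and apply cleaning/feature engineering.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [r for r in rows if r["feature"] is not None]


def train(**context):
    # Placeholder: hand the prepared data to a training job (locally or on a managed service).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"training on {len(rows)} rows")


with DAG(
    dag_id="etl_train_pipeline",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    train_task = PythonOperator(task_id="train", python_callable=train)

    extract_task >> transform_task >> train_task
```

In practice the XCom hand-off would be replaced by durable storage (object store or warehouse tables), with the training step delegated to a managed service such as SageMaker.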

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us on our website and social channels.
Inviting applications for the role of Principal Consultant, AI Engineer!
In this role, we are looking for candidates with relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and the ability to implement appropriate AI solutions with GenAI.
Responsibilities
Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging a cloud tech stack and third-party products.
Close the gap between AI research and production to create ground-breaking new products and features and solve problems for our customers with GenAI.
Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services.
Build and implement machine learning models and prototype solutions for proof-of-concept.
Scale existing AI models into production on a variety of cloud platforms with GenAI.
Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams.
Qualifications we seek in you!
Minimum Qualifications / Skills
Bachelor's degree in computer science engineering, information technology, or BSc in Computer Science, Mathematics, or a similar field; a Master's degree is a plus.
Integration - APIs, microservices, and ETL/ELT patterns.
DevOps (good to have) - Ansible, Jenkins, ELK.
Containerization - Docker, Kubernetes, etc.
Orchestration - Airflow, Step Functions, Control-M, etc.
Languages and scripting - Python, Scala, Java, etc.
Cloud services - AWS, GCP, Azure, and cloud-native.
Analytics and AI tooling - SageMaker, GenAI.
Execution paradigm - low latency/streaming and batch.
Ensure GenAI outputs are contextually relevant; familiarity with Generative AI technologies; design and implement GenAI solutions (a minimal retrieval-augmented generation sketch follows this listing).
Collaborate with service line teams to design, implement, and manage GenAI solutions.
Preferred Qualifications / Skills
Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.).
AI and GenAI tools; certifications in AI/ML or GenAI.
Familiarity with generative models, prompt engineering, and fine-tuning techniques to develop innovative AI solutions.
Designing, developing, and implementing solutions tailored to meet client needs.
Understanding business requirements and translating them into technical solutions using GenAI.
Why join Genpact
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
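The AI Engineer listing above emphasizes GenAI solutions whose outputs stay contextually relevant, which in practice is often achieved with retrieval-augmented generation. The sketch below is a minimal, framework-free illustration of the retrieval step and prompt assembly, assuming a sentence-transformers embedding model is available; the document snippets and model name are placeholders, and the final LLM call is deliberately left as a stub.

```python
# Minimal RAG sketch: embed documents, retrieve the closest ones, and assemble a grounded prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are processed within 5 business days.",   # placeholder knowledge-base snippets
    "Refund requests require a signed approval form.",
    "Contracts are renewed annually in March.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, commonly used embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)


def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity on unit vectors)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]


def build_prompt(question: str) -> str:
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


prompt = build_prompt("How long does invoice processing take?")
print(prompt)
# In a real system, `prompt` would be sent to an LLM endpoint, and the corpus would live in a vector database.
```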

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Panaji, Goa, India

On-site

About the Project
We are seeking a brilliant and innovative Data Scientist to join the team building "a Stealth Prop-tech Startup," a groundbreaking digital real estate platform in Dubai. This is a complex initiative to build a comprehensive ecosystem integrating long-term sales, short-term stays, and advanced technologies including AI/ML, data analytics, Web3/blockchain, and conversational AI. You will be at the heart of our intelligence engine, transforming vast datasets into the predictive models and insights that will define our competitive edge. This is a pivotal role in a high-impact project, offering the chance to work on challenging problems in the PropTech space and see your models directly influence the user experience and business strategy.
Job Summary
As a Data Scientist, you will be responsible for designing, developing, and deploying the machine learning models that power the platform's most innovative features. You will work on everything from creating a proprietary property valuation model ("TruValue UAE") to building a sophisticated recommendation engine and forecasting market trends. You will collaborate closely with backend engineers, product managers, and business stakeholders to leverage our unique data assets, driving personalization, market intelligence, and strategic decision-making across the platform.
Key Responsibilities
Design, train, and deploy machine learning models for the "TruValue UAE" Automated Valuation Model (AVM) to predict property values.
Develop and implement a personalization and recommendation engine to suggest relevant properties to users based on their behavior and preferences.
Analyze large, complex datasets to identify key business insights, user behavior patterns, and real estate market trends.
Build predictive models to forecast metrics such as user churn, rental yield, and neighborhood demand dynamics.
Collaborate with the backend engineering team to integrate ML models into the production environment via scalable APIs.
Work with the product team to define data-driven hypotheses and conduct experiments to improve platform features.
Communicate complex findings and the results of analyses to non-technical stakeholders through clear visualizations and reports.
Contribute to the design and development of the big data infrastructure and MLOps pipelines.
Required Skills and Experience
3-5+ years of hands-on experience as a Data Scientist, with a proven track record of building and deploying machine learning models in a production environment.
A Master's degree or PhD in a quantitative field such as Computer Science, Statistics, Mathematics, or Engineering.
Expert proficiency in Python and its data science ecosystem (e.g., Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch).
Strong practical knowledge of various machine learning techniques, including regression, classification, clustering, and recommendation systems.
Advanced SQL skills and experience working with relational databases (e.g., PostgreSQL).
Experience with data visualization tools (e.g., Matplotlib, Seaborn, Tableau).
Preferred Qualifications
Experience in the PropTech (Property Technology) or FinTech sectors is highly desirable.
Direct experience building Automated Valuation Models (AVMs) or similar price prediction models.
Experience working with cloud-based data platforms and ML services (e.g., AWS SageMaker, Google AI Platform, BigQuery, Redshift).
Familiarity with MLOps principles and tools for model deployment and monitoring.
Experience with time-series analysis and forecasting.
Experience with Natural Language Processing (NLP) techniques.
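The Data Scientist listing above centers on an Automated Valuation Model that predicts property values from listing attributes. Purely as an illustration, the sketch below trains a gradient-boosted regressor on synthetic property features; the feature names and data are invented placeholders, not the platform's actual schema.

```python
# Minimal AVM-style sketch: predict property price from a few synthetic listing features.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "area_sqft": rng.uniform(400, 4_000, n),        # hypothetical features
    "bedrooms": rng.integers(0, 6, n),
    "floor": rng.integers(1, 40, n),
    "dist_to_metro_km": rng.uniform(0.1, 10, n),
})
# Synthetic price with noise, only to make the example runnable end to end.
df["price_aed"] = (
    1_200 * df["area_sqft"] + 50_000 * df["bedrooms"]
    - 20_000 * df["dist_to_metro_km"] + rng.normal(0, 50_000, n)
)

X, y = df.drop(columns="price_aed"), df["price_aed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"holdout MAPE: {mape:.2%}")  # valuation models are often tracked on percentage error
```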

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Panaji, Goa, India

On-site

About the Project
We are seeking a highly skilled and pragmatic AI/ML Engineer to join the team building "a Stealth Prop-tech startup," a groundbreaking digital real estate platform in Dubai. This is a complex initiative to build a comprehensive ecosystem integrating long-term sales, short-term stays, and advanced technologies including AI/ML, data analytics, Web3/blockchain, and conversational AI. You will be responsible for operationalizing the machine learning models that power our most innovative features, ensuring they are scalable, reliable, and performant. This is a crucial engineering role in a high-impact project, offering the chance to build the production infrastructure for cutting-edge AI in the PropTech space.
Job Summary
As an AI/ML Engineer, you will bridge the gap between data science and software engineering. You will be responsible for taking the models developed by our data scientists and deploying them into our production environment. Your work will involve building robust data pipelines, creating scalable training and inference systems, and developing the MLOps infrastructure to monitor and maintain our models. You will collaborate closely with data scientists, backend developers, and product managers to ensure our AI-driven features are delivered efficiently and reliably to our users.
Key Responsibilities
Design, build, and maintain scalable infrastructure for training and deploying machine learning models at scale.
Operationalize ML models, including the "TruValue UAE" AVM and the property recommendation engine, by creating robust, low-latency APIs for production use (a minimal serving sketch follows this listing).
Develop and manage data pipelines (ETL) to feed our machine learning models with clean, reliable data for both training and real-time inference.
Implement and manage the MLOps lifecycle, including CI/CD for models, versioning, monitoring for model drift, and automated retraining.
Optimize the performance of machine learning models for speed and cost-efficiency in a cloud environment.
Collaborate with backend engineers to seamlessly integrate ML services with the core platform architecture.
Work with data scientists to understand model requirements and provide engineering expertise to improve model efficacy and feasibility.
Build the technical backend for the AI-powered chatbot, integrating it with NLP services and the core platform data.
Required Skills and Experience
3-5+ years of experience in a Software Engineering, Machine Learning Engineering, or related role.
A Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
Strong software engineering fundamentals with expert proficiency in Python.
Proven experience deploying machine learning models into a production environment on a major cloud platform (AWS, Google Cloud, or Azure).
Hands-on experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
Experience building and managing data pipelines using tools like Apache Airflow, Kubeflow Pipelines, or cloud-native solutions.
Ability to collaborate with cross-functional teams to integrate AI solutions into products.
Experience with cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration (Kubernetes).
Preferred Qualifications
Experience in the PropTech (Property Technology) or FinTech sectors is highly desirable.
Direct experience with MLOps tools and platforms (e.g., MLflow, Kubeflow, AWS SageMaker, Google AI Platform).
Familiarity with big data technologies (e.g., Spark, BigQuery, Redshift).
Experience building real-time machine learning inference systems.
Strong understanding of microservices architecture.
Experience working in a collaborative environment with data scientists.
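The AI/ML Engineer listing above asks for models exposed as robust, low-latency APIs. The sketch below shows one common way to do that with FastAPI, loading a serialized scikit-learn model at startup; the model file name, feature schema, and endpoint path are hypothetical.

```python
# Minimal model-serving sketch with FastAPI (hypothetical model file and feature schema).
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="avm-inference")

# Loaded once at startup so each request only pays for inference, not deserialization.
model = joblib.load("avm_model.joblib")  # placeholder path to a trained regressor


class PropertyFeatures(BaseModel):
    area_sqft: float
    bedrooms: int
    floor: int
    dist_to_metro_km: float


@app.post("/v1/valuation")
def predict(features: PropertyFeatures) -> dict:
    x = np.array([[features.area_sqft, features.bedrooms,
                   features.floor, features.dist_to_metro_km]])
    price = float(model.predict(x)[0])
    return {"estimated_price": price}

# Run locally with:  uvicorn app:app --host 0.0.0.0 --port 8000
```

Keeping the model in process memory and validating inputs with a typed schema is what usually keeps latency low and failure modes predictable; heavier models are typically pushed behind a dedicated inference server instead.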

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

🚀 Why Headout? We're a rocketship: 9-figure revenue, record growth, and profitable With $130M in revenue, guests in 100+ cities, and 18 months of profitability, Headout is the fastest-growing marketplace in the travel industry, and we're just getting started. We've raised $60M+ from top-tier investors and are building a durable company for the long term — because that's what our mission needs and deserves. We're growing, profitable and nowhere near done. What we do is important In an increasingly digital world, there is a desperate need to augment our human experience by getting us to interact with the real world around us and the people in it. At Headout, our mission is to be the easiest, fastest, and most delightful way to head out to a real-life experience — from immersive tours to museums to live events and everything in between. Why now? The foundation is strong. The opportunity ahead is even bigger. We've hit profitability, built momentum, and proven the model — but there's so much more to build. If you're looking to join a company where the trajectory is steep and your impact is real, this is the moment. Our culture Reinventing the travel industry isn't easy, but that's the fun part. We care deeply about ownership, craft, and impact, and we're here to do the best work of our careers. We won't pretend like it's for everyone but if you're a builder who loves solving tough problems, you'll feel right at home. Read more about our unique values here: https://bit.ly/HeadoutPlaybook 👩‍💻 The Role As a Machine Learning Engineer at Headout, you will play a pivotal role in developing AI-powered solutions that enhance our platform and create exceptional experiences for travelers worldwide. At Headout, we firmly believe that intelligent algorithms can transform how people discover and engage with travel experiences. Collaborating closely with multifaceted teams across the organization, you'll design, develop, and deploy sophisticated ML models across various applications including recommendations, search optimization, pricing, and operational efficiency. 🌟 What makes the role stand out? Global Impact: Your algorithms will serve millions of travelers across 190+ countries, optimizing experiences throughout the customer journey - from discovery and decision-making to post-purchase engagement. Diverse AI Applications: Work on a variety of machine learning projects spanning different domains. One day you might be improving our recommendation engine, the next you could be optimizing search rankings or developing forecasting models for operational planning. End-to-End Ownership: Take ML solutions from ideation to production. You'll help identify opportunities where ML can add value, design solutions, implement models, and measure real-world impact. Data-Rich Environment: Leverage rich, multi-dimensional data from user behavior, transaction patterns, content characteristics, and operational metrics to build comprehensive models that drive meaningful outcomes. Tangible Results: See the concrete impact of your work through key business metrics. Your models will contribute to increased conversion rates, enhanced user engagement, optimized operations, and improved customer satisfaction. Technical Innovation: As machine learning and AI technologies evolve, you'll be at the forefront of evaluating and implementing new approaches that keep Headout competitive in a dynamic industry. 
🎯 What skills you need to have
You have a minimum of 4 years of experience in machine learning engineering across different applications such as recommendations, classification, prediction, or natural language processing.
A strong foundation in ML fundamentals and techniques is essential.
Proficiency in Python is a must, with experience in frameworks like TensorFlow, PyTorch, scikit-learn, or similar ML libraries (Hugging Face, XGBoost, etc.).
You have practical experience taking ML models from development to production, including data preprocessing, feature engineering, model training, evaluation, and deployment.
Experience with A/B testing and experimentation frameworks to measure and validate the impact of ML solutions (a minimal test sketch follows this listing).
You possess strong problem-solving skills and the ability to translate business requirements into effective technical implementations.
Your communication skills enable you to work effectively with cross-functional teams and explain complex technical concepts to non-technical stakeholders.
Familiarity with large-scale data processing technologies (Spark, Kafka, Flink, etc.) and cloud-based ML services (Vertex AI, SageMaker, etc.) is a plus.
EEO statement
At Headout, we don't just accept differences; we celebrate them, support them, and thrive on them for the benefit of our employees, our partners, and the community at large. Headout provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age or disability. During the interview process, if you need assistance or an accommodation due to a disability, you may contact the recruiter assigned to your application or email us at life@headout.com.
Privacy policy
Please note that once you apply for this job profile your personal data will be retained for a period of one (1) year. Headout shall process this data for recruitment purposes only. Once the relevant job profile is filled or once the time period of one (1) year from the date of the job application has passed, whichever is later, Headout shall either delete your data or inform you that it shall keep it in its database for future roles. In compliance with the relevant privacy laws, you have the right to request access to your personal data, to request that your personal data be rectified or erased, and to request that the processing of your personal data be restricted. If you have any concerns or questions about the way Headout handles your data, you can contact our Data Protection Officer for more information.
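The Headout listing above calls out A/B testing to validate the impact of ML changes. As a small illustration, the sketch below compares conversion rates between a control and a treatment variant with a two-proportion z-test; the counts are made up.

```python
# Minimal A/B evaluation sketch: two-proportion z-test on conversion counts (made-up numbers).
from math import sqrt

from scipy.stats import norm

# Hypothetical experiment results: (conversions, visitors) per variant.
control = (1_180, 24_000)      # existing ranking model
treatment = (1_295, 24_100)    # candidate ranking model


def two_proportion_ztest(a, b):
    (x1, n1), (x2, n2) = a, b
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))   # two-sided test
    return p1, p2, z, p_value


p1, p2, z, p_value = two_proportion_ztest(control, treatment)
print(f"control CVR={p1:.3%}, treatment CVR={p2:.3%}, z={z:.2f}, p={p_value:.4f}")
# A small p-value suggests the lift is unlikely to be noise; in practice, pre-registered
# sample sizes and guardrail metrics matter as much as the significance test itself.
```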

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a talented and versatile Analytics & AI Specialist to join our dynamic team. This role combines expertise in General Analytics, Artificial Intelligence (AI), Generative AI (GenAI), forecasting techniques, and client management to deliver innovative solutions that drive business success. The ideal candidate will work closely with clients, leverage AI technologies to enhance data-driven decision-making, and apply forecasting models to predict business trends and outcomes.
AI & Machine Learning: Experience with machine learning frameworks and libraries (e.g., TensorFlow, scikit-learn, PyTorch, Keras). Knowledge of Generative AI (GenAI) tools and technologies, including GPT models, GANs (Generative Adversarial Networks), and transformer models. Familiarity with AI cloud platforms (e.g., Google AI, AWS SageMaker, Azure AI).
Forecasting: Expertise in time series forecasting methods (e.g., ARIMA, Exponential Smoothing, Prophet) and machine learning-based forecasting models. Experience applying predictive analytics and building forecasting models for demand, sales, and resource planning.
Data Visualization & Reporting: Expertise in creating interactive reports and dashboards with tools like Tableau, Power BI, or Google Data Studio. Ability to present complex analytics and forecasting results in a clear and compelling way to stakeholders.
Client Management & Communication: Strong client-facing skills with the ability to manage relationships and communicate complex technical concepts to non-technical audiences. Ability to consult and guide clients on best practices for implementing AI-driven solutions. Excellent written and verbal communication skills for client presentations, technical documentation, and report writing.
Additional Skills:
Project Management: Experience managing data analytics projects from inception to completion, ensuring deadlines and objectives are met.
Cloud Platforms: Experience with cloud platforms (AWS, GCP, Azure) for deploying AI models, handling large datasets, and performing distributed computing.
Business Acumen: A strong understanding of business KPIs and the ability to align AI and analytics projects with client business goals.
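The listing above asks for time-series forecasting with methods such as ARIMA or Exponential Smoothing. The sketch below is a minimal illustration using statsmodels' ARIMA on a synthetic monthly demand series; the series and the (1, 1, 1) order are placeholders chosen only for the example, not a recommended configuration.

```python
# Minimal forecasting sketch: fit an ARIMA model on a synthetic monthly series, then forecast 6 steps.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly demand with trend + seasonality + noise, just to make the example run.
rng = np.random.default_rng(7)
idx = pd.date_range("2021-01-01", periods=48, freq="MS")
demand = (
    200 + 2.5 * np.arange(48)                        # trend
    + 30 * np.sin(2 * np.pi * np.arange(48) / 12)    # yearly seasonality
    + rng.normal(0, 10, 48)                          # noise
)
series = pd.Series(demand, index=idx)

model = ARIMA(series, order=(1, 1, 1)).fit()         # order chosen only for illustration
forecast = model.forecast(steps=6)                   # next 6 months
print(forecast.round(1))
```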

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: AI Lead – Generative AI & ML Systems
Key Responsibilities
Generative AI Development: Design and implement LLM-powered solutions and generative AI models for use cases such as predictive analytics, automation workflows, anomaly detection, and intelligent systems.
RAG & LLM Applications: Build and deploy Retrieval-Augmented Generation (RAG) pipelines, structured generation systems, and chat-based assistants tailored to business operations.
Full AI Lifecycle Management: Lead the complete AI lifecycle, from data ingestion and preprocessing to model design, training, testing, deployment, and continuous monitoring.
Optimization & Scalability: Develop high-performance AI/LLM inference pipelines, applying techniques like quantization, pruning, batching, and model distillation to support real-time and memory-constrained environments (a minimal quantized-loading sketch follows this listing).
MLOps & CI/CD Automation: Automate training and deployment workflows using Terraform, GitLab CI, GitHub Actions, or Jenkins, integrating model versioning, drift detection, and compliance monitoring.
Cloud & Deployment: Deploy and manage AI solutions using AWS, Azure, or GCP with containerization tools like Docker and Kubernetes.
AI Governance & Compliance: Ensure model and data governance and adherence to regulatory and ethical standards in production AI deployments.
Stakeholder Collaboration: Work cross-functionally with product managers, data scientists, and engineering teams to align AI outputs with real-world business goals.
Required Skills & Qualifications
Bachelor's degree (B.Tech or higher) in Computer Science, IT, or a related field.
8-12 years of overall experience within an AI team in Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) solution development.
Minimum 2+ years of hands-on experience in Generative AI and LLM-based solutions, including prompt engineering, fine-tuning, and Retrieval-Augmented Generation (RAG) pipelines with full CI/CD integration, monitoring, and observability, delivered as a fully independent contributor.
Proven expertise in both open-source and proprietary Large Language Models (LLMs), including LLaMA, Mistral, Qwen, GPT, Claude, and BERT.
Expertise in C/C++ and Python programming with relevant ML/DL libraries, including TensorFlow, PyTorch, and Hugging Face Transformers.
Experience deploying scalable AI systems in containerized environments using Docker and Kubernetes.
Deep understanding of the MLOps/LLMOps lifecycle, including model versioning, deployment automation, performance monitoring, and drift detection.
Familiarity with CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins) and DevOps for ML workflows.
Working knowledge of Infrastructure-as-Code (IaC) tools like Terraform for cloud resource provisioning and reproducible ML pipelines.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
Experience designing and documenting High-Level Design (HLD) and Low-Level Design (LLD) for ML/GenAI systems, covering data pipelines, model serving, vector search, and observability layers, including component diagrams, network architecture, CI/CD workflows, and tabulated system designs.
Experience provisioning and managing ML infrastructure using Terraform, including compute clusters, vector databases, and LLM inference endpoints across AWS, GCP, and Azure.
Experience beyond notebooks: shipped models with logging, tracing, rollback mechanisms, and cost control strategies, with hands-on ownership of production-grade LLM workflows rather than experimentation alone.
Preferred Qualifications (Good to Have)
Experience with LangChain, LlamaIndex, AutoGen, CrewAI, OpenAI APIs, or building modular LLM agent workflows.
Exposure to multi-agent orchestration, tool-augmented reasoning, or autonomous AI agents and agentic communication patterns with orchestration.
Experience deploying ML/GenAI systems in regulated environments, with established governance, compliance, and Responsible AI frameworks.
Familiarity with AWS data and machine learning services, including Amazon SageMaker, AWS Bedrock, ECS/EKS, and AWS Glue, for building scalable, secure data pipelines and deploying end-to-end AI/ML workflows.
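The AI Lead listing above highlights quantization as one way to fit LLM inference into memory-constrained environments. The sketch below shows a 4-bit quantized load with Hugging Face Transformers; it assumes a CUDA GPU with the transformers, accelerate, and bitsandbytes packages installed, the model ID is only an example, and exact arguments vary across library versions.

```python
# Minimal sketch: loading an LLM with 4-bit quantization for memory-constrained inference.
# Assumptions: CUDA GPU available; transformers + accelerate + bitsandbytes installed;
# model ID is an example placeholder, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model; substitute one licensed for your use

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # store weights in 4-bit NF4 to cut memory versus fp16
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                 # let accelerate place layers on available devices
)

inputs = tokenizer("Summarize the incident ticket in one sentence:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```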

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
We are seeking an experienced Cloud Delivery Engineer to work horizontally across our organization, collaborating with Cloud Engineering, Cloud Operations, and cross-platform teams. This role is crucial in ensuring that cloud resources are delivered according to established standards, with a focus on both Azure and AWS platforms. The Cloud Delivery Engineer will be responsible for the delivery of Data and AI platforms.
Responsibilities
We are seeking a talented AWS artificial intelligence specialist with the following skills:
Provision cloud resources, ensuring they adhere to approved architecture and organizational standards on both Azure and AWS.
Collaborate closely with Cloud Engineering, Cloud Operations, and cross-platform teams to ensure seamless delivery of cloud resources on both Azure and AWS.
Architect, design, develop, and implement AI models and algorithms to address business challenges and improve processes.
Experience implementing security principles and guardrails for AI infrastructure.
Identify and mitigate risks associated with cloud deployments and resource management in multi-cloud environments.
Collaborate with cross-functional teams of data scientists, software developers, and business stakeholders to understand requirements and translate them into AI solutions.
Create and maintain documentation for AI models and algorithms as knowledge base articles.
Participate in capacity planning and cost optimization initiatives for multi-cloud resources.
Experience working with a vector database (DataStax HCD).
Conduct experiments to test and compare the effectiveness of different AI approaches.
Troubleshoot and resolve issues related to AI systems.
Deploy AI solutions into production environments and ensure their integration with existing systems.
Monitor and evaluate the performance of AI systems, adjusting as necessary to improve outcomes.
Research and stay updated on the latest AI and machine learning technology advancements.
Present findings and recommendations to stakeholders, including technical and non-technical audiences.
Provide technical expertise and guidance on AI-related projects and initiatives.
Experience creating deployments for intelligent search, intelligent document processing, media intelligence, forecasting, AI for DevOps, identity verification, and content moderation.
Experience with Amazon Bedrock, SageMaker, and foundational AWS resources across compute, networking, security, App Runner, and Lambda (a minimal Bedrock invocation sketch follows this listing).
Qualifications
Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred.
8+ years of experience in IT, with at least 4 years focused on cloud technologies, including substantial experience with both AWS and Azure.
Strong understanding of AWS and Azure services, architectures, and best practices, particularly for Data and AI platforms.
Certifications in both AWS (e.g., AWS Certified Solutions Architect - Professional) and Azure (e.g., Azure Solutions Architect Expert).
Experience working with multiple teams and cloud platforms, with a demonstrated ability to work horizontally across different teams and platforms.
Strong knowledge of cloud security principles and compliance requirements in multi-cloud environments.
Working experience with DevOps practices and tools applicable to both Azure and AWS.
Experience with infrastructure as code (e.g., ARM templates, CloudFormation, Terraform).
Proficiency in scripting languages (e.g., PowerShell, Bash, Python).
Solid understanding of networking concepts and their implementation in Azure and AWS.
Preferred: cloud architecture/specialist experience; experience with hybrid cloud architectures; familiarity with containerization technologies (e.g., Docker, Kubernetes) on both Azure and AWS.
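The Cloud Delivery listing above names Amazon Bedrock among the core AWS services. The sketch below is a minimal boto3 invocation example; it assumes AWS credentials with Bedrock access in the chosen region, the model ID is only an example of what might be enabled in an account, and the request/response body schema is model-specific (the one shown follows the Anthropic-style messages format).

```python
# Minimal sketch: invoking a foundation model on Amazon Bedrock with boto3.
# Assumptions: AWS credentials with Bedrock access; example model ID; request body
# schema varies by model family and version.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "Classify this support ticket as billing, technical, or other: ..."}],
        }
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID; check what is enabled in your account
    contentType="application/json",
    accept="application/json",
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result)  # the response shape is also model-specific; generated text is nested inside the payload
```

Guardrails, logging, and per-model cost controls would normally sit around a call like this before it is exposed to any production workload.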

Posted 3 weeks ago

Apply

40.0 years

4 - 10 Lacs

Hyderābād

On-site

India - Hyderabad JOB ID: R-219181 ADDITIONAL LOCATIONS: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: Jul. 08, 2025 CATEGORY: Information Systems
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.
ABOUT THE ROLE
Role Description: We are seeking an experienced and visionary individual to play a pivotal role in internal software development at Amgen India. This role is critical in driving the strategy, development, and implementation of software solutions on the global commercial side. You will be responsible for setting strategic direction, clearly defining operations, delivering reusable software solutions for business and engineering teams, and ensuring the successful adoption of internal platforms across Amgen. The successful candidate will lead a team of engineers, product managers, and architects to deliver software applications that enhance our products and services.
Roles & Responsibilities: The primary responsibilities of this key leadership position will include, but are not limited to, the following:
Develop a strategic vision for software platform services in alignment with the company's overall strategy.
Provide support to the Amgen Technology Executive Leadership and oversee the development of a Community of Practice for software platforms.
Foster a culture of innovation; identify and implement software solutions that drive value for our stakeholders.
Ensure the adoption of best practices and the latest advancements in technologies across functions and business.
Drive the design, development and deployment of scalable software platforms and reusable accelerators that enable and increase the value of application and product teams across the enterprise.
Ensure the security and reliability of software platforms and their seamless integration with existing systems.
Drive the implementation of software platform capabilities, ensuring timely delivery within scope and budget.
Collaborate with cross-functional teams to understand demand and develop solutions to meet business needs.
Develop and enforce governance frameworks to manage the usage and adoption of software platforms.
Lead and mentor a team of engineers and architects and foster a culture of continuous development and learning.
Monitor team performance and present updates to executive leadership and key stakeholders.
Functional Skills:
Must-Have Skills:
18 to 23 years of experience in full-stack software engineering and cloud computing, with a robust blend of technical expertise, strategic thinking and leadership abilities focused on software development.
Demonstrated experience in managing large-scale technology projects and teams, with a track record of delivering innovative and impactful solutions.
Hands-on experience with the latest frameworks and libraries, such as LangChain, LlamaIndex, agentic frameworks, vector databases, and LLMs; experienced with CI/CD and DevOps/MLOps.
Hands-on experience with cloud computing services such as AWS Lambda, container technology, SQL and NoSQL databases, API Gateway, SageMaker, Bedrock, etc. (a minimal Lambda-to-SageMaker sketch follows this listing).
Good-to-Have Skills:
Proficiency in Python, JavaScript, and SQL; hands-on experience with full-stack software development, NoSQL databases, Docker containers, container orchestration systems, automated testing, and CI/CD DevOps.
Build a high-performing team of software development experts, foster a culture of innovation, and ensure employee growth and satisfaction to drive long-term organizational success.
Identify opportunities for process improvements and drive initiatives to enhance the efficiency of the development lifecycle.
Stay updated with the latest industry trends and advancements in software technology, provide strategic leadership, and explore new opportunities for innovation.
Be an interdisciplinary team leader who is innovative, accountable, reliable, and able to thrive in a constantly evolving environment.
Facilitate technical discussions and decision-making processes within the team.
Preferred Professional Certifications:
Cloud platform certification (AWS, Azure, GCP), specialized in solution architecture.
DevOps platform certification (AWS, Azure, GCP, Databricks).
Soft Skills:
Exceptional communication and people skills to effectively manage stakeholder relationships and build new partnerships.
Excellent verbal and written communication skills, active listening skills, and attention to detail; strong process/business writing skills.
Experience in people management and a passion for mentorship, culture, and fostering the development of talent.
Ability to translate business and stakeholder feedback into accurate and efficient processes using clear language and format.
Strong analytical/critical-thinking and decision-making abilities.
Must be flexible and able to manage multiple activities and priorities with minimal direction in a rapidly changing and demanding environment.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
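The Amgen listing above pairs AWS Lambda and API Gateway with SageMaker endpoints. The sketch below shows one common pattern: a Lambda handler that forwards an API Gateway request to a SageMaker real-time endpoint. The endpoint name, environment variable, and payload schema are hypothetical, and an IAM role with sagemaker:InvokeEndpoint permission is assumed.

```python
# Minimal sketch: an AWS Lambda handler that forwards a request to a SageMaker real-time endpoint.
# Assumptions: hypothetical endpoint name; Lambda role has sagemaker:InvokeEndpoint permission;
# the caller sends a JSON body the deployed model understands.
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "demo-endpoint")  # placeholder endpoint name


def lambda_handler(event, context):
    # Expecting an API Gateway proxy event whose body is a JSON document of model features.
    payload = event.get("body") or "{}"

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction}),
    }
```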

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies