
671 Drift Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

2 - 7 Lacs

Chennai

On-site

An Amazing Career Opportunity for an AI/ML Engineer
Location: Chennai, India (Hybrid) | Job ID: 39582

Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who will be responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. We are a leading company and the trusted source for innovative products, solutions and services that help millions of customers around the globe create, manage and use secure identities.

Who are we?
HID powers the trusted identities of the world's people, places, and things, allowing people to transact safely, work productively and travel freely. We are a high-tech software company headquartered in Austin, TX, with over 4,000 employees worldwide. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/

About HID Global, Chennai
HID Global powers the trusted identities of the world's people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com. HID Global is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two Engineering Centres (Bangalore and Chennai) with over 200 engineering staff. The Global Engineering Team is based in Chennai, and one of the Business Unit Engineering teams is based in Bangalore.

Physical Access Control Solutions (PACS)
HID's Physical Access Control Solutions Business Area: the HID PACS Business Unit focuses on the growth of new and existing clients, leveraging the latest card and reader technologies to solve our clients' security challenges. Other areas of focus include authentication, card subsystems, card encoding, biometrics, location services and all other aspects of a physical access control infrastructure.

Qualifications
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Roles & Responsibilities
Design, develop, and deploy robust, scalable AI/ML models in production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures. Optimize models for performance, latency and scalability. Build data pipelines and workflows to support model training and evaluation. Conduct research and experimentation on state-of-the-art techniques (DL, NLP, time series, CV). Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning and retraining. Lead code reviews and architecture discussions, and mentor junior and peer engineers. Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency. Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX. Ensure data integrity, quality, and preprocessing best practices for AI/ML model development. Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Bring a strong analytical and problem-solving mindset.

Technical Requirements
Strong expertise in AI/ML engineering and software development. Strong experience with RAG architecture and vector databases. Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration. Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation and deployment. Experience with production-grade ML systems (model serving, APIs, pipelines). Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.). Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection and time series forecasting. Hands-on software engineering with knowledge of version control, testing and CI/CD. Hands-on experience deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow. Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection. Proficiency in scaling AI solutions in cloud environments (AWS, Azure and GCP). Experience in data preprocessing, feature engineering, and dimensionality reduction. Exposure to data privacy, compliance and secure ML practices.

Education and/or Experience
Bachelor's or master's degree in computer science, information technology, or AI/ML/data science. 3+ years of hands-on experience in AI/ML development, deployment and optimization. Experience leading AI/ML teams and mentoring junior engineers.

Why apply?
Empowerment: You'll work as part of a global team in a flexible work environment, learning and enhancing your expertise.
We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don't need to check all the boxes. If you have most of the skills and experience, we want you to apply. Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers. Integrity: You are results-oriented, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted. This opportunity may be open to flexible working arrangements. HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation. We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it. When you join our HID team, you'll also be part of the ASSA ABLOY Group, the global leader in access solutions. You'll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it's important to us to have diverse, inclusive teams, and we value different perspectives and experiences. #LI-HIDGlobal
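The posting above asks for hands-on experience building Retrieval-Augmented Generation (RAG) pipelines over vector stores. As a rough illustration of what that involves, here is a minimal retrieval sketch using sentence-transformers and FAISS; the embedding model, the toy documents, and the prompt format are illustrative assumptions, not anything specified in the job description.

```python
# Minimal RAG retrieval sketch: embed documents, index them in FAISS,
# and assemble a prompt from the top-k matches for a query.
# Assumes `pip install sentence-transformers faiss-cpu`; model choice is illustrative.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Readers use OSDP for secure reader-to-controller communication.",
    "Mobile credentials can be provisioned over NFC or BLE.",
    "Access events are streamed to the monitoring service for auditing.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # small, CPU-friendly embedding model
doc_vectors = encoder.encode(documents, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vectors.shape[1])     # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vectors)

def retrieve(query: str, k: int = 2) -> list[str]:
    query_vec = encoder.encode([query], convert_to_numpy=True).astype("float32")
    _, idx = index.search(query_vec, k)
    return [documents[i] for i in idx[0]]

question = "How do mobile credentials reach the reader?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM of your choice
```

In a production system of the kind described, the in-memory list and FAISS index would be replaced by a managed vector store, and retrieval quality, latency, and drift would be monitored as part of the MLOps practices the posting mentions.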

Posted 3 weeks ago

Apply

3.0 years

16 - 20 Lacs

India

Remote

Experience: 3+ years
Salary: INR 16,00,000 - 20,00,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by SenseCloud)
(Note: This is a requirement for one of Uplers' clients, a Seed-Funded B2B SaaS Company in Procurement Analytics.)

What do you need for this opportunity?
Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:

Join the Team Revolutionizing Procurement Analytics at SenseCloud
Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done; we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers. Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use. Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs). Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies. Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features. Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders. Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
3-5 years of software engineering or ML experience in production environments. Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus. Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.). Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills
Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration and ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate, without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves
Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research.

Why Join Us?
Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. A collaborative culture that values experimentation, continuous learning, and work-life balance. Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload an updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
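The role above centres on LLM-powered agents that call tools against enterprise data. The frameworks the posting names (LangChain, LangGraph, AutoGen, etc.) all wrap some variant of the loop sketched below; this version is framework-free with a stubbed `call_llm` function, so every name in it is an assumption for illustration rather than any particular vendor's API.

```python
# Framework-agnostic agent loop: the model either requests a tool or answers.
# `call_llm` is a placeholder for whichever hosted or open-source model you use.
import json
from typing import Callable

def get_po_status(po_number: str) -> str:
    """Toy 'enterprise' tool; in practice this would query an ERP or Snowflake."""
    return f"Purchase order {po_number} is approved and awaiting delivery."

TOOLS: dict[str, Callable[[str], str]] = {"get_po_status": get_po_status}

def call_llm(messages: list[dict]) -> str:
    """Stub. A real implementation would call an LLM API and return either
    {"tool": name, "argument": value} or {"answer": text} as a JSON string."""
    if not any(m["role"] == "tool" for m in messages):
        return json.dumps({"tool": "get_po_status", "argument": "PO-1042"})
    return json.dumps({"answer": "PO-1042 is approved and awaiting delivery."})

def run_agent(question: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        decision = json.loads(call_llm(messages))
        if "answer" in decision:
            return decision["answer"]
        tool = TOOLS[decision["tool"]]                  # guardrail: only whitelisted tools
        result = tool(decision["argument"])
        messages.append({"role": "tool", "content": result})
    return "Agent stopped: step limit reached."         # rollback / guardrail path

print(run_agent("What is the status of PO-1042?"))
```

The step limit and tool whitelist stand in for the guardrails, observability, and rollback strategies the posting describes; a production agent would add tracing, evaluation, and hallucination monitoring around this loop.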

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

India

On-site

About the Company
Sparsa AI is a Singapore-based industrial-AI startup building the next generation of agentic AI platforms to transform how physical industries, such as manufacturing and logistics, make decisions and optimize their operations. Our AI agents orchestrate complex workflows across business functions and enterprise applications including ERP, MES, CRM and supply chain environments to resolve real-world constraints and unlock productivity.

About the Role
Sparsa AI is seeking an execution-focused Vice President of Engineering to balance vision, product strategy and delivery of its agentic AI platform. This executive role blends customer-centric product leadership with technical oversight of a multi-tenant SaaS and ML/AI stack and the engineering teams that build and manage them. The ideal candidate brings solid experience in AI product development, cloud infrastructure, and cross-functional engineering team building, with a strong focus on delivering enterprise-grade solutions. This is a high-impact opportunity to shape a category-defining platform in one of the most ambitious AI startups focused on real-economy enterprises. This job can be located anywhere in India. Frequent travel within India as well as to locations in Europe and Asia is expected.

Core Responsibilities
Define, own, and continuously evolve the end-to-end product roadmap in alignment with company vision and market demand. Translate output from the Chief AI Officer-led innovation stream into deployable, enterprise-ready products. Prioritize product features and initiatives based on customer needs, business impact, and technical feasibility.

Product Lifecycle Leadership
Own the end-to-end product lifecycle: concept → MVP → iterative releases → scale. Set and uphold delivery, quality, and performance standards across the product organization.

Infrastructure & Deployment Ownership
Own the architecture and operations of the cloud-native infrastructure used to support product deployment and scaling. Lead development and oversight of AI/ML Ops systems ensuring robust, automated, and secure model training, testing, and deployment pipelines. Ensure alignment of product infrastructure with enterprise IT security, compliance, and integration requirements.

Product-Market Fit & GTM Alignment
Partner closely with the leadership team to align product strategy with GTM execution. Drive delivery success for agentic AI solutions across our growing customer base, ensuring measurable outcomes and operational reliability. Interface with key customers and partners to understand emerging needs and drive product-market fit in targeted (real economy) industries.

Qualifications
10+ years of experience in product or platform leadership, ideally in AI startups or SaaS environments. Demonstrated success delivering customer-facing software and ML/AI solutions from zero to scale. Strong ability to connect customer problems with technical solutions and manage trade-offs. Experienced in building and leading cross-functional teams (product, engineering, cloud, MLOps) in agile environments. Proven experience building and scaling multi-tenant SaaS platforms with strong observability, compliance, and performance. Deep understanding of cloud-native ML architecture, MLOps best practices (CI/CD, versioning, drift detection), and integrating third-party tools across AWS, Azure, or GCP environments. Fluent English and German language skills. A high degree of mobility and flexibility in location is preferred.
Required Skills
Hands-on exposure to LLM agents and orchestration frameworks (LangChain, Semantic Kernel, etc.). Experience with developer platforms, agent SDKs, or enterprise integration stacks (e.g., SAP, MES, RPA). Experience with mainstream ERP and MES products such as SAP, Oracle, and Siemens.

Preferred Skills
You are a builder and executor, not just a strategist. You thrive in ambiguity and are energized by both 0→1 and 1→N product challenges. You have deep empathy for both internal dev teams and external enterprise users. You share Sparsa's mission to provide a Digital Workforce as a Service (DWaaS) through agentic AI.

Pay Range and Compensation Package
An executive-level role at a visionary AI company with a presence in Asia and Europe. High ownership, equity participation, and impact on product and company direction. Direct collaboration with the founding team (CEO, CSO, CAIO). A platform to build something transformative for global industries.

Equal Opportunity Statement
If you are passionate about building transformative products at the intersection of AI and industrial operations, we invite you to shape the future with us. This is your opportunity to lead product delivery in a fast-growing company that is redefining how the real economy works. At Sparsa AI, you'll work alongside an exceptional team, solve real-world problems, and leave a lasting impact on global industries. Let's build the future of industrial AI agents together. If you have the chops, let's connect!

Posted 3 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Bengaluru, Karnataka, India
Department: Data Engineering
Job posted on: Jul 09, 2025
Employment type: Full Time

About Us
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives. Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!

Personal Data Protection Act
By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
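This data-platform role is built around Glue/PySpark pipelines that land curated, partitioned data in S3. As a simplified stand-in for the Iceberg/Lake Formation setup the posting describes (the bucket names, source path, and schema below are assumptions), a batch job of that general shape looks roughly like this:

```python
# Simplified batch pipeline sketch: read raw transactions, standardise them,
# and write a partitioned, compressed curated layer to S3.
# Paths are placeholders; the real platform would write open-table-format
# (Iceberg) tables registered in Glue / Lake Formation rather than raw Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions-curation").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/transactions/")   # hypothetical source path

curated = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["transaction_id"])                       # idempotent re-runs
       .filter(F.col("amount").isNotNull())
)

(
    curated.write.mode("overwrite")
           .partitionBy("event_date")                            # enables partition pruning in Athena
           .option("compression", "snappy")
           .parquet("s3://example-curated-bucket/transactions/")
)

spark.stop()
```

In the environment described, this job would typically run under Glue or EMR, be orchestrated by Airflow or Step Functions, and have SLAs, lineage, and data-quality checks layered on top.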

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

1. AI Video Creator

Role Overview: Produce hyper-realistic, luxury-level videos using AI tools like MidJourney, RunwayML, Sora, Pika, and Kling AI.

Key Responsibilities: Generate high-end, brand-driven video content for the real estate category. Collaborate with creative and marketing teams on campaigns, launches, and editorials. Author and refine prompts to ensure consistent, on-brand visual outputs. Stay current on AI and industry trends to enhance workflows. Create Gen-AI video spots end-to-end, from prompt through compositing to final render.

Requirements: Proven portfolio in AI-assisted content within real estate. Proficiency with AI tools (MidJourney, DALL·E, RunwayML) plus traditional editing software (Photoshop, After Effects, Blender, etc.). Basic Python/LLM experience for creative ideation.

Highlights: Own quality control: ensure content is loop-free, flicker-free, and "cringe-free". Design prompt workflows and shot lists, and apply compositing, scripting, and VFX. Collaborate to blend AI output seamlessly with the creative vision.

Desired Skills: Skilled in Premiere, After Effects, etc. Deep understanding of generative AI "quirks" (temporal drift, sync issues). Bonus: Python scripting for automation, or dataset curation for brand consistency. Create videos using tools like Synthesia, HeyGen, Pictory, Runway, etc. Develop scripts, prompts, and storyboards for AI-generated visuals.

Preferred Skills: Proven portfolio in AI tool-based video production. Familiarity with Adobe Premiere or motion graphics tools. Basic generative AI knowledge plus optional voice-cloning/sync experience.

Summary by category:
Core Responsibilities: Use generative AI to produce videos; refine prompts/scripts; post-process outputs.
Collaborations: Tight integration with creative/marketing teams.
Essential Skills: Proficiency in AI video tools, standard editing software, prompt engineering.
Bonus Skills: Python, VFX, dataset management, voice-synthesis knowledge.
Formats & Types: Roles vary between freelance, remote, full-time, and regional (US/India).
Portfolios Required: Strong, relevant AI-driven video work is essential.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

An Amazing Career Opportunity for an AI/ML Engineer
Location: Chennai, India (Hybrid) | Job ID: 39582

Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who will be responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. We are a leading company and the trusted source for innovative products, solutions and services that help millions of customers around the globe create, manage and use secure identities.

Who are we?
HID powers the trusted identities of the world's people, places, and things, allowing people to transact safely, work productively and travel freely. We are a high-tech software company headquartered in Austin, TX, with over 4,000 employees worldwide. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/

About HID Global, Chennai
HID Global powers the trusted identities of the world's people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com. HID Global is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two Engineering Centres (Bangalore and Chennai) with over 200 engineering staff. The Global Engineering Team is based in Chennai, and one of the Business Unit Engineering teams is based in Bangalore.

Physical Access Control Solutions (PACS)
HID's Physical Access Control Solutions Business Area: the HID PACS Business Unit focuses on the growth of new and existing clients, leveraging the latest card and reader technologies to solve our clients' security challenges. Other areas of focus include authentication, card subsystems, card encoding, biometrics, location services and all other aspects of a physical access control infrastructure.

Qualifications
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Roles & Responsibilities
Design, develop, and deploy robust, scalable AI/ML models in production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures. Optimize models for performance, latency and scalability. Build data pipelines and workflows to support model training and evaluation. Conduct research and experimentation on state-of-the-art techniques (DL, NLP, time series, CV). Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning and retraining. Lead code reviews and architecture discussions, and mentor junior and peer engineers. Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency. Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX. Ensure data integrity, quality, and preprocessing best practices for AI/ML model development. Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Bring a strong analytical and problem-solving mindset.

Technical Requirements
Strong expertise in AI/ML engineering and software development. Strong experience with RAG architecture and vector databases. Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration. Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation and deployment. Experience with production-grade ML systems (model serving, APIs, pipelines). Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.). Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection and time series forecasting. Hands-on software engineering with knowledge of version control, testing and CI/CD. Hands-on experience deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow. Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection. Proficiency in scaling AI solutions in cloud environments (AWS, Azure and GCP). Experience in data preprocessing, feature engineering, and dimensionality reduction. Exposure to data privacy, compliance and secure ML practices.

Education and/or Experience
Bachelor's or master's degree in computer science, information technology, or AI/ML/data science. 3+ years of hands-on experience in AI/ML development, deployment and optimization. Experience leading AI/ML teams and mentoring junior engineers.

Why apply?
Empowerment: You'll work as part of a global team in a flexible work environment, learning and enhancing your expertise.
We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don't need to check all the boxes. If you have most of the skills and experience, we want you to apply. Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers. Integrity: You are results-oriented, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted. This opportunity may be open to flexible working arrangements. HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation. We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it. When you join our HID team, you'll also be part of the ASSA ABLOY Group, the global leader in access solutions. You'll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it's important to us to have diverse, inclusive teams, and we value different perspectives and experiences.
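Among the deployment tools this posting lists is ONNX. A minimal sketch of that hand-off, exporting a small PyTorch model and running it with ONNX Runtime, is shown below; the two-layer network, tensor shapes, and file name are illustrative assumptions, not anything specified by the employer.

```python
# Export a small PyTorch classifier to ONNX and run it with ONNX Runtime.
# Requires `pip install torch onnxruntime`; the model and file name are placeholders.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model, dummy_input, "classifier.onnx",
    input_names=["features"], output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
)

# Inference side: this is what a Dockerised serving container would load.
session = ort.InferenceSession("classifier.onnx")
batch = np.random.rand(4, 16).astype(np.float32)
logits = session.run(["logits"], {"features": batch})[0]
print(logits.shape)   # (4, 3)
```

Exporting to a portable graph format like this is one common step before the containerised, monitored serving setup (Docker, Kubernetes, drift detection) that the posting describes.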

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Kochi, Kerala, India

On-site

We are seeking a highly skilled Senior Machine Learning Engineer with expertise in Deep Learning, Large Language Models (LLMs), and MLOps/LLMOps to design, optimize, and deploy cutting-edge AI solutions. The ideal candidate will have hands-on experience in developing and scaling deep learning models, fine-tuning LLMs (e.g., GPT, Llama), and implementing robust deployment pipelines for production environments.

Responsibilities

Model Development & Fine-Tuning:
- Design, train, fine-tune and optimize deep learning models (CNNs, RNNs, Transformers) for NLP, computer vision, or multimodal applications.
- Fine-tune and adapt Large Language Models (LLMs) for domain-specific tasks (e.g., text generation, summarization, semantic similarity).
- Experiment with RLHF (Reinforcement Learning from Human Feedback) and other alignment techniques.

Deployment & Scalability (MLOps/LLMOps):
- Build and maintain end-to-end ML pipelines for training, evaluation, and deployment.
- Deploy LLMs and deep learning models in production environments using frameworks like FastAPI, vLLM, or TensorRT.
- Optimize models for low-latency, high-throughput inference (e.g., quantization, distillation).
- Implement CI/CD workflows for ML systems using tools like MLflow and Kubeflow.

Monitoring & Optimization:
- Set up logging, monitoring, and alerting for model performance (drift, latency, accuracy).
- Work with DevOps teams to ensure scalability, security, and cost-efficiency of deployed models.

Required Skills & Qualifications:
- 5-7 years of hands-on experience in Deep Learning, NLP, and LLMs.
- Strong proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers, and LLM frameworks.
- Experience with model deployment tools (Docker, Kubernetes, FastAPI).
- Knowledge of MLOps/LLMOps best practices (model versioning, A/B testing, canary deployments).
- Familiarity with cloud platforms (AWS, GCP, Azure).

Preferred Qualifications:
- Contributions to open-source LLM projects.
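Since this role calls for serving LLM and deep-learning models behind low-latency APIs (FastAPI, vLLM, and Docker are all named), here is a minimal FastAPI serving sketch around a Hugging Face text-generation pipeline; the model name and endpoint shape are assumptions chosen for illustration, not part of the posting.

```python
# Minimal model-serving sketch: a text-generation endpoint behind FastAPI.
# Run with: uvicorn serve:app --host 0.0.0.0 --port 8000
# The small placeholder model stands in for the optimised vLLM/TensorRT serving
# the posting describes.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="llm-inference")
generator = pipeline("text-generation", model="distilgpt2")   # loaded once at startup

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: GenerateRequest) -> dict:
    output = generator(req.prompt, max_new_tokens=req.max_new_tokens, num_return_sequences=1)
    return {"completion": output[0]["generated_text"]}

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}   # readiness probe for Docker/Kubernetes deployments
```

In production this container would sit behind the CI/CD, monitoring, and drift-alerting workflows listed under the MLOps/LLMOps responsibilities above.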

Posted 3 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager
We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system. Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure they align with business goals and user needs. Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements
9 to 13 years of relevant professional experience. Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. Strong foundation in ML algorithms, feature engineering, and model evaluation (must). Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must). Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks). Knowledge of retrieval-augmented generation (RAG). Knowledge of Knowledge Graph RAG. Experience with multi-agent orchestration, memory, and tool integrations. Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have). Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems: latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
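The skills list above includes both plain RAG and Knowledge Graph RAG. As a toy illustration of the graph side (the triples, the naive entity matching, and the prompt format are all invented for the example, not EY's method), retrieval can be as simple as pulling the neighbourhood of entities mentioned in the question and handing those facts to the LLM:

```python
# Toy Knowledge Graph RAG sketch: store facts as triples, pull the facts
# connected to entities mentioned in the question, and build an LLM prompt.
import networkx as nx

graph = nx.MultiDiGraph()
triples = [
    ("InvoiceBot", "automates", "invoice matching"),
    ("InvoiceBot", "reads_from", "SAP"),
    ("invoice matching", "requires", "purchase order data"),
]
for subj, rel, obj in triples:
    graph.add_edge(subj, obj, relation=rel)

def graph_context(question: str) -> str:
    """Collect facts whose subject appears in the question (naive entity matching)."""
    facts = []
    for node in graph.nodes:
        if node.lower() in question.lower():
            for _, obj, data in graph.out_edges(node, data=True):
                facts.append(f"{node} {data['relation']} {obj}")
    return "\n".join(facts)

question = "What does InvoiceBot read from, and what does it automate?"
prompt = f"Facts:\n{graph_context(question)}\n\nAnswer the question: {question}"
print(prompt)   # the assembled prompt would then be sent to a GenAI model
```

A production Knowledge Graph RAG system would replace the in-memory graph with a graph database and the string matching with proper entity linking, but the retrieve-then-prompt shape stays the same.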

Posted 3 weeks ago

Apply

0 years

5 - 11 Lacs

Thiruvananthapuram

On-site

We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models, with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

1. AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives. Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

2. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

3. AI Model Deployment and Frameworks: Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

4. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

5. Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

6. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

7. Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

8. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Technical Skills
Proficient in Python, with strong knowledge of libraries like NumPy, Pandas, SciPy, and Matplotlib for data manipulation and visualization.
Expertise in TensorFlow, PyTorch, Scikit-learn, and Keras for building, training, and optimizing machine learning and deep learning models. Hands-on experience with Transformer libraries like Hugging Face Transformers, OpenAI APIs, and LangChain for NLP tasks. Practical knowledge of CNN architectures (e.g., YOLO, ResNet, VGG) and Vision Transformers (ViT) for Computer Vision applications. Proficiency in developing and deploying Diffusion Models like Stable Diffusion, SDX, and other generative AI frameworks. Experience with RLHF (Reinforcement Learning with Human Feedback) and reinforcement learning algorithms for optimizing AI behaviors. Proficiency with Docker and Kubernetes for containerization and orchestration of AI workflows. Hands-on experience with MLOps tools such as MLflow for model tracking and CI/CD integration in AI pipelines. Expertise in setting up monitoring tools like Prometheus and Grafana to track model performance, latency, throughput, and drift. Knowledge of performance optimization techniques, such as quantization, pruning, and knowledge distillation, to improve model efficiency. Experience in building data pipelines for preprocessing, cleaning, and transforming large datasets using tools like Apache Airflow and Luigi. Familiarity with cloud-based storage systems (e.g., AWS S3, Google BigQuery) for efficient data handling in AI workflows. Strong understanding of cloud platforms (AWS, GCP, Azure) for deploying and scaling AI solutions. Knowledge of advanced search technologies such as Elasticsearch for indexing and querying large datasets. Familiarity with edge deployment frameworks and optimization for resource-constrained environments.

Qualifications
Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field.

Experience: 2.5 to 5 years
Location: Trivandrum
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift
Work Location: In person
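This posting repeatedly calls for Prometheus-based monitoring of latency, throughput, and drift. A minimal sketch of instrumenting an inference function with the prometheus_client library is shown below; the metric names and the dummy predict function are assumptions for illustration only.

```python
# Expose inference latency and request counts to Prometheus.
# Metrics become scrapeable at http://localhost:9100/metrics once this runs.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("inference_requests_total", "Total inference requests", ["outcome"])
LATENCY = Histogram("inference_latency_seconds", "Model inference latency in seconds")

@LATENCY.time()                       # records each call's duration into the histogram
def predict(text: str) -> str:
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for real model inference
    return "positive" if len(text) % 2 == 0 else "negative"

if __name__ == "__main__":
    start_http_server(9100)           # Prometheus scrape endpoint
    while True:
        try:
            predict("sample request payload")
            REQUESTS.labels(outcome="success").inc()
        except Exception:
            REQUESTS.labels(outcome="error").inc()
        time.sleep(1)
```

Grafana dashboards and alert rules, as mentioned in the posting, would then be built on top of these exported metrics.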

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida

Remote

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Minimum experience: 5-8 years
Location: Remote
Shift timings: 6 pm to 3 am (night shift)
Experience with data drift is important.

Engagement & Project Overview
An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models. They can ensure that your models are accurate, efficient, and tailored to your specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities.

Key Responsibilities
1. Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs.
2. Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability.
3. Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing.
4. Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation. Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments.

Qualifications
5+ years of experience in the respective field. Proven experience in developing and fine-tuning machine learning models. Strong background in quality assurance and model testing. Ability to streamline processes and scale AI initiatives. Innovative mindset with a keen understanding of industry trends.
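Since data-drift detection is called out as essential for this role, here is a small, dependency-light sketch of one common drift statistic, the Population Stability Index (PSI), computed between a reference (training) sample and a live sample. The synthetic data is purely illustrative, and the 0.1 / 0.2 thresholds are widely used rules of thumb rather than anything specified by WNS.

```python
# Population Stability Index (PSI) between a reference and a current sample.
# PSI < 0.1 is usually treated as stable, 0.1-0.2 as moderate drift, > 0.2 as drift.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; the epsilon avoids division by zero / log(0).
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)   # reference distribution
live_feature = rng.normal(loc=0.4, scale=1.2, size=2_000)        # shifted live traffic

score = psi(training_feature, live_feature)
print(f"PSI = {score:.3f}")
if score > 0.2:
    print("Drift alert: trigger retraining / notify on-call")    # hook into alerting here
```

Tools named in the posting such as Evidently AI or SageMaker Model Monitor compute this kind of statistic per feature and wire the result into the retraining triggers and alerting pipelines described above.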

Posted 3 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager
We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system. Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure they align with business goals and user needs. Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements
9 to 13 years of relevant professional experience. Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. Strong foundation in ML algorithms, feature engineering, and model evaluation (must). Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must). Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks). Knowledge of retrieval-augmented generation (RAG). Knowledge of Knowledge Graph RAG. Experience with multi-agent orchestration, memory, and tool integrations. Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have). Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems: latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager
We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
Development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives. Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI-enabled automation applications. Ensure seamless integration of optimized solutions into the overall product or system. Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure they align with business goals and user needs. Educate the team on best practices and keep up to date on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements
9 to 13 years of relevant professional experience. Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers. Strong foundation in ML algorithms, feature engineering, and model evaluation (must). Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must). Experience in GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting, fine-tuning. Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks). Knowledge of retrieval-augmented generation (RAG). Knowledge of Knowledge Graph RAG. Experience with multi-agent orchestration, memory, and tool integrations. Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have). Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment. Good understanding of data pipelines, APIs, and distributed systems. Build observability into AI systems: latency, drift, performance metrics. Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences. Strong analytical, problem solving and critical thinking skills. Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world. EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
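To illustrate the retrieval-augmented generation (RAG) workflow referenced in this listing, here is a minimal, library-agnostic sketch of the retrieve-then-generate pattern. The toy hashed bag-of-words embedding, the document set, and the prompt template are illustrative assumptions; a production system would swap in a real embedding model, a vector or graph database, and an LLM call.

```python
import numpy as np

def embed_texts(texts, dim=256):
    """Toy hashed bag-of-words embedding; swap in a real embedding model or API."""
    vecs = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token.strip("?.,!")) % dim] += 1.0
    return vecs

def retrieve(query, docs, doc_vecs, k=2):
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed_texts([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

docs = [
    "Invoice approval requires two signatories.",
    "Refunds above 10,000 INR need manager sign-off.",
    "Password resets are self-service via the portal.",
]
doc_vecs = embed_texts(docs)

question = "Who approves large refunds?"
context = retrieve(question, docs, doc_vecs)
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\nQ: {question}"
print(prompt)  # this assembled prompt would then be sent to an LLM (GPT, Claude, LLaMA, etc.)
```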

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site

Urgent Hiring || Thermal Engineer || Ghaziabad. Profile: Principal Thermal Engineer. Experience: Min 5+ years. Salary: Up to 20 LPA (depending on the interview). Location: Sahibabad, next to Ghaziabad. Key Responsibilities: Thermal System Design & Optimization: Perform advanced thermal calculations to optimize heat exchangers, cooling towers, and energy recovery systems. Develop thermodynamic models (Rankine, Organic Rankine, Brayton, Refrigeration cycles) to enhance system efficiency. Utilize CFD and FEA simulations for heat transfer, pressure drop, and flow distribution analysis. Conduct real-time performance monitoring and diagnostics for industrial thermal systems. Drive continuous improvement initiatives in thermal management, reducing energy losses. Waste Heat Recovery & Thermal Audits: Lead comprehensive thermal audits, evaluating waste heat potential and energy savings opportunities. Develop and implement waste heat recovery systems for industrial processes. Assess and optimize heat-to-power conversion strategies for enhanced energy utilization. Conduct feasibility studies for thermal energy storage and process integration. Heat Exchangers & Cooling Tower Performance: Design and analyze heat exchangers (shell & tube, plate, finned, etc.) for optimal heat transfer efficiency. Enhance cooling tower performance, focusing on heat rejection, drift loss reduction, and water treatment strategies. Oversee component selection, performance evaluation, and failure analysis for industrial cooling systems. Troubleshoot thermal inefficiencies and recommend design modifications. Material Selection & Engineering Compliance: Guide material selection for high-temperature and high-pressure thermal applications. Evaluate thermal conductivity, corrosion resistance, creep resistance, and mechanical properties. Ensure all designs adhere to TEMA, ASME, API, CTI (Cooling Technology Institute), and industry standards. Leadership & Innovation: Lead multi-disciplinary engineering teams to develop cutting-edge thermal solutions. Collaborate with manufacturing, R&D, and operations teams for process improvement. Provide technical mentorship and training to junior engineers. Stay ahead of emerging technologies in heat transfer, renewable energy, and thermal system efficiency. Required Skills & Qualifications: Bachelor's/Master's/PhD in Mechanical Engineering, Thermal Engineering, or a related field. 10+ years of industry experience, specializing in thermal calculations, heat exchanger design, and waste heat recovery. Expertise in heat transfer, mass transfer, thermodynamics, and fluid mechanics. Hands-on experience with thermal simulation tools (ANSYS Fluent, Aspen Plus, MATLAB, COMSOL, EES). Strong background in thermal audits, cooling tower performance enhancement, and process heat recovery. Experience in industrial energy efficiency, power plant optimization, and heat recovery applications. In-depth knowledge of high-temperature alloys, corrosion-resistant materials, and structural analysis. Strong problem-solving skills with a research-driven and analytical mindset. Ability to lead projects, manage teams, and drive technical innovation. Preferred Qualifications: Experience in power plants, ORC (Organic Rankine Cycle) systems, and industrial energy recovery projects. Expertise in advanced material engineering for high-performance thermal systems. Publications or patents in heat transfer, waste heat recovery, or energy efficiency technologies.
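For context on the heat-exchanger calculations this role centers on, here is a minimal sketch of a counter-current heat-exchanger duty estimate using the log-mean temperature difference (Q = U·A·ΔT_lm). The stream temperatures, overall coefficient U, and area A below are illustrative assumptions, not values from the listing.

```python
from math import log

def lmtd(dt_hot_end, dt_cold_end):
    """Log-mean temperature difference for a counter-current heat exchanger."""
    if abs(dt_hot_end - dt_cold_end) < 1e-9:
        return dt_hot_end                      # limit case: equal terminal differences
    return (dt_hot_end - dt_cold_end) / log(dt_hot_end / dt_cold_end)

# Example streams: hot 150 -> 90 degC, cold 30 -> 80 degC (counter-current arrangement)
dt1 = 150 - 80    # hot inlet vs cold outlet, K
dt2 = 90 - 30     # hot outlet vs cold inlet, K
U = 450.0         # assumed overall heat-transfer coefficient, W/(m^2*K)
A = 25.0          # assumed heat-transfer area, m^2

Q = U * A * lmtd(dt1, dt2)   # exchanger duty in watts: Q = U*A*dT_lm
print(f"LMTD = {lmtd(dt1, dt2):.1f} K, duty ~ {Q/1000:.0f} kW")
```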
Compensation & Benefits: Competitive salary based on expertise and industry standards. Performance-based incentives and growth opportunities. Health and insurance benefits. Opportunities for leadership and R&D involvement.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description Min Exp - 5-8 years. Location - Remote. Shift timings - 6 pm to 3 am (Night Shift). Experience with data drift is important. Engagement & Project Overview An AI model trainer brings specialised knowledge in developing and fine-tuning machine learning models. They can ensure that your models are accurate, efficient, and tailored to your specific needs. Hiring an AI model trainer and tester can significantly enhance our data management and analytics capabilities. Job Description Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs. Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability. Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing. Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation. Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments. Qualifications: 5+ years of experience in the respective field. Proven experience in developing and fine-tuning machine learning models. Strong background in quality assurance and model testing. Ability to streamline processes and scale AI initiatives. Innovative mindset with a keen understanding of industry trends.
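To illustrate the kind of data-drift check this role would operationalize, here is a minimal, library-light sketch using a two-sample Kolmogorov-Smirnov test per feature. The feature name, threshold, and synthetic data are illustrative assumptions; a production setup would typically wrap equivalent logic in Evidently AI, WhyLabs, or a cloud-native monitor and feed the result into alerting or a retraining trigger.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.05):
    """Two-sample KS test per feature; flag drift when the p-value falls below alpha."""
    report = {}
    for col in reference:
        stat, p_value = ks_2samp(reference[col], current[col])
        report[col] = {"ks_stat": round(stat, 3),
                       "p_value": round(p_value, 4),
                       "drift": p_value < alpha}
    return report

rng = np.random.default_rng(42)
reference = {"amount": rng.normal(100, 15, 5000)}   # distribution seen at training time
current = {"amount": rng.normal(115, 15, 5000)}     # shifted production distribution

for feature, result in detect_drift(reference, current).items():
    print(feature, result)   # a drift flag here could raise an alert or trigger retraining
```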

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Telangana, India

On-site

Ignite the Future of Language with AI at Teradata! What You'll Do: Shape the Way the World Understands Data At Teradata, we're not just managing data; we're unleashing its full potential. Our ClearScape Analytics™ platform and pioneering Enterprise Vector Store are empowering the world's largest enterprises to derive unprecedented value from their most complex data. We're rapidly pushing the boundaries of what's possible with Artificial Intelligence, especially in the exciting realm of autonomous and agentic systems. We're building intelligent systems that go far beyond automation — they observe, reason, adapt, and drive complex decision-making across large-scale enterprise environments. As a member of our AI engineering team, you'll play a critical role in designing and deploying advanced AI agents that integrate deeply with business operations, turning data into insight, action, and measurable outcomes. You'll work alongside a high-caliber team of AI researchers, engineers, and data scientists tackling some of the hardest problems in AI and enterprise software — from scalable multi-agent coordination and fine-tuned LLM applications, to real-time monitoring, drift detection, and closed-loop retraining systems. If you're passionate about building intelligent systems that are not only powerful but observable, resilient, and production-ready, this role offers the opportunity to shape the future of enterprise AI from the ground up. Who You'll Work With: Join Forces with the Best Imagine collaborating daily with some of the brightest minds in the company – individuals who champion diversity, equity, and inclusion as fundamental to our success. You'll be part of a cohesive force, laser-focused on delivering high-quality, critical, and highly visible AI/ML functionality within the Teradata Vantage platform. Your insights will directly shape the future of our intelligent data solutions. You'll report directly to the inspiring Sr. Manager, Software Engineering, who will champion your growth and empower your contributions. What Makes You a Qualified Candidate: Skills in Action Experience working with modern data platforms like Teradata, Snowflake, and Databricks. Passion for staying current with AI research, especially in the areas of reasoning, planning, and autonomous systems. You are an excellent backend engineer who codes daily and owns systems end-to-end. Strong engineering background (Python/Java/Golang, API integration, backend frameworks). Strong system design skills and understanding of distributed systems. You're obsessive about reliability, debuggability, and ensuring AI systems behave deterministically when needed. Hands-on experience with machine learning and deep learning frameworks: TensorFlow, PyTorch, Scikit-learn. Hands-on experience with LLMs, agent frameworks (LangChain, AutoGPT, ReAct, etc.), and orchestration tools. Experience with AI observability tools and practices (e.g., logging, monitoring, tracing, metrics for AI agents or ML models). Solid understanding of model performance monitoring, drift detection, and responsible AI principles. What You Bring: Passion and Potential A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field – your academic foundation is key. A genuine excitement for AI and large language models (LLMs) is a significant advantage – you'll be working at the cutting edge! Design, develop, and deploy agentic systems integrated into the data platform.
3+ years of experience in software architecture, backend systems, or AI infrastructure. Experience in software development (Python, Go, or Java preferred). Familiarity with backend service development, APIs, and distributed systems. Interest or experience in LLMs, autonomous agents, or AI tooling. Familiarity with containerized environments (Docker, Kubernetes) and CI/CD pipelines. Build dashboards and metrics pipelines to track key AI system indicators: latency, accuracy, tool invocation success, hallucination rate, and failure modes. Integrate observability tooling (e.g., OpenTelemetry, Prometheus, Grafana) with LLM-based workflows and agent pipelines. Strong knowledge of LLMs, RL, or cognitive architectures is highly desirable. Passion for building safe, human-aligned autonomous systems. Bonus: Research experience or contributions to open-source agentic frameworks. You're knowledgeable about open-source tools and technologies and know how to leverage and extend them to build innovative solutions. Why We Think You'll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.
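As an illustration of the metrics-pipeline work described above, here is a minimal sketch that wraps an agent call with a Prometheus counter and a latency histogram using the standard prometheus_client library. The metric names and the call_agent stub are assumptions for illustration, not Teradata's actual services; in practice the same idea extends to tool-invocation success and hallucination-rate counters.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Assumed metric names; adjust to your own naming conventions.
REQUESTS = Counter("agent_requests_total", "Agent invocations", ["status"])
LATENCY = Histogram("agent_latency_seconds", "End-to-end agent latency")

def call_agent(query: str) -> str:
    """Stand-in for an LLM/agent call; replace with the real invocation."""
    time.sleep(random.uniform(0.05, 0.3))
    return f"answer to: {query}"

def handle(query: str) -> str:
    with LATENCY.time():                      # records the call duration into the histogram
        try:
            answer = call_agent(query)
            REQUESTS.labels(status="ok").inc()
            return answer
        except Exception:
            REQUESTS.labels(status="error").inc()
            raise

if __name__ == "__main__":
    start_http_server(8000)                   # exposes /metrics for Prometheus to scrape
    for _ in range(100):
        handle("summarise yesterday's failed logins")
```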

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Enqurious Enqurious (formerly Mentorskool) is a fast-growing EdTech company based in Bengaluru that specializes in upskilling industry-ready Data + AI teams. We combine skill-driven precision learning with data-driven skill intelligence to help organizations deploy talent to projects 70% faster. Our mission is to bridge the skills gap in the data + AI space through problem-first, action-oriented, and mentorship-driven learning experiences. Our clientele includes large enterprise Data and AI consulting firms like Fractal Analytics, Tredence, Tiger Analytics and MathCo. We help our customers achieve upskilling in many different flavors, including Graduate Data Engineer Upskilling, Continuous Learning, Certifications, Hackathons and Project Accelerators. Enqurious is our home-grown Learning Engagement Platform where we put well-researched learning experiences integrated with labs to help learners experience real-world problems and challenges, leading to upskilling that delivers business outcomes. https://www.enqurious.com/ Position Overview Location: Bengaluru, Karnataka (Hybrid). Experience: 2+ years. Employment Type: Full-time. Department: AI & ML. We are seeking a passionate Data Scientist with 2+ years of experience to join our core team. This role offers a unique opportunity to work on cutting-edge data science projects and contribute to shaping the next generation of data professionals by developing domain-first use cases delivered through mentoring. Must-Have Requirements Technical Skills SQL Mastery: Advanced proficiency in SQL with experience in complex query optimization, performance tuning, and working with large datasets and relational data modelling. Statistical & ML Foundations: Strong grasp of probability, statistics, hypothesis testing, and ML algorithms (supervised, unsupervised, and time-series). Python Programming: Proficiency with pandas, NumPy, scikit-learn, TensorFlow/PyTorch, and data-validation/automation scripting. Data Visualisation: Ability to craft compelling stories with Matplotlib, Seaborn, Plotly. Cloud Fundamentals: Solid understanding of cloud computing principles and familiarity with ML services on AWS (SageMaker, Athena) or Azure (ML Studio, Databricks). Experience & Mindset - Must have Minimum 2 years of hands-on data science experience with a proven track record of building models and MLOps pipelines. Model Development & Deployment: Experience building robust, scalable MLOps on cloud platforms, preferably AWS (S3, SageMaker) or Azure (ADLS, Databricks, MLflow, Azure ML Studio). Continuous Learning: Demonstrated ability to quickly adapt to new technologies, frameworks, and methodologies in the rapidly evolving data landscape. Teaching/Mentoring Aptitude: Genuine interest and willingness to occasionally conduct corporate training sessions and workshops. Ability to work in uncertain and ambiguous environments: we expect you to be a self-starter, and you will be required to demonstrate this skill in the interview. Good to Have Startup Mindset: Openness to work closely with core and founding team members in creating RFPs, proposals, and strategic technical documents. Client Interaction: Experience interfacing with clients to understand requirements and translate business needs into technical solutions. Key Responsibilities Core Engineering (for building and creating simulated projects inspired by the real world): Design, build, and maintain reproducible ML pipelines (feature engineering, training, evaluation, and CI/CD deployment).
Develop predictive, prescriptive, and generative models that are performant, explainable, and cost-efficient. Implement data-quality checks, bias/variance monitoring, and automated drift detection. Conduct mentoring sessions on data science best practices, tools, and technologies. Develop hands-on labs and real-world scenarios for upskilling programs. Contribute to Enqurious's knowledge base and learning content library. Strategic Contributions Participate in technical discussions with the founding team and contribute to content roadmap decisions. Assist in creating technical proposals, RFPs, and solution architectures for enterprise clients for their upskilling needs. Stay updated with industry trends and emerging technologies to enhance our training offerings. Represent Enqurious at technical conferences, meetups, and community events. What We Offer Professional Growth Accelerated Learning Environment: Work alongside industry experts and gain exposure to diverse data science challenges across multiple domains (Retail, CPG, E-Commerce, FinTech, Insurance). Mentoring Opportunities: Develop your communication and leadership skills through corporate training and mentoring. Industry Recognition: Opportunity to build your brand in the data science community. Certification Support: Access to premium training resources and certification programs (Databricks, AWS, Azure). Work Environment Flexible Work Arrangements: Hybrid working model with a collaborative office environment in Bengaluru. Innovation Culture: Be part of a team that's disrupting traditional corporate learning methodologies. Direct Impact: Your work will directly influence how thousands of data scientists are trained globally. Startup Agility: Fast-paced environment with opportunities to wear multiple hats and drive initiatives. Compensation & Benefits Competitive salary commensurate with experience and skills. Performance-based bonuses and equity opportunities. Health insurance and wellness programs. Professional development budget for conferences, courses, and certifications. Ideal Candidate Profile You're the perfect fit if you: Love building impactful models and sharing knowledge. Thrive in ambiguous, fast-paced environments where you can make a significant impact. Enjoy collaborating with diverse teams, including educators, business stakeholders, and technical experts. Are excited about the intersection of technology and education. Have a growth mindset and are energized by continuous learning and teaching. Can balance hands-on technical work with strategic thinking and planning. Application Process 1-2 interview rounds. Please note that we can only consider candidates with a notice period of 30 days or less. How to Apply? If you're excited about the opportunity to build cutting-edge data solutions while shaping the future of Data Science + AI education, we'd love to hear from you. Apply with your resume, a brief cover letter explaining your interest in this unique role, and any relevant project portfolios or certifications. Send your updated resume to learn@enqurious.com with the subject line "Application for Data Scientist - ." Enqurious is an equal opportunity employer committed to diversity and inclusion. We encourage applications from all qualified candidates regardless of race, gender, age, religion, sexual orientation, or disability status.
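To make the reproducible ML pipelines mentioned in this listing concrete, here is a minimal scikit-learn sketch that bundles preprocessing and a model into a single versionable object. The toy churn data and column names are illustrative assumptions; the point is that one fitted artifact carries both feature engineering and the estimator through evaluation and deployment.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy dataset standing in for real training data.
df = pd.DataFrame({
    "tenure_months": [1, 24, 36, 3, 48, 12, 60, 6],
    "plan": ["basic", "pro", "pro", "basic", "enterprise", "basic", "pro", "basic"],
    "churned": [1, 0, 0, 1, 0, 1, 0, 1],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_months"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
])

pipeline = Pipeline([("prep", preprocess), ("model", LogisticRegression())])

X_train, X_test, y_train, y_test = train_test_split(
    df[["tenure_months", "plan"]], df["churned"], test_size=0.25, random_state=0)

pipeline.fit(X_train, y_train)
print("holdout accuracy:", pipeline.score(X_test, y_test))
# The fitted pipeline can then be versioned (e.g., with MLflow) and redeployed as one unit.
```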

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Are you interested in bringing your technical expertise to projects? Are you a detail-oriented paralegal with a 'can do' attitude? About Our Team LexisNexis Legal & Professional, which serves customers in more than 150 countries with 11,300 employees worldwide, is part of RELX, a global provider of information-based analytics and decision tools for professional and business customers. About The Role This position performs moderate research, design, and software development assignments within a specific software functional area or product line. Responsibilities Design, train, and optimize machine learning models for various applications such as predictive analytics, natural language processing (NLP), computer vision, or recommendation systems. Deploy ML models into production environments using cloud platforms (AWS) or on-premises infrastructure. Collect, preprocess, and analyze large datasets to extract meaningful insights. Collaborate with data engineers to build robust data pipelines for model training and evaluation. Develop innovative algorithms to solve complex problems. Continuously improve model performance through hyperparameter tuning, feature engineering, and experimentation (see the short tuning sketch after this listing). Work closely with architects, engineering managers, product managers, software developers, and domain experts to integrate AI/ML solutions into existing products and workflows. Translate business requirements into technical specifications and actionable AI strategies. Stay updated with the latest advancements in AI/ML research and tools. Experiment with state-of-the-art techniques and frameworks to enhance solution capabilities. Monitor deployed models for accuracy, bias, and drift over time. Implement mechanisms for retraining and updating models as needed. Requirements Proficiency in programming languages: Python, Java. Must have worked in Generative AI and be proficient in any LLM model and LangChain. Experience with AI/ML libraries and frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, etc. Strong understanding of statistical analysis, deep learning, and neural networks. Familiarity with big data tools: Hadoop, Spark, or Apache Flink. Hands-on experience with cloud services such as AWS ECS, EKS, EC2, etc. Excellent problem-solving and analytical thinking abilities. Strong communication skills to explain complex concepts to non-technical stakeholders. Ability to work collaboratively in a fast-paced, agile environment. Experience with NLP for text representation, Information Extraction, semantic extraction techniques, data structures and modeling. Knowledge of DevOps practices for ML (MLOps). Familiarity with containerization tools like Docker and Kubernetes. Contributions to open-source AI/ML projects or publications in relevant conferences/journals. Work in a way that works for you We promote a healthy work/life balance across the organisation. We offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive. Working for you Benefits We know that your wellbeing and happiness are key to a long and successful career. These are some of the benefits we are delighted to offer: Comprehensive Health Insurance: Covers you, your immediate family, and parents.
Enhanced Health Insurance Options: Competitive rates negotiated by the company. Group Life Insurance: Ensuring financial security for your loved ones. Group Accident Insurance: Extra protection for accidental death and permanent disablement. Flexible Working Arrangement: Achieve a harmonious work-life balance. Employee Assistance Program: Access support for personal and work-related challenges. Medical Screening: Your well-being is a top priority. Modern Family Benefits: Maternity, paternity, and adoption support. Long-Service Awards: Recognizing dedication and commitment. New Baby Gift: Celebrating the joy of parenthood. Subsidized Meals in Chennai: Enjoy delicious meals at discounted rates. Various Paid Time Off: Take time off with Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. Free Transport: pick-up and drop between home and office (applies in Chennai). About The Business LexisNexis Legal & Professional® provides legal, regulatory, and business information and analytics that help customers increase their productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® and Nexis® services.
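A minimal sketch of the hyperparameter-tuning workflow mentioned in the responsibilities above, using scikit-learn's GridSearchCV on a bundled dataset; the model choice, parameter grid, and scoring metric are illustrative assumptions rather than anything specific to this role.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_grid = {                       # illustrative grid; widen or narrow for real workloads
    "n_estimators": [100, 300],
    "max_depth": [None, 8],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,                            # 5-fold cross-validation per candidate
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV ROC-AUC:", round(search.best_score_, 4))
```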

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai

On-site

Software Engineer II Are you interested in bringing your technical expertise to projects? Are you a detail-oriented paralegal with a 'can do' attitude? About our Team LexisNexis Legal & Professional, which serves customers in more than 150 countries with 11,300 employees worldwide, is part of RELX, a global provider of information-based analytics and decision tools for professional and business customers. About the Role This position performs moderate research, design, and software development assignments within a specific software functional area or product line. Responsibilities Design, train, and optimize machine learning models for various applications such as predictive analytics, natural language processing (NLP), computer vision, or recommendation systems. Deploy ML models into production environments using cloud platforms (AWS) or on-premises infrastructure. Collect, preprocess, and analyze large datasets to extract meaningful insights. Collaborate with data engineers to build robust data pipelines for model training and evaluation. Develop innovative algorithms to solve complex problems. Continuously improve model performance through hyperparameter tuning, feature engineering, and experimentation. Work closely with architects, engineering managers, product managers, software developers, and domain experts to integrate AI/ML solutions into existing products and workflows. Translate business requirements into technical specifications and actionable AI strategies. Stay updated with the latest advancements in AI/ML research and tools. Experiment with state-of-the-art techniques and frameworks to enhance solution capabilities. Monitor deployed models for accuracy, bias, and drift over time. Implement mechanisms for retraining and updating models as needed. Requirements Proficiency in programming languages: Python, Java. Must have worked in Generative AI and be proficient in any LLM model and LangChain. Experience with AI/ML libraries and frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, etc. Strong understanding of statistical analysis, deep learning, and neural networks. Familiarity with big data tools: Hadoop, Spark, or Apache Flink. Hands-on experience with cloud services such as AWS ECS, EKS, EC2, etc. Excellent problem-solving and analytical thinking abilities. Strong communication skills to explain complex concepts to non-technical stakeholders. Ability to work collaboratively in a fast-paced, agile environment. Experience with NLP for text representation, Information Extraction, semantic extraction techniques, data structures and modeling. Knowledge of DevOps practices for ML (MLOps). Familiarity with containerization tools like Docker and Kubernetes. Contributions to open-source AI/ML projects or publications in relevant conferences/journals. Work in a way that works for you We promote a healthy work/life balance across the organisation. We offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive. Working for you We know that your wellbeing and happiness are key to a long and successful career.
These are some of the benefits we are delighted to offer: Comprehensive Health Insurance: Covers you, your immediate family, and parents. Enhanced Health Insurance Options: Competitive rates negotiated by the company. Group Life Insurance: Ensuring financial security for your loved ones. Group Accident Insurance: Extra protection for accidental death and permanent disablement. Flexible Working Arrangement: Achieve a harmonious work-life balance. Employee Assistance Program: Access support for personal and work-related challenges. Medical Screening: Your well-being is a top priority. Modern Family Benefits: Maternity, paternity, and adoption support. Long-Service Awards: Recognizing dedication and commitment. New Baby Gift: Celebrating the joy of parenthood. Subsidized Meals in Chennai: Enjoy delicious meals at discounted rates. Various Paid Time Off: Take time off with Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. Free Transport: pick-up and drop between home and office (applies in Chennai). About the Business LexisNexis Legal & Professional® provides legal, regulatory, and business information and analytics that help customers increase their productivity, improve decision-making, achieve better outcomes, and advance the rule of law around the world. As a digital pioneer, the company was the first to bring legal and business information online with its Lexis® and Nexis® services. We are committed to providing a fair and accessible hiring process. If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or please contact 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. USA Job Seekers: EEO Know Your Rights.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Machine Learning Engineer – Applied AI & Scalable Model Deployment Location: Sector 63, Gurgaon – 100% In-Office. Working Days: Monday to Friday, with 2nd and 4th Saturdays off. Working Hours: 10:30 AM to 8:00 PM. Experience: 2–6 years in machine learning engineering or applied data science roles. Apply at: careers@darwix.ai. Subject Line: Application – Machine Learning Engineer – [Your Name]. About Darwix AI Darwix AI is India's leading GenAI SaaS platform powering real-time sales enablement and conversational intelligence for large enterprise teams. Our products — Transform+, Sherpa.ai, and Store Intel — support revenue teams in BFSI, retail, real estate, and healthcare by delivering multilingual voice analysis, real-time AI nudges, agent coaching, and in-store behavioral analytics. Darwix AI is redefining how large-scale human interactions drive revenue outcomes. As we expand rapidly across India, MENA, and Southeast Asia, we are strengthening our core ML engineering team to accelerate new feature development and production deployments. Role Overview As a Machine Learning Engineer, you will design, build, and operationalize robust ML models for real-time and batch processing workflows across Darwix AI's product suite. Your work will span conversational intelligence, voice and text analytics, predictive scoring, and decision-support systems. You will collaborate closely with AI research engineers, backend teams, and product managers to translate business problems into scalable and maintainable ML pipelines. This is a hands-on, impact-first role focused on turning advanced ML models into production systems used by large enterprise teams daily. Key Responsibilities. Model Development & Training: Design, build, and optimize models for tasks such as classification, scoring, topic detection, and conversation summarization. Work on feature engineering pipelines, data preprocessing, and large-scale training on structured and unstructured datasets. Evaluate model performance using robust metrics (accuracy, recall, precision, WER for voice tasks). Deployment & Productionization: Package and deploy models as scalable APIs and microservices integrated with core product workflows. Optimize inference pipelines for latency, throughput, and cost in production environments. Work closely with DevOps and backend engineers to ensure robust CI/CD, monitoring, and auto-recovery workflows. Data & Pipeline Engineering: Develop and maintain data pipelines to ingest, clean, transform, and label large volumes of voice and text data. Implement logging, data versioning, and audit trails to ensure traceable and reproducible experiments. Monitoring & Continuous Improvement: Build automated evaluation frameworks to detect model drift and performance degradation. Analyze live production data to identify opportunities for iterative improvements and fine-tuning. Contribute to A/B testing design for model-driven features to validate business impact. Collaboration & Documentation: Work with cross-functional teams to gather requirements, define success criteria, and drive end-to-end feature implementation. Maintain clear technical documentation for data flows, model architectures, and deployment processes. Mentor junior engineers on best practices in ML system design and operationalization. Required Skills & Qualifications 2–6 years of experience in ML engineering, applied ML, or data science with a strong focus on production systems.
Proficiency in Python, including experience with ML libraries such as PyTorch, TensorFlow, Scikit-learn, or Hugging Face. Solid understanding of data preprocessing, feature engineering, and ML model lifecycle management. Experience deploying models as REST APIs or microservices in cloud or containerized environments. Strong knowledge of relational and NoSQL databases, and familiarity with data pipeline tools. Good understanding of MLOps concepts, including CI/CD for ML, model monitoring, and A/B testing. Preferred Qualifications Exposure to speech or voice analytics, including speech-to-text systems and audio signal processing. Familiarity with large language models (LLMs), embeddings, or retrieval-augmented generation (RAG) pipelines. Experience with distributed training, GPU optimization, or large-scale batch inference. Knowledge of vector databases (FAISS, Pinecone) and real-time recommendation systems. Prior experience in SaaS product environments targeting enterprise clients. Success in This Role Means Models integrated into production systems delivering measurable improvements to business KPIs. High availability, low-latency inference pipelines powering real-time features for large enterprise users. Rapid iteration cycles from model conception to production deployment. Strong, well-documented, and reusable ML infrastructure supporting ongoing product and feature launches. You Will Excel in This Role If You Are passionate about building ML systems that create real business impact, not just offline experiments. Enjoy working with noisy, multilingual, and large-scale datasets in high-stakes settings. Love solving engineering challenges involved in scaling AI solutions to thousands of enterprise users. Thrive in a fast-paced, ownership-driven environment where ideas translate quickly to live features. Value documentation, reproducibility, and collaboration as much as technical depth. How to Apply Email your updated CV to careers@darwix.ai. Subject Line: Application – Machine Learning Engineer – [Your Name]. (Optional): Include links to your GitHub, published papers, blog posts, or a short note on a real-world ML system you helped deploy and what challenges you overcame. This is a unique opportunity to join the core engineering team at one of India's most innovative GenAI startups and shape how enterprise teams leverage AI for real-time decision-making and revenue growth. If you are ready to build AI at scale, Darwix AI wants to hear from you.
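To illustrate the "models as scalable APIs" requirement in this listing, here is a minimal FastAPI sketch that serves a scoring model behind a REST endpoint. The service name, request schema, and score_call placeholder are assumptions for illustration; a real deployment would load a trained model artifact and add monitoring around the handler.

```python
# Minimal model-serving sketch; run with:  uvicorn serve:app --port 8080
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="call-scoring-service")

class CallTranscript(BaseModel):
    call_id: str
    text: str

def score_call(text: str) -> float:
    """Placeholder for a real model's predict(); returns a dummy length-based score."""
    return min(1.0, len(text.split()) / 500)

@app.post("/score")
def score(transcript: CallTranscript):
    value = score_call(transcript.text)
    # In production this handler would also emit latency and drift metrics for monitoring.
    return {"call_id": transcript.call_id, "score": round(value, 3)}
```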

Posted 3 weeks ago

Apply

7.0 years

40 Lacs

Faridabad, Haryana, India

Remote

Experience: 7.00+ years. Salary: INR 4000000.00 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove). (*Note: This is a requirement for one of Uplers' clients - MatchMove.) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python. MatchMove is looking for: Technical Lead - Data Platform. As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume. Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
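To illustrate the PySpark-plus-Iceberg pipeline work described in this listing, here is a minimal sketch of a batch curation step that reads DMS landing data and appends it to an Iceberg table. The bucket paths, catalog name, table name, and column names are illustrative assumptions, and the Iceberg Spark runtime package is assumed to be on the classpath (a Glue-backed catalog would use the Glue catalog implementation instead of the hadoop type shown here).

```python
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("curate-transactions")
         .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.lake.type", "hadoop")
         .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse/")
         .getOrCreate())

# Read CDC output landed by DMS (path is an illustrative placeholder).
raw = spark.read.parquet("s3://example-bucket/dms/transactions/")

curated = (raw
           .withColumn("txn_date", F.to_date("txn_ts"))   # partition-friendly date column
           .filter(F.col("amount") > 0)                    # basic data-quality gate
           .dropDuplicates(["txn_id"]))                    # de-duplicate replayed CDC rows

# Append into an existing Iceberg table registered in the catalog; downstream consumers
# (Athena, fraud engines, dashboards) query the same table with time-travel support.
curated.writeTo("lake.payments.transactions_curated").append()
```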

Posted 3 weeks ago

Apply

7.0 years

40 Lacs

Greater Hyderabad Area

Remote

Experience: 7.00+ years. Salary: INR 4000000.00 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove). (*Note: This is a requirement for one of Uplers' clients - MatchMove.) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python. MatchMove is looking for: Technical Lead - Data Platform. As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume. Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI Engineer – Voice, NLP, and GenAI Systems Location: Sector 63, Gurgaon – 100% In-Office. Working Days: Monday to Friday, with 2nd and 4th Saturdays off. Working Hours: 10:30 AM to 8:00 PM. Experience: 2–6 years in AI/ML, NLP, or applied machine learning engineering. Apply at: careers@darwix.ai. Subject Line: Application – AI Engineer – [Your Name]. About Darwix AI Darwix AI is India's fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite — Transform+, Sherpa.ai, and Store Intel — powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement. Role Overview As an AI Engineer, you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams. Key Responsibilities. AI & NLP System Development: Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization. Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data. Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations. LLM & Prompt Engineering: Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases. Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules. Develop internal tools for model evaluation and prompt performance tracking. Productionization & Integration: Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI). Optimize inference time and resource utilization for real-time and batch processing needs. Implement monitoring and logging for production ML systems to track drift and failure cases. Data & Evaluation: Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance. Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks. Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency. Collaboration & Research: Work closely with product managers to translate business problems into model design requirements. Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI. Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders. Required Skills & Qualifications 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications.
Strong programming skills in Python, including libraries like PyTorch, TensorFlow, and Hugging Face Transformers. Experience with speech processing, audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines, embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai. Subject Line: Application – AI Engineer – [Your Name]. (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India's fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams, Darwix AI wants to hear from you.
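Since WER (word error rate) is called out in this listing as a key metric for the speech-to-text work, here is a minimal, dependency-free sketch of a word-level WER computation via Levenshtein edit distance; the sample reference and hypothesis strings are illustrative.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "please update my registered mobile number"
hyp = "please updated my mobile number"
print(f"WER = {word_error_rate(ref, hyp):.2f}")   # 2 edits over 6 reference words ~ 0.33
```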

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role name: Automation Test Lead (AI/ML). Years of exp: 5 - 8 yrs. About Dailoqa Dailoqa's mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry. Working at Dailoqa will be hard work; our environment is fluid and fast-moving, and you'll be part of a community that values innovation, collaboration, and relentless curiosity. We're looking for people who: Are proactive, curious, adaptable, and patient. Shape the company's vision and have a direct impact on its success. Have the opportunity for fast career growth. Have the opportunity to participate in the upside of an ultra-growth venture. Have fun 🙂 Don't apply if: You want to work on a single layer of the application. You prefer to work on well-defined problems. You need clear, pre-defined processes. You prefer a relaxed and slow-paced environment. Role Overview As an Automation Test Lead at Dailoqa, you'll architect and implement robust testing frameworks for both software and AI/ML systems. You'll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance. Key Responsibilities Test Automation Strategy & Framework Design: Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions. Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy. Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization). Continuous Testing & CI/CD Integration: Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD. Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing). AI/ML Model Validation: Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow. Validate model drift and retraining pipelines to ensure consistent performance in production. Quality Metrics & Reporting: Define and track KPIs: test coverage (code, data, scenarios), defect leakage rate, automation ROI (time saved vs. maintenance effort), and model accuracy thresholds. Report risks and quality trends to stakeholders in sprint reviews. Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation). Technical Requirements Must-Have 5–8 years in test automation, with 2+ years validating AI/ML systems.
Expertise in: Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter CI/CD: Jenkins, GitHub Actions, GitLab AI/ML testing: Model validation, drift detection, GenAI output evaluation Languages: Python, Java, or JavaScript Certifications: ISTQB Advanced, CAST, or equivalent. Experience with MLOps tools: MLflow, Kubeflow, TFX Familiarity with vector databases (Pinecone, Milvus) and RAG workflows. Strong programming/scripting experience in JavaScript, Python, Java, or similar Experience with API testing, UI testing, and automated pipelines Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias Familiarity with MLOps pipelines and automated validation of model performance in production Exposure to Agile/Scrum methodology and tools like Azure Boards Soft Skills Strong problem-solving skills for balancing speed and quality in fast-paced AI development. Ability to communicate technical risks to non-technical stakeholders. Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
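As a rough illustration of the GenAI output validation this listing describes, the sketch below shows a pytest-style check against a hypothetical chat endpoint. The URL, payload shape, and response schema are assumptions, not a real Dailoqa API; a real suite would also cover latency, fairness, and regression scenarios. Because LLM output is non-deterministic, the test asserts on properties of the answer rather than exact text.

```python
# Hypothetical pytest sketch for validating LLM chat responses over an HTTP API.
# The endpoint, request body, and "answer" field are assumptions for illustration.
import requests

CHAT_URL = "http://localhost:8000/v1/chat"   # placeholder model-serving endpoint

def ask(prompt: str) -> str:
    resp = requests.post(CHAT_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["answer"]             # assumed response schema

def test_answer_is_non_empty_and_grounded():
    answer = ask("What documents do I need to open a retail bank account?")
    assert answer.strip(), "model returned an empty answer"
    assert len(answer) < 2000, "answer exceeds the configured length budget"
    # Property-based grounding check instead of an exact-match assertion.
    assert any(term in answer.lower() for term in ("id", "identity", "address")), \
        "answer does not mention expected grounding terms"
```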

Posted 3 weeks ago

Apply

0 years

0 Lacs

Bhubaneshwar, Odisha, India

On-site

Company Description
DRIFT MEDIA is a digital marketing and design company founded in 2022, focused on providing high-quality content, design, and digital marketing services. Offering services such as 2D/3D graphic design, motion video, animation, digital marketing, SMO, SEO, PPC, growth strategies, and app development, we prioritize client satisfaction and customer interaction for business growth. Co-founded by Aditya Prasad Das and Swati Smita Patra, we believe in strategy and execution to build trust in a dynamic world.

Role Description
This is a full-time, on-site Social Media Manager role located in Bhubaneshwar at DRIFT MEDIA. The Social Media Manager will be responsible for managing social media marketing campaigns, creating engaging content, implementing content strategies, optimizing social media presence, and creating written communication for the company.

Qualifications
Social Media Marketing and Social Media Optimization (SMO) skills
Strong communication and writing abilities
Experience in developing content strategies
Knowledge of SEO and PPC strategies
Ability to work collaboratively in a team environment
Bachelor's degree in Marketing, Communications, or a related field

Hiring Creative Minds Only!
Position: Project Manager (Social Media)
Experience: 2+ years
Salary: Industry-standard hike + incentives
Location: Patia, Bhubaneswar, Odisha
Work Mode: Work from office (because we believe in team building)

We are a growing team that encourages members to share their decisions and suggestions on projects, driving further growth for the business and our clients.

Disclaimer: We only appreciate super creative people on the team.

Apply for the job: send your CV to contact.driftmedia@gmail.com or call 7735664732.

Posted 3 weeks ago

Apply

0.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka

On-site

At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we, as a Takeda team, can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we're committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0150071
Date posted: 07/07/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda's Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city that is India's epicenter of innovation, has been selected to be home to Takeda's recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda's ICC We Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey at Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

About the Role
We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda's technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, building MLOps workflows, and designing systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda's global mission of delivering transformative therapies to patients worldwide.

How You Will Contribute
Architect scalable and secure machine learning systems that integrate with Takeda's enterprise platforms, including R&D, manufacturing, and clinical trial operations.
Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks (a minimal MLflow tracking sketch follows this listing).
Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
Collaborate with Takeda's Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda's ethical AI standards in system deployment.
Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.

Skills and Qualifications
Education
Bachelor's, Master's, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.

Experience
At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.

Technical Skills
Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
Strong programming skills in Python and experience with machine learning libraries such as Scikit-learn, XGBoost, LightGBM, and TensorFlow.
Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
Familiarity with Databricks tools and services for scalable data workflows and model management.
Expertise in building robust observability and monitoring systems to track ML systems in production.
Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).

Soft Skills
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
Leadership ability to mentor teams and drive architecture-standardization initiatives.
Ability to manage projects independently and advocate for AI/ML adoption across Takeda.

Preferred Qualifications
Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
Publications or contributions to AI research or MLOps tooling communities.

WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
Competitive salary + annual performance bonus
Flexible work environment, including hybrid working
Comprehensive healthcare insurance plans for self, spouse, and children
Group Term Life Insurance and Group Accident Insurance programs
Health and wellness programs, including annual health screening and weekly health sessions for employees
Employee Assistance Program
5 days of leave every year for voluntary service, in addition to humanitarian leave
Broad variety of learning platforms
Diversity, Equity, and Inclusion programs
No Meeting Days
Reimbursements for home internet and mobile phone
Employee Referral Program
Leaves: paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We're not just transforming our company; we're improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
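To illustrate the MLflow-based tracking step this listing references, here is a minimal, hypothetical training run that logs parameters, metrics, and a model artifact. The dataset and model are placeholders, and model registration plus Airflow or Databricks orchestration are omitted; this is a sketch of the pattern, not Takeda's pipeline.

```python
# Minimal MLflow tracking sketch (assumes mlflow and scikit-learn installed).
# Synthetic data and a RandomForest stand in for a real training job.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                                        # hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    # In a registry-backed setup you would also pass registered_model_name=... here
    # so a downstream deployment pipeline can promote the model between stages.
    mlflow.sklearn.log_model(model, "model")                         # store the artifact
```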

Posted 3 weeks ago

Apply