
671 Drift Jobs - Page 13

JobPe aggregates listings so you can browse them in one place; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - DevOps Engineer

Responsibilities
- Designing and building infrastructure to support our AWS services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud, including system health indicators, trend identification, and anomaly detection (a hedged monitoring sketch follows this posting).
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.

Qualifications
- Bachelor's degree in CS or ECE.
- 3+ years of experience in a DevOps Engineer role.
- Strong experience in public cloud platforms (AWS, Azure, GCP), provisioning and managing core services (S3, EC2, RDS, EKS, ECR, EFS, SSM, IAM, etc.), with a focus on cost governance and budget optimization.
- Proven skills in containerization and orchestration using Docker, Kubernetes (EKS/AKS/GKE), and Helm.
- Familiarity with monitoring and observability tools such as SigNoz, OpenTelemetry, Prometheus, and Grafana.
- Adept at designing and maintaining CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, Bitbucket Pipelines, Nexus/Artifactory, and SonarQube to accelerate and secure releases.
- Proficient in infrastructure-as-code and GitOps provisioning with technologies like Terraform, OpenTofu, Crossplane, AWS CloudFormation, Pulumi, Ansible, and ArgoCD.
- Experience with cloud storage solutions and databases: S3, Glacier, PostgreSQL, MySQL, DynamoDB, Snowflake, Redshift.
- Strong communication skills, translating complex technical and analytical content into clear, actionable insights for stakeholders.

Preferred Qualifications
- Experience with advanced IaC and GitOps frameworks: OpenTofu, Crossplane, Pulumi, Ansible, and ArgoCD.
- Exposure to serverless and event-driven workflows (AWS Lambda, Step Functions).
- Experience operationalizing AI/ML workloads and intelligent agents (AWS SageMaker, Amazon Bedrock, canary/blue-green deployments, drift detection).
- Background in cost governance and budget management for cloud infrastructure.
- Familiarity with Linux system administration at scale.
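As a hedged illustration of the monitoring duties above (my sketch, not part of the posting), the snippet below pulls a basic EC2 health indicator from CloudWatch with boto3 and applies a naive threshold; the region, instance ID, and threshold are placeholder assumptions.

```python
# Minimal sketch: fetch average EC2 CPU utilization from CloudWatch with
# boto3 and flag samples over a naive threshold. Region, instance ID, and
# threshold are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute buckets
    Statistics=["Average"],
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "ALERT" if point["Average"] > 80.0 else "ok"   # naive static threshold
    print(f'{point["Timestamp"]:%H:%M} avg={point["Average"]:.1f}% {flag}')
```

A production system would feed such datapoints into trend and anomaly detection rather than a fixed threshold, but the retrieval pattern is the same.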

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Role Overview
EXL is seeking a seasoned Model Risk Management professional with hands-on expertise in model development, validation, and performance monitoring aligned with regulatory and internal standards. The ideal candidate will contribute to strengthening EXL's analytics capabilities across financial, risk, and operational models.

Key Responsibilities
Model Development
- Design, develop, and implement statistical or machine learning models for risk, operations, or finance domains.
- Document model assumptions, methodology, and limitations in accordance with internal MRM guidelines.
Model Validation & Review
- Conduct independent validations of models used across business units: performance testing, data integrity assessments, and sensitivity analyses.
- Evaluate conceptual soundness, accuracy, and robustness of model outputs.
- Prepare validation reports, highlighting key risks and suggesting mitigation strategies.
Model Monitoring & Compliance
- Design frameworks for continuous performance monitoring, e.g., backtesting, stability tracking, and drift detection (a minimal stability-index sketch follows this posting).
- Ensure compliance with regulatory standards such as SR 11-7, OCC 2011-12, and any local regulatory frameworks.
- Collaborate with internal stakeholders to uphold governance standards across the model lifecycle.

Requirements
- Advanced degree in a quantitative field (e.g., Statistics, Mathematics, Economics, Engineering, Data Science).
- 6+ years of experience in model development, validation, or MRM governance.
- Proficiency in tools such as Python, R, SAS, or similar analytical software.
- Familiarity with regulatory guidance and risk modeling frameworks.
- Strong documentation and stakeholder engagement skills.

Preferred Skills
- Exposure to BFSI models: credit risk, market risk, fraud, or customer segmentation.
- Experience working in hybrid or remote team environments.
- Certifications like FRM, CFA, or equivalent are a plus.
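For context on the stability tracking and drift detection named above, here is a minimal, hedged sketch (my example, not EXL's methodology) of a Population Stability Index check, one common way to quantify drift between a model's development sample and recent production data. PSI = sum over bins of (actual% - expected%) * ln(actual% / expected%); values above roughly 0.25 are often read as significant drift.

```python
# Minimal sketch: Population Stability Index (PSI) between a baseline
# (development) score distribution and a recent (production) one.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # keep new scores in range
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)          # avoid log(0) on empty bins
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)    # e.g., scores at model development
production = rng.normal(585, 55, 5_000)   # shifted recent scores
print(f"PSI = {psi(baseline, production):.3f}")   # > ~0.25: investigate drift
```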

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

The Market Development Programs team at Red Hat is seeking a skilled Reporting Analyst to enhance our analytics and reporting capabilities, supporting global and regional stakeholders across the Market Development organization. In this pivotal role, you will create and manage Salesforce reports, CRM Analytics (CRMA) analytical reports, and Tableau dashboards, delivering insightful data for strategic decision-making and operational excellence. Operating in a dynamic, results-driven environment, you'll work closely with cross-functional teams and senior leadership to provide accurate, actionable insights crucial for Quarterly Business Reviews (QBRs), regional meetings, and other essential reporting needs. This role reports directly to the Manager of Market Development Programs and offers an opportunity to significantly influence the organization's analytical practices and performance reporting.

What Will You Do
- Design, develop, and maintain comprehensive Salesforce reports to support day-to-day business operations and performance measurement
- Create advanced CRM Analytics (CRMA) reports and dashboards, providing detailed analysis to facilitate informed decision-making at global and regional levels
- Develop interactive and insightful Tableau dashboards that clearly visualize key performance metrics and trends
- Collaborate closely with stakeholders across various regions and functions to gather requirements and deliver tailored reporting solutions
- Provide analytical support and key data insights for Quarterly Business Reviews (QBRs), regional meetings, and ad-hoc reporting requests
- Ensure data accuracy, consistency, and timeliness across all reporting deliverables, maintaining the highest standards of quality
- Identify opportunities to streamline reporting processes, enhance efficiency, and drive continuous improvement within reporting and analytics practices
- Act as a trusted advisor to stakeholders, proactively identifying trends, insights, and recommendations based on data analysis

What Will You Bring
- Bachelor's degree in Business, Analytics, Information Systems, or a related field
- Minimum of 3-5 years of experience in a reporting or analytics role, preferably within sales, marketing, or related operational teams
- Proven expertise in creating and managing Salesforce reports and dashboards
- Solid experience with CRM Analytics (CRMA), developing detailed analytical reports and insights
- Proficiency with Tableau or similar data visualization tools; demonstrated capability in creating intuitive and impactful dashboards
- Strong analytical and problem-solving skills, with an ability to interpret complex data and translate insights into actionable business recommendations
- Excellent communication and interpersonal skills, able to effectively engage with diverse, global stakeholder groups
- High level of attention to detail, accuracy, and commitment to delivering high-quality work under tight deadlines
- Ability to thrive in a fast-paced, agile environment, adapting swiftly to shifting priorities
- Familiarity with sales and marketing systems such as Outreach, Drift, Marketo, and other related platforms is an added advantage

About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

About HYI.AI
HYI.AI is a Virtual Assistance and GenAI platform built for startups, entrepreneurs, and tech innovators. We specialize in offering virtual talent solutions, GenAI tools, and custom AI/ML deployments to help founders and businesses scale smarter and faster. We're on a mission to power the next wave of digital startups globally, and we're looking for talented Data Science Engineers to join us remotely.

Role Overview
We are seeking an experienced Data Science Engineer to build intelligent systems that deliver business impact through data. You will work at the intersection of engineering and applied science, developing machine learning models, deploying them into production, and ensuring they operate at scale. This freelance role is ideal for professionals who bring both statistical rigor and software engineering depth to real-world problems.

Key Responsibilities
- Design, build, and deploy machine learning models for real-time and batch applications
- Process and analyze large datasets to extract actionable insights
- Develop data pipelines and model workflows in production environments
- Work with product teams to frame data science solutions for business use cases
- Monitor model performance, retraining, and drift over time
- Communicate findings through clear visualizations, documentation, and reports

What We're Looking For
- Experience in data science, machine learning, or applied analytics
- Strong background in statistics, data structures, and algorithms
- Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch)
- Experience building end-to-end ML workflows, from exploration to deployment (a minimal scikit-learn sketch follows this posting)
- Familiarity with cloud platforms (AWS, GCP, or Azure) and MLOps practices
- Ability to work independently and translate ambiguous problems into scalable solutions

Preferred Skills
- Experience with tools like Airflow, MLflow, or Kubeflow
- Working knowledge of SQL and NoSQL databases
- Exposure to big data tools (Spark, Hadoop, Dask)
- Knowledge of model versioning, monitoring, and explainability techniques
- Experience in domains like NLP, computer vision, or time series analysis is a plus

What You'll Gain
- Collaborate with forward-thinking startups on data-driven challenges
- Flexible work hours and remote-first engagements
- Autonomy to lead technical decisions and build intelligent systems
- Be part of an exclusive network of top-tier data and tech professionals
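As a loose illustration of the "exploration to deployment" workflow named above (my example under assumed tooling, not the company's stack), the sketch below trains a small scikit-learn pipeline, evaluates it on a holdout set, and persists the artifact a serving layer could load later.

```python
# Minimal sketch: an end-to-end scikit-learn workflow -- train, evaluate,
# and persist a model artifact for a later deployment step.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")

joblib.dump(pipeline, "model.joblib")  # artifact picked up by the serving layer
```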

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 22 Lacs

Chennai, Tamil Nadu, India

On-site

Generative AI Engineer (Hybrid, India)

A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance.

Role & Responsibilities
- Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference.
- Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs.
- Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses (a minimal retrieval sketch follows this posting).
- Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost.
- Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS.
- Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability.

Skills & Qualifications
Must-Have
- 3-6 years Python backend or data engineering experience with strong OO & async patterns.
- Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus.
- Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity).
- Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling.
- Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance.
Preferred
- Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis).
- Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation).
- Understanding of data privacy standards (SOC 2, GDPR) in AI workloads.

Benefits & Culture Highlights
- Hybrid work model with flexible hours and quarterly in-person sprint planning.
- Annual upskilling stipend covering cloud certifications and research conferences.
- Collaborative, experimentation-driven culture where engineers influence product strategy.

Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software.

Skills: Git, monitoring, OO patterns, CI/CD, LlamaIndex, Python, feature governance, evaluation metrics (BLEU, ROUGE, perplexity), FAISS, prompt engineering, cloud, prompt engg, Azure ML, agent framework, async patterns, GenAI, LangChain, tokenisation, Docker, SageMaker, vector DB, Milvus, Pinecone
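To make the RAG responsibility above concrete, here is a minimal, hedged sketch (my example; the posting does not prescribe these libraries) of the retrieval half of a RAG pipeline using sentence-transformers and FAISS. The embedding model name and toy corpus are assumptions; a real system would add chunking, metadata, and an LLM call over the retrieved context.

```python
# Minimal sketch: embed documents, index them in FAISS, and fetch the
# top-k passages for a query -- the retrieval step of a RAG pipeline.
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above $500.",
    "The API rate limit is 100 requests per minute.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product == cosine here
index.add(doc_vecs)

query_vec = encoder.encode(["What is the API rate limit?"], normalize_embeddings=True)
scores, ids = index.search(query_vec, k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.2f}  {docs[i]}")   # passages to feed the LLM as context
```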

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 22 Lacs

Bengaluru, Karnataka, India

On-site

Generative AI Engineer (Hybrid, India)

A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance.

Role & Responsibilities
- Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference.
- Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs.
- Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses.
- Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost.
- Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS.
- Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability.

Skills & Qualifications
Must-Have
- 3-6 years Python backend or data engineering experience with strong OO & async patterns.
- Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus.
- Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity).
- Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling.
- Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance.
Preferred
- Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis).
- Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation).
- Understanding of data privacy standards (SOC 2, GDPR) in AI workloads.

Benefits & Culture Highlights
- Hybrid work model with flexible hours and quarterly in-person sprint planning.
- Annual upskilling stipend covering cloud certifications and research conferences.
- Collaborative, experimentation-driven culture where engineers influence product strategy.

Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software.

Skills: Git, monitoring, OO patterns, CI/CD, LlamaIndex, Python, feature governance, evaluation metrics (BLEU, ROUGE, perplexity), FAISS, prompt engineering, cloud, prompt engg, Azure ML, agent framework, async patterns, GenAI, LangChain, tokenisation, Docker, SageMaker, vector DB, Milvus, Pinecone

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 22 Lacs

Hyderabad, Telangana, India

On-site

Generative AI Engineer (Hybrid, India)

A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance.

Role & Responsibilities
- Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference.
- Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs.
- Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses.
- Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost.
- Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS.
- Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability.

Skills & Qualifications
Must-Have
- 3-6 years Python backend or data engineering experience with strong OO & async patterns.
- Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus.
- Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity).
- Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling.
- Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance.
Preferred
- Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis).
- Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation).
- Understanding of data privacy standards (SOC 2, GDPR) in AI workloads.

Benefits & Culture Highlights
- Hybrid work model with flexible hours and quarterly in-person sprint planning.
- Annual upskilling stipend covering cloud certifications and research conferences.
- Collaborative, experimentation-driven culture where engineers influence product strategy.

Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software.

Skills: Git, monitoring, OO patterns, CI/CD, LlamaIndex, Python, feature governance, evaluation metrics (BLEU, ROUGE, perplexity), FAISS, prompt engineering, cloud, prompt engg, Azure ML, agent framework, async patterns, GenAI, LangChain, tokenisation, Docker, SageMaker, vector DB, Milvus, Pinecone

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 22 Lacs

Gurugram, Haryana, India

On-site

Generative AI Engineer (Hybrid, India)

A fast-growing provider in the Enterprise Software & Artificial Intelligence services sector, we architect and deliver production-ready large-language-model platforms, data pipelines, and intelligent assistants for global customers. Our cross-functional squads blend deep ML expertise with robust engineering practices to unlock rapid business value while maintaining enterprise-grade security and compliance.

Role & Responsibilities
- Design, build, and optimise end-to-end LLM solutions covering data ingestion, fine-tuning, evaluation, and real-time inference.
- Develop Python micro-services that integrate LangChain workflows, vector databases, and tool-calling agents into secure REST and gRPC APIs.
- Implement retrieval-augmented generation (RAG) pipelines, embedding models, and semantic search to deliver accurate, context-aware responses.
- Collaborate with data scientists to productionise experiments, automate training schedules, and monitor drift, latency, and cost.
- Harden deployments through containerisation, CI/CD, IaC, and cloud GPU orchestration on Azure or AWS.
- Contribute to engineering playbooks, mentor peers, and champion best practices in clean code, testing, and observability.

Skills & Qualifications
Must-Have
- 3-6 years Python backend or data engineering experience with strong OO & async patterns.
- Hands-on building LLM or GenAI applications using LangChain/LlamaIndex and vector stores such as FAISS, Pinecone, or Milvus.
- Proficiency in prompt engineering, tokenisation, and evaluation metrics (BLEU, ROUGE, perplexity).
- Experience deploying models via Azure ML, SageMaker, or similar, including GPU optimisation and autoscaling.
- Solid grasp of MLOps fundamentals: Docker, Git, CI/CD, monitoring, and feature governance.
Preferred
- Knowledge of orchestration frameworks (Kubeflow, Airflow) and streaming tools (Kafka, Kinesis).
- Exposure to transformer fine-tuning techniques (LoRA, PEFT, quantisation).
- Understanding of data privacy standards (SOC 2, GDPR) in AI workloads.

Benefits & Culture Highlights
- Hybrid work model with flexible hours and quarterly in-person sprint planning.
- Annual upskilling stipend covering cloud certifications and research conferences.
- Collaborative, experimentation-driven culture where engineers influence product strategy.

Join us to turn breakthrough research into real-world impact and shape the next generation of intelligent software.

Skills: Git, monitoring, OO patterns, CI/CD, LlamaIndex, Python, feature governance, evaluation metrics (BLEU, ROUGE, perplexity), FAISS, prompt engineering, cloud, prompt engg, Azure ML, agent framework, async patterns, GenAI, LangChain, tokenisation, Docker, SageMaker, vector DB, Milvus, Pinecone

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

🚀 About the Role
We're looking for a relentlessly curious Data Engineer to join our team as a Marketing Platform Specialist. This role is for someone who will dedicate themselves to extracting every possible byte of data from marketing platforms - the kind of person who gets excited about discovering hidden API endpoints and undocumented features. You'll craft pristine, high-resolution datasets in Snowflake that fuel advanced analytics and machine learning across the business.

🎯 The Mission
Your singular focus: extract every drop of value from the world's most powerful marketing platforms. Where others use 10% of a tool's capabilities, you'll uncover the hidden 90% - from granular auction insights to ephemeral algorithm data. You'll build the intelligence layer that enables others to scale smarter, faster, and with precision.

🧪 What You'll Actually Do
Platform Data Extraction & Monitoring
- Reverse-engineer APIs across Meta, Google, TikTok, and others to extract hidden attribution data, auction signals, and algorithmic behavior
- Exploit beta features and undocumented endpoints to unlock advanced data streams
- Capture ephemeral data before it disappears - attribution snapshots, pacing drift, algorithm shifts
- Build real-time monitoring datasets to detect anomalies, pacing issues, and creative decay
- Scrape competitor websites and dissect tracking logic to reveal platform strategies

Business Scaling & Optimization Datasets
- Build granular spend and performance datasets with dayparting, marginal ROAS, and cost efficiency metrics
- Develop lookalike audience models enriched with seed performance, overlap scores, and fatigue indicators
- Create auction intelligence datasets with hour/geo/placement granularity, bid behaviors, and overlap tracking
- Design optimization datasets for portfolio performance, campaign cannibalization, and creative lifecycle decay
- Extract machine learning signals from Meta Advantage+, Smart Bidding configs, and TikTok optimization logs
- Build competitive intelligence datasets with SOV trends, auction pressure, and creative benchmarks

Advanced Feature & AI Data Engineering
- Extract structured data from advanced features like predictive analytics, customer match, and A/B testing tools
- Build multimodal datasets (ad copy, landing pages, video) ready for machine learning workflows
- Enrich historical marketing data using automated pipelines and AI-powered data cleaning

Unified Customer & Attribution Data
- Build comprehensive GA4 datasets using precise tagging logic and event architecture
- Unify data from Firebase, Klaviyo, Tealium, and offline sources into full-funnel CDP datasets
- Engineer identity resolution pipelines using hashed emails, device IDs, and privacy-safe matching
- Map cross-device customer journeys with detailed attribution paths and timestamp precision

Snowflake Architecture & Analytics Enablement
- Design and maintain scalable, semantic-layered Snowflake datasets with historical tracking and versioning
- Use S3 and data lakes to manage large-scale, unstructured data across channels
- Implement architectures suited for exploration, BI, real-time streaming, and ML modeling, including star schema, medallion, and Data Vault patterns
- Build dashboards and tools that reveal inefficiencies, scaling triggers, and creative performance decay

🎓 Skills & Experience Required
- 3+ years as a Data Engineer with deep experience in MarTech or growth/performance marketing
- Advanced Python for API extraction, automation, and orchestration (see the hedged extraction sketch after this posting)
- JavaScript proficiency for tracking and GTM customization
- Expert-level experience with GA4 implementation and data handling
- Hands-on experience with Snowflake, including performance tuning and scaling
- Comfortable working with S3 and data lakes for semi/unstructured data
- Strong SQL and understanding of scalable data models

🤞 Bonus Points if You Have
- Experience with dbt for transformation and modeling
- CI/CD pipelines using GitHub Actions or similar
- Knowledge of vector databases (e.g., Pinecone, Weaviate) for AI/ML readiness
- Familiarity with GPU computing for high-performance data workflows
- Data app development using R Shiny or Streamlit

🚀 You'll Excel Here If You Are:
- Relentlessly curious: you don't wait for someone to show you the API docs; you find the endpoints yourself
- Detail-obsessed: you notice the subtle changes, the disappearing fields, the data drift others overlook
- Self-directed: you don't need micromanagement; just give you the mission, and you'll reverse-engineer the map
- Comfortable with ambiguity: you can navigate undocumented features, partial datasets, and platform quirks without panic
- A great communicator: you explain technical decisions clearly, with just enough detail for analysts, marketers, and fellow engineers to move forward
- Product-minded: you think in terms of impact, not just pipelines; every dataset you build is a stepping stone to better strategy, smarter automation, or faster growth

🔥 Why the Conqueror:
⭐️ Shape the data infrastructure powering real business growth
💡 Join a purpose-driven, fast-moving team
📈 Work with fast-scaling e-commerce brands
🧠 Personal development budget to continuously sharpen your skills
🏠 Work remotely from anywhere
💼 3000 - 4200 euros gross salary/month

💛 About us
We are a growing team of passionate, performance-driven individuals on a mission to be the best at growing multiple e-commerce businesses with great products. For the past 7 years, we've gathered a powerful community of over 1 million people globally, empowering people to build and sustain healthy habits in an enjoyable way. Our digital and physical products, the Conqueror Challenges App and epic medals, have helped users walk, run, cycle, swim, and move through the virtual equivalents of iconic routes. We've partnered with Warner Bros., Disney, and others to launch global hits like THE LORD OF THE RINGS™, HARRY POTTER™, and STAR WARS™ virtual challenges. Now, we're stepping into an exciting new chapter! While continuing to grow our core business, we're actively acquiring and building new e-commerce brands. We focus on using our marketing and operational strengths to scale these businesses, striving to always be outstanding in everything we do and delivering more value to more people.
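As a hedged sketch of the API-extraction work this posting centers on (my example; the endpoint, auth header, and response shape are hypothetical placeholders, not any real platform's API), the snippet below walks a cursor-paginated REST endpoint and collects every row, ready to land in a warehouse table.

```python
# Minimal sketch: pull every page from a hypothetical marketing-platform
# REST endpoint using cursor pagination. Endpoint, token, and response
# fields ("data", "paging.next_cursor") are placeholder assumptions.
import requests

BASE_URL = "https://api.example-ads.com/v1/insights"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}        # placeholder auth

def fetch_all(params: dict) -> list[dict]:
    rows, cursor = [], None
    while True:
        page_params = dict(params, **({"after": cursor} if cursor else {}))
        resp = requests.get(BASE_URL, headers=HEADERS, params=page_params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload["data"])
        cursor = payload.get("paging", {}).get("next_cursor")
        if not cursor:                      # no further pages
            return rows

rows = fetch_all({"level": "ad", "date_preset": "yesterday"})
print(f"fetched {len(rows)} rows")  # next step: stage and COPY INTO Snowflake
```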

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderābād

Remote

We are united in our mission to make a positive impact on healthcare. Join Us!

- South Florida Business Journal, Best Places to Work 2024
- Inc. 5000 Fastest-Growing Private Companies in America 2024
- 2024 Black Book Awards, ranked #1 EHR in 11 Specialties
- 2024 Spring Digital Health Awards, "Web-based Digital Health" category for EMA Health Records (Gold)
- 2024 Stevie American Business Award (Silver), New Product and Service: Health Technology Solution (Klara)

Who we are:
We Are Modernizing Medicine (WAMM)! We're a team of bright, passionate, and positive problem-solvers on a mission to place doctors and patients at the center of care through an intelligent, specialty-specific cloud platform. Our vision is a world where the software we build increases medical practice success and improves patient outcomes. Founded in 2010 by Daniel Cane and Dr. Michael Sherling, we have grown to over 3400 combined direct and contingent team members serving eleven specialties, and we are just getting started! ModMed's global headquarters is based in Boca Raton, FL, with a growing office in Hyderabad, India, and a robust remote workforce across the US, Chile, and Germany.

ModMed is hiring a driven ML Ops Engineer 2 to join our positive, passionate, and high-performing team focused on scalable ML systems. This is an exciting opportunity: you will collaborate with data scientists, engineers, and other cross-functional teams to ensure seamless model deployment, monitoring, and automation. If you're passionate about cloud infrastructure, automation, and optimizing ML pipelines, this is the role for you within a fast-paced Healthcare IT company that is truly Modernizing Medicine!

Key Responsibilities:
- Model Deployment & Automation: Develop, deploy, and manage ML models on Databricks using MLflow for tracking experiments, managing models, and registering them in a centralized repository (a hedged MLflow sketch follows this posting).
- Infrastructure & Environment Management: Set up scalable and fault-tolerant infrastructure to support model training and inference in cloud environments such as AWS, GCP, or Azure.
- Monitoring & Performance Optimization: Implement monitoring systems to track model performance, accuracy, and drift over time. Create automated systems for re-training and continuous learning to maintain optimal performance.
- Data Pipeline Integration: Collaborate with the data engineering team to integrate model pipelines with real-time and batch data processing frameworks, ensuring seamless data flow for training and inference.

Skillset & Qualifications:
- Model Deployment: Experience deploying models in production using cloud platforms like AWS SageMaker, GCP AI Platform, or Azure ML Studio.
- Version Control & Automation: Experience with MLOps tools such as MLflow, Kubeflow, or Airflow to automate and monitor the lifecycle of machine learning models.
- Cloud Expertise: Experience with cloud-based machine learning services on AWS, Google Cloud, or Azure, ensuring that models are scalable and efficient. Engineers must be skilled in measuring and optimizing model performance through metrics like AUC, precision, recall, and F1-score, ensuring that models are robust and reliable in production settings.
- Education: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, or a related technical field.
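To illustrate the MLflow tracking and registration duty above, here is a minimal, hedged sketch (my example, not ModMed's setup); the tracking URI, experiment name, and registered model name are placeholder assumptions, and it presumes a reachable MLflow tracking server with a model registry.

```python
# Minimal sketch: log an experiment run and register the model with MLflow.
# Tracking URI, experiment, and model names are hypothetical placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

mlflow.set_tracking_uri("http://localhost:5000")   # placeholder server
mlflow.set_experiment("demo-classifier")

X, y = load_iris(return_X_y=True)
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        "model",
        registered_model_name="demo-classifier",  # adds it to the registry
    )
```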
ModMed in India Benefit Highlights:
- High growth, collaborative, transparent, fun, and award-winning culture
- Comprehensive benefits package including medical for you, your family, and your dependent parents
- Company-supported community engagement opportunities, along with a paid Voluntary Time Off day to use for volunteering in your community of interest
- Global presence and in-person collaboration opportunities; dog-friendly HQ (US), hybrid office-based roles, and remote availability
- Company-sponsored Employee Resource Groups that provide engaged and supportive communities within ModMed

ModMed Benefits Highlights:
At ModMed, we believe it's important to offer a competitive benefits package designed to meet the diverse needs of our growing workforce. Eligible Modernizers can enroll in a wide range of benefits:

India
- Meals & Snacks: Enjoy complimentary office lunches & dinners on select days and healthy snacks delivered to your desk
- Insurance Coverage: Comprehensive health, accidental, and life insurance plans, including coverage for family members, all at no cost to employees
- Allowances: Annual wellness allowance to support your well-being and productivity
- Earned, casual, and sick leaves to maintain a healthy work-life balance
- Bereavement leave for difficult times and extended medical leave options
- Paid parental leaves, including maternity, paternity, adoption, surrogacy, and abortion leave
- Celebration leave to make your special day even more memorable, and company-paid holidays to recharge and unwind

United States
- Comprehensive medical, dental, and vision benefits, including a company Health Savings Account contribution
- 401(k): ModMed provides a matching contribution each payday of 50% of your contribution deferred on up to 6% of your compensation. After one year of employment with ModMed, 100% of any matching contribution you receive is yours to keep.
- Generous Paid Time Off and Paid Parental Leave programs
- Company-paid Life and Disability benefits, Flexible Spending Account, and Employee Assistance Programs
- Company-sponsored Business Resource & Special Interest Groups that provide engaged and supportive communities within ModMed
- Professional development opportunities, including tuition reimbursement programs and unlimited access to LinkedIn Learning
- Global presence and in-person collaboration opportunities; dog-friendly HQ (US), hybrid office-based roles and remote availability for some roles
- Weekly catered breakfast and lunch, treadmill workstations, Zen, and wellness rooms within our BRIC headquarters

PHISHING SCAM WARNING: ModMed is among several companies recently made aware of a phishing scam involving imposters posing as hiring managers recruiting via email, text, and social media. The imposters are creating misleading email accounts, conducting remote "interviews," and making fake job offers in order to collect personal and financial information from unsuspecting individuals. Please be aware that no job offers will be made from ModMed without a formal interview process, and valid communications from our hiring team will come from our employees with a ModMed email address (first.lastname@modmed.com). Please check senders' email addresses carefully. Additionally, ModMed will not ask you to purchase equipment or supplies as part of your onboarding process. If you are receiving communications as described above, please report them to the FTC website.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderābād

Remote

Core Technology: Machine Learning
Level: 6+ years
Primary Skills: Google Vertex AI, Python
Secondary Skills: ML models, GCP
Open Positions: 2
Job Location: Hyderabad
Work Mode: Remote
Deployment Type: Full-time

Job Description
Primary Skills Required:
- Strong understanding of MLOps practices
- Hands-on experience in deploying and productionizing ML models
- Proficiency in Python
- Experience with Google Vertex AI
- Solid knowledge of machine learning algorithms such as XGBoost, classification models, and BigQuery ML (BQML)

Key Responsibilities:
- Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
- Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
- Work with data scientists to productionize ML models and support experimentation workflows (a hedged Vertex AI deployment sketch follows this posting).
- Implement model monitoring and alerting for drift, performance degradation, and data quality issues.
- Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker.
- Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar.
- Ensure proper security, versioning, and compliance across the ML lifecycle.
- Maintain documentation, artifacts, and reusable templates for reproducibility and auditability.
- A GCP Machine Learning Engineer (MLE) certification is a plus.

Job Types: Full-time, Permanent, Fresher
Schedule: Day shift / Morning shift

Application Question(s):
- Do you have hands-on experience in deploying and productionizing ML models?
- Are you proficient in Python?
- Do you have experience with Google Vertex AI?
- Do you have solid knowledge of machine learning algorithms such as XGBoost, classification models, and BigQuery ML (BQML)?

Work Location: In person
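As a hedged sketch of the model-productionizing work described above (my example; the project, bucket, and serving container URI are placeholder assumptions), the snippet below uploads a trained model to Vertex AI and deploys it to an online endpoint with the google-cloud-aiplatform SDK.

```python
# Minimal sketch: upload a trained model artifact to Vertex AI and deploy
# it to an endpoint. Project, bucket, and container image are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

model = aiplatform.Model.upload(
    display_name="xgb-classifier",
    artifact_uri="gs://my-bucket/models/xgb/",   # exported model artifacts
    serving_container_image_uri=(
        # assumed prebuilt XGBoost serving image; verify for your version
        "us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest"
    ),
)

endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.resource_name)
# endpoint.predict(instances=[[...]]) would then serve online requests
```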

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Forsvaret (Norwegian Armed Forces)

Would you like to manage real estate, buildings, and facilities (EBA) for the Search and Rescue Helicopter Service?

The leadership of 130 Air Wing (the Search and Rescue Helicopter Service) is located at Luftforsvarets stasjon Sola, with sub-units at Banak, Bodø, Florø, Rygge, and Ørland. We are looking for someone who is accurate and flexible and has experience in building and construction. In this position you will join the 130 Air Wing staff for EBA management across all bases where the Search and Rescue Helicopter Service operates. Among other things, you will be responsible for the operation and management of EBA and lead working groups within your own field. You will also drive the development of the collaboration with Forsvarsbygg, and lead and coordinate the Air Wing's overall EBA processes.

We are looking for someone who wants to become part of a professional, supportive, and inclusive working environment. Here we all work toward the same goal: ensuring that the Search and Rescue Helicopter Service is able to save lives. You will work closely with all units in 130 Air Wing, both locally at Sola and at the bases around the country. You will also serve as the Air Wing's primary point of contact with Forsvarsbygg and the Joint Rescue Coordination Centre on EBA matters. In addition, you will assist with HSE work related to EBA, fire safety, and the external environment. This will give you good opportunities for professional development and for advancing the EBA discipline within the Air Wing.

Qualifications:
- Bachelor's degree in building and construction.
- Minimum 3 years of experience in public administration and case handling: preparing and processing applications, permits, or decisions; case handling under the Public Administration Act, the Planning and Building Act, or other relevant legislation; contact and coordination with municipal or state authorities; knowledge of impartiality rules, appeal processes, and freedom-of-information rules.
- Experience with the operation of larger building portfolios.
- You must be able to obtain security clearance at the HEMMELIG (SECRET) / NATO SECRET level before taking up the position.
- Good command of standard office tools, including spreadsheets, word processing, presentation tools, and email systems (preferably Microsoft Office or equivalent).
- Good oral and written communication skills in Norwegian.

Desirable:
- Experience from the Armed Forces beyond initial conscript service and time as an apprentice.
- Knowledge of the organization and operations of the Armed Forces, with an understanding of how the Armed Forces are structured as a state actor and how support functions such as EBA contribute to operational activity.
- Project experience.
- Ability to navigate SAP efficiently to register, follow up, and retrieve information related to the management and follow-up of EBA measures.

Personal qualities. We are looking for someone who:
- Is structured and systematic: you can plan, prioritize, and follow up tasks in an orderly way.
- Is responsible: you take ownership of your tasks and deliver to agreed deadlines and requirements.
- Is collaborative: you communicate well and enjoy working across disciplines with both military and civilian actors.
- Shows initiative: you see what needs to be done and propose improvements and solutions without waiting for explicit instructions.
- Is flexible and adaptable: you handle changes and new priorities constructively in a dynamic workday.
- Has a good understanding of roles: you adapt to the structure and requirements of the Armed Forces and understand your role in a staff with operational and administrative functions.
- Has high integrity and is security-conscious: you work conscientiously in line with applicable routines, rules, and security regulations.

We offer:
- Good opportunities for personal and professional development, with many varied and interesting tasks
- Flexible working hours, including the option to exercise during working hours
- Courses and training as needed
- Salary per the state pay scale as Senior Consultant (Seniorkonsulent), position code 1363, currently NOK 634,221 - 730,941 per year. A statutory 2% contribution to the Norwegian Public Service Pension Fund (Statens pensjonskasse) is deducted from the salary.

Due to the holiday period, the selection and interview process will begin in August.

Employer: Forsvaret
Application deadline: 27.07.2025
Full-time/Part-time: Full-time
Type of employment: Permanent
Education requirement: Required
Work experience: Desirable
Address: 4050 SOLA
Number of positions: 1
Career code: 6281893
See here for other jobs from Forsvaret

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Area(s) of responsibility

About Birlasoft
Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design-thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CK Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

About the Job
Ability to relate product functionality to business processes, and thus offer implementation advice to customers on how to meet their various business scenarios using Oracle SCM.

Job Title: Application Developer
Location: Bangalore/Pune
Educational Background: BE/BTech

Key Responsibilities
Must-Have Skills:
- Experience: overall 7+ years, relevant 5+ years.
- 5+ years of automation development experience using Ansible, Terraform, Chef, Python, shell scripts, etc. as the automation tooling.
- Develop automation for different infrastructure technologies and services (network, storage, middleware, database, etc.) using Northern's approved automation tools and technologies.
- ITSM process experience utilizing ticketing tools like Remedy or ServiceNow.
- At least 3 years supporting a BFSI organization/client.

Nice to Have:
- Experience developing and automating CI/CD pipelines using GitHub Actions and Terraform.
- Experience identifying drift patterns and implementing remediation workflows, and experience with CMDB (see the drift-detection sketch after this posting).
- Knowledge of designing drift/configuration detection and management solutions using Ansible, Chef, Terraform, etc.
- Experience installing and administering applications in Unix, Linux, and Windows environments, including debugging and command-line work.
- Familiarity with IaaS (Infrastructure as a Service) or PaaS (Platform as a Service).
- Experience implementing automated solutions for disaster recovery of a web application stack.
- Familiarity with development tools, e.g., JIRA, Bitbucket, Bamboo, Maven.
- Familiarity with Service-Oriented Architecture (SOA), web services, SOAP, and JSON.
- Familiarity with REST API integration.
- Familiarity with containerization concepts; Docker and Kubernetes a plus.
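The drift-detection work mentioned above can be illustrated with a small, hedged sketch (my example, not Birlasoft's tooling): Terraform's documented `-detailed-exitcode` flag makes `terraform plan` return exit code 2 when live infrastructure has diverged from the configuration, which a wrapper can turn into a remediation or ticketing trigger. The module path and the follow-up action are hypothetical.

```python
# Minimal sketch: detect configuration drift via `terraform plan
# -detailed-exitcode` (0 = no changes, 1 = error, 2 = drift/changes)
# and hand off to a remediation step. Paths and the follow-up action
# are hypothetical placeholders.
import subprocess
import sys

def check_drift(workdir: str) -> bool:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        sys.exit(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2      # 2 means the plan is non-empty: drift

if check_drift("./infra/prod"):        # placeholder module path
    print("Drift detected: open a Remedy/ServiceNow ticket or trigger remediation.")
else:
    print("Infrastructure matches configuration.")
```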

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: GSPANN is a global IT services and consultancy provider headquartered in Milpitas, California (U.S.A.). With five global delivery centers across the globe, GSPANN provides digital solutions that support the customer buying journeys of B2B and B2C brands worldwide. With a strong focus on innovation and client satisfaction, GSPANN delivers cutting-edge solutions that drive business success and operational excellence. GSPANN helps retail, finance, manufacturing, and high-technology brands deliver competitive customer experiences and increased revenues through our solution delivery, technologies, practices, and operations for each client. For more information, visit www.gspann.com

Job Position (Title): AI Ops + ML Ops Engineer
Experience Required: 4+ years
Job Type: Full-time
Number of Positions: 3 (1 senior and 2 junior)
Location: Hyderabad/Pune/Gurugram

Technical Skill Requirements
Mandatory Skills: ML Ops, DevOps, Python, Cloud, AI Ops

Role and Responsibilities
Strategic & Leadership:
- Architect and lead the implementation of scalable AIOps and MLOps frameworks.
- Mentor junior engineers and data scientists on best practices in model deployment and operational excellence.
- Collaborate with product managers, SREs, and business stakeholders to align technical strategies with organizational goals.
- Define and enforce engineering standards, SLAs, SLIs, and SLOs for ML and AIOps services.

MLOps Focus:
- Design and manage ML CI/CD pipelines for training, testing, deployment, and monitoring using tools like Kubeflow, MLflow, or Airflow.
- Implement advanced monitoring for model performance (drift, latency, accuracy) and automate retraining workflows.
- Lead initiatives on model governance, reproducibility, traceability, and compliance (e.g., FAIR, audit logging).

AIOps Focus:
- Develop AI/ML-based solutions for proactive infrastructure monitoring, predictive alerting, and intelligent incident management.
- Integrate and optimize observability tools (Prometheus, Grafana, ELK, Dynatrace, Splunk, Datadog) for anomaly detection and root cause analysis.
- Automate incident response workflows using playbooks, runbooks, and self-healing mechanisms.
- Use statistical methods and ML to analyze logs, metrics, and traces at scale (a minimal anomaly-detection sketch follows this posting).

Required Skills
- 4+ years of experience in DevOps, MLOps, or AIOps, with at least 2+ years in a leadership or senior engineering role.
- Expert-level proficiency in Python and Bash, and familiarity with Go or Java.
- Deep experience with containerization (Docker), orchestration (Kubernetes), and cloud platforms (AWS, GCP, Azure).
- Proficient with CI/CD tools and infrastructure-as-code (Terraform, Ansible, Helm).
- Strong understanding of ML lifecycle management, model monitoring, and data pipeline orchestration.
- Experience deploying and maintaining large-scale observability and telemetry systems.

Preferred Qualifications:
- Experience with streaming data platforms: Kafka, Spark, Flink.
- Hands-on experience with a service mesh (Istio/Linkerd) for traffic and security management.
- Familiarity with data security, privacy, and compliance standards (e.g., GDPR, HIPAA).
- Certifications in AWS/GCP DevOps, Kubernetes, or MLOps are a strong plus.

Why choose GSPANN
At GSPANN, we don't just serve our clients - we co-create. The GSPANNians are passionate technologists who thrive on solving the toughest business challenges, delivering trailblazing innovations for marquee clients.
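To make the "statistical methods to analyze metrics at scale" duty above concrete, here is a minimal, hedged sketch (my example; the series is synthetic): a rolling z-score detector that flags metric samples far outside their recent history, the simplest building block of AIOps-style anomaly detection. A real pipeline would read from Prometheus or another telemetry store instead of generating data.

```python
# Minimal sketch: flag metric samples whose rolling z-score exceeds a
# threshold. The latency series is synthetic with an injected incident.
import numpy as np

rng = np.random.default_rng(1)
latency_ms = rng.normal(120, 10, 500)
latency_ms[460:] += 80            # inject an incident near the end

window, threshold = 60, 3.0
for i in range(window, len(latency_ms)):
    history = latency_ms[i - window:i]
    z = (latency_ms[i] - history.mean()) / (history.std() + 1e-9)
    if abs(z) > threshold:
        print(f"t={i}: latency {latency_ms[i]:.0f} ms, z={z:.1f} -> anomaly")
```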
This collaborative spirit fuels a culture where every individual is encouraged to sharpen their skills, feed their curiosity, and take ownership to learn, experiment, and succeed. We believe in celebrating each other's successes, big or small, and giving back to the communities we call home. If you're ready to push boundaries and be part of a close-knit team that's shaping the future of tech, we invite you to carry forward the baton of innovation with us. Let's Co-Create the Future, Together.

Discover Your Inner Technologist: Explore and expand the boundaries of tech innovation without the fear of failure.
Accelerate Your Learning: Shape your career while scripting the future of tech. Seize the ample learning opportunities to grow at a rapid pace.
Feel Included: At GSPANN, everyone is welcome. Age, gender, culture, and nationality do not matter here; what matters is YOU.
Inspire and Be Inspired: When you work with the experts, you raise your game. At GSPANN, you're in the company of marquee clients and extremely talented colleagues.
Enjoy Life: We love to celebrate milestones and victories, big or small. Ever so often, we come together as one large GSPANN family.
Give Back: Together, we serve communities. We take steps, small and large, so we can do good for the environment, weaving sustainability and social change into our endeavors.

We invite you to carry forward the baton of innovation in technology with us. Let's Co-Create.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About HighRadius
HighRadius is a Fintech enterprise Software-as-a-Service (SaaS) company that leverages Artificial Intelligence-based Autonomous Systems to help companies automate Accounts Receivable and Treasury processes. HighRadius products like the Integrated Receivables platform and RadiusOne AR suite reduce cycle times in order-to-cash processes by automating receivables and payment processes across credit, electronic billing and payment processing, cash application, deductions, and collections. With offices located in the USA, UK, and India, our products provide value to a wide range of customers and are especially relevant to industries like consumer products, manufacturing, distribution, energy, and others that sell products or provide a service to other businesses.

About the Role
As a Chat Marketing Specialist, your primary responsibility will be interacting with website visitors through an automated chat system. You will need a deep understanding of the website architecture and HighRadius' products, services, and offerings. You should be able to hold a real-time, meaningful conversation with a website visitor based on outbound messaging or inbound requests. One of the most important aspects of the role is providing relevant information to website visitors to generate high-quality marketing leads (MQLs).

Essential duties of the position:
- Understanding a website visitor's profile based on their past-to-present interactions on the website and other sources of information about the visitor
- Responding to visitors' inbound sales queries
- Sending outbound messages to website visitors, taking into account current website activity, past interactions, information on file, etc.
- Holding meaningful, process-related conversations with visitors to generate MQLs who are interested in a conversation with the sales team
- Collaborating with other teams (inside sales, direct sales, customer success, partners, and others) as required
- Analyzing visitor interactions to identify trends, common issues, and opportunities for process improvement
- Updating the chatbot's responses and knowledge base as needed to ensure accuracy and relevance

Desired requirements:
- Graduation in any field
- 2-3 years of experience in sales/inside sales, non-voice customer service (through live chat), technical support, or related fields is preferred
- Prior experience working on sales chatbot platforms like Qualified, Drift, etc. will be a huge plus
- Digital and tech-savvy, with strong written and verbal communication skills
- Able to multitask and comfortable using Salesforce CRM
- Open to working in rotational shifts, 24x5, in a target-driven environment

About HighRadius and the Role of the Website Chatbot in our Digital Marketing Strategy:
HighRadius is a Fintech unicorn, a centaur (>$100 million ARR), and a Gartner Magic Quadrant leader 👋 We are building "autonomous" finance software 💡 We process trillions (with a T) of dollars' worth of financial data 💵 and have the trust of 700+ Fortune companies, including Unilever and Google. HighRadius was bootstrapped for 10+ years; then we raised $475 million 🙌 We have the "frugality" of a bootstrapped company and the "ambition" to build a long-lasting company beyond IPO. Chatbot strategy plays a pivotal role in HighRadius' lead-generating motion, and hence drives high ARR for the company. This is also a channel that requires continuous innovation. Hence, this coveted team is expected to be able to think on its feet, accurately assess an incoming visitor's intent, and give the visitor the best possible direction by converting them into the sales funnel.

What You'll Get
- Competitive salary
- Fun-filled work culture (https://www.highradius.com/culture/)
- Equal employment opportunities
- Opportunity to build with a pre-IPO Global SaaS Centaur

Posted 1 month ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

GCP Infrastructure Lead
Location: Bangalore, Pune
Experience: 6+ years

Responsibilities:
- 5+ years of demonstrated relevant experience deploying and supporting public cloud infrastructure (GCP as primary), IaaS and PaaS.
- Experience configuring and managing GCP infrastructure environment components:
  - Foundation components: networking (VPC, VPN, Interconnect, firewall, and routes), IAM, folder structure, organization policy, VPC Service Controls, Security Command Center, etc.
  - Application components: BigQuery, Cloud Composer, Cloud Storage, Google Kubernetes Engine (GKE), Compute Engine, Cloud SQL, Cloud Monitoring, Dataproc, Data Fusion, Bigtable, Dataflow, etc.
- Design and implement Identity and Access Management (IAM) policies, custom roles, and service accounts across GCP projects and organizations.
- Implement and maintain Workload Identity Federation, IAM Conditions, and least-privilege access models.
- Integrate Google Cloud audit logs, access logs, and security logs with enterprise SIEM tools (e.g., Splunk, Chronicle, QRadar, or Exabeam).
- Configure Cloud Logging, Cloud Audit Logs, and Pub/Sub pipelines for log export to SIEM (a hedged sketch follows this posting).
- Collaborate with the Security Operations Center (SOC) to define alerting rules and dashboards based on IAM events and anomalies.
- Participate in threat modeling and incident response planning involving IAM and access events.
- Maintain compliance with regulatory and internal security standards (e.g., CIS GCP Benchmark, NIST, ISO 27001).
- Monitor and report on IAM posture, access drift, and misconfigurations.
- Support periodic access reviews and identity governance requirements.

Required Skills and Abilities:
- Mandatory skills: GCP networking (VPC, firewall, routes & VPN), CI/CD pipelines, Terraform, shell/Python scripting
- Secondary skills: Composer, BigQuery, GKE, Dataproc
- Good to have: certification in any of the following: GCP Professional Cloud Architect, Cloud DevOps Engineer, Cloud Security Engineer, Cloud Network Engineer
- Participate in incident discussions and work with the team toward resolving platform issues.
- Good verbal and written communication skills; ability to communicate with customers, developers, and other stakeholders.
- Mentor and guide team members.
- Good presentation skills.
- Strong team player.

About Us:
We are a global leader in data warehouse migration and modernization to the cloud, empowering businesses by migrating their data, workloads, ETL, and analytics to the cloud through automation. We have our own products:
- Eagle: data warehouse assessment and migration planning
- Raven: automated workload conversion
- Pelican: automated data validation, which helps automate and accelerate data migration to the cloud
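As a hedged illustration of the log-export flow above (my sketch, not the employer's design; project, topic, and filter are placeholder assumptions), the snippet below creates a Cloud Logging sink that routes IAM audit-log entries to a Pub/Sub topic, from which a SIEM subscriber can consume them.

```python
# Minimal sketch: create a Cloud Logging sink routing IAM audit logs to a
# Pub/Sub topic for SIEM consumption. Project, topic, and filter values
# are hypothetical placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")   # placeholder project

sink = client.sink(
    "iam-audit-to-siem",
    filter_=(
        'logName:"cloudaudit.googleapis.com" '
        'AND protoPayload.serviceName="iam.googleapis.com"'
    ),
    destination="pubsub.googleapis.com/projects/my-project/topics/siem-export",
)

if not sink.exists():
    sink.create()
    # The sink's writer identity still needs Pub/Sub publisher rights
    # on the destination topic before entries will flow.
    print(f"Created sink {sink.name}")
```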

Posted 1 month ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Job Description: AI Data Scientist

Location: Remote
Department: Data & AI Engineering
Employment Type: Full-time
Experience Level: Mid-level

About the Role:
We are seeking an experienced AI Data Engineer to design, build, and deploy data pipelines and ML infrastructure to power scalable AI/ML solutions. This role sits at the intersection of data engineering, MLOps, and model deployment, supporting the end-to-end lifecycle from data ingestion to model production.

Key Responsibilities:

Data Engineering & Development
Design, develop, and train AI models to solve complex business problems and enable intelligent automation.
Design, develop, and maintain scalable data pipelines and workflows for AI/ML applications.
Ingest, clean, and transform large volumes of structured and unstructured data from diverse sources (APIs, streaming, databases, flat files).
Build and manage data lakes, data warehouses, and feature stores.
Prepare training datasets and implement data preprocessing logic.
Perform data quality checks, validation, lineage tracking, and schema versioning.

Model Deployment & MLOps
Package and deploy AI/ML models to production using CI/CD workflows.
Implement model inference pipelines (batch or real-time) using containerized environments (Docker, Kubernetes).
Use MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI) for model tracking, versioning, and deployment.
Monitor deployed models for performance, drift, and reliability.
Integrate deployed models into applications and APIs (e.g., REST endpoints).

Platform & Cloud Engineering
Manage cloud-based infrastructure (AWS, GCP, or Azure) for data storage, compute, and ML services.
Automate infrastructure provisioning using tools like Terraform or CloudFormation.
Optimize pipeline performance and resource utilization for cost-effectiveness.

Requirements:

Must-Have Skills
Bachelor's/Master's in Computer Science, Engineering, or a related field.
2+ years of experience in data engineering, ML engineering, or backend infrastructure.
Proficient in Python, SQL, and data processing frameworks (e.g., Spark, Pandas).
Experience with cloud platforms (AWS/GCP/Azure) and services like S3, BigQuery, Lambda, or Databricks.
Hands-on experience with CI/CD, Docker, and container orchestration (Kubernetes, ECS, EKS).

Preferred Skills
Experience deploying ML models using frameworks like TensorFlow, PyTorch, or scikit-learn.
Familiarity with API development (Flask/FastAPI) for serving models.
Experience with Airflow, Prefect, or Dagster for orchestrating pipelines.
Understanding of DevOps and MLOps best practices.

Soft Skills:
Strong communication and collaboration with cross-functional teams.
Proactive problem-solving attitude and ownership mindset.
Ability to document and communicate technical concepts clearly.
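To make the model tracking and versioning responsibility concrete, here is a minimal, hedged sketch of MLflow experiment tracking and model registration; the experiment and registered model names are hypothetical, and the toy dataset stands in for real training data.

# Minimal sketch: track a training run and register the model with MLflow.
# Assumes mlflow and scikit-learn are installed; names are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, random_state=42)
mlflow.set_experiment("demo-churn-model")

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    # Registering under a name creates a new version in the model registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-churn-model")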

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

Job Overview
We are seeking a highly skilled and self-sufficient Senior Flutter Developer to join our team. This role is ideal for someone who thrives in a fast-paced environment, is passionate about solving real-world problems, and has deep expertise in architecting and building scalable mobile applications. You will take the lead in developing and maintaining complex features, including real-time chat, location-based tracking, authentication, and push notifications.

Key Responsibilities
Architect and implement mobile apps using Flutter with clean architecture principles
Build and maintain complex state management systems using BLoC
Develop robust chat features using Socket.IO for real-time messaging
Integrate secure authentication flows, including OAuth providers like Google
Design and maintain offline-first capabilities using local storage (Hive, Drift, ObjectBox)
Develop and monitor real-time map-based features, including live tracking and geofencing
Ensure high performance, testability, and maintainability of the codebase
Collaborate with cross-functional teams on system design, product features, and delivery

Key Qualifications
Minimum of 4 years of hands-on experience with Flutter and Dart
Strong understanding of clean architecture (domain, data, and presentation separation)
Advanced experience using BLoC for managing complex state transitions
Experience integrating and managing Socket.IO for real-time communication
Experience building chat systems from scratch with support for offline messaging, syncing, and background notifications
Deep familiarity with local storage and caching strategies for chat and real-time apps
Experience implementing user authentication and account management flows
Solid understanding of OOP principles and design patterns such as Repository, Factory, and Singleton
Familiarity with push notification handling in all app states (foreground, background, terminated)
Demonstrated ability to work independently and drive projects to completion

Preferred Experience
Previous experience building chat or delivery applications
Experience with push notifications and authentication flows
Experience with background location tracking and optimizing battery usage
Strong testing discipline, with knowledge of unit and integration testing for BLoC and services

What We Offer
A high-impact role with the opportunity to take ownership of core application features
A collaborative environment focused on quality, performance, and usability
The chance to work on challenging problems with real-world impact
Remote and flexible working conditions

How to Apply
Please submit your resume, portfolio or GitHub link, and a brief cover letter outlining your relevant experience to info@cleaninv.com.
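The Socket.IO event flow behind the real-time chat responsibility above is language-agnostic; as an illustration, a minimal client loop is sketched below in Python using the python-socketio package (a full Flutter/Dart example is beyond a short snippet). The server URL, room, and event names are hypothetical.

# Minimal sketch of the Socket.IO event flow behind a real-time chat feature.
# Python client (python-socketio) used for illustration; the URL and the
# "chat_message"/"join" event names are hypothetical.
import socketio

sio = socketio.Client()

@sio.event
def connect():
    print("connected; joining room")
    sio.emit("join", {"room": "support"})

@sio.on("chat_message")
def on_chat_message(data):
    # A production client would persist messages locally for offline-first sync.
    print(f"{data['sender']}: {data['text']}")

sio.connect("http://localhost:3000")
sio.emit("chat_message", {"sender": "alice", "text": "Hello!"})
sio.wait()  # keep the client running to receive events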

Posted 1 month ago

Apply

5.0 years

2 - 8 Lacs

Hyderābād

On-site

AI/ML & GenAI Expertise
5+ years of experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment
Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers
Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search
Comfortable working with unstructured data (text, images) in real-world product environments
Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate

Ownership & Execution
Demonstrated ability to take full ownership of features or modules from architecture to delivery
Able to work independently in ambiguous situations and drive solutions with minimal guidance
Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions
Strong debugging, systems-thinking, and decision-making skills with an eye toward scalability and performance

Core Engineering
Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices
Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django
Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning
Experience working with databases (SQL/NoSQL), caching layers, and background job queues

Nice-to-Have Skills
Experience in startup or fast-paced product environments
5+ years of relevant experience
Familiarity with asynchronous programming patterns in Python
Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge
Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation
Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools
Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines
Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
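As a concrete illustration of the vector-search step in a RAG pipeline mentioned above, here is a minimal sketch using FAISS, with random vectors standing in for real text embeddings; the dimension and corpus size are hypothetical.

# Minimal sketch: nearest-neighbour retrieval with FAISS, the vector-search
# core of a RAG pipeline. Random vectors stand in for real text embeddings.
import faiss
import numpy as np

dim = 384                       # hypothetical embedding dimension
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim), dtype=np.float32)  # "document" embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; fine for small corpora
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)  # "query" embedding
distances, ids = index.search(query, 5)
print("top-5 document ids:", ids[0])  # these ids map back to source chunks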

Posted 1 month ago

Apply

3.0 years

6 - 9 Lacs

Bengaluru

On-site

JOB DESCRIPTION
You belong to the top echelon of talent in your field. At one of the world's most iconic financial institutions, where infrastructure is of paramount importance, you can play a pivotal role.

As an Infrastructure Engineer III at JPMorgan Chase within the Infrastructure Platforms team, you will utilize strong knowledge of software, applications, and technical processes within the infrastructure engineering discipline, applying your technical knowledge and problem-solving methodologies across multiple applications of moderate scope.

Job responsibilities
Applies technical knowledge and problem-solving methodologies to projects of moderate scope, with a focus on improving the data and systems running at scale, and ensures end-to-end monitoring of applications
Resolves most nuances and determines the appropriate escalation path
Executes conventional approaches to build or break down technical problems
Drives the daily activities supporting the standard capacity process applications
Collaborates with other mainframe technical teams to provide architectural and technical guidance
Develops automation, tooling, reports, and utilities to assist in mainframe access administration
Participates in on-call and off-hours technology events

Required qualifications, capabilities, and skills
Formal training or certification on infrastructure disciplines concepts and 3+ years of applied experience
Strong knowledge of one or more infrastructure disciplines such as hardware, networking terminology, databases, storage engineering, deployment practices, integration, automation, scaling, resilience, and performance assessments
Initialize new DASD volumes and add new tape volumes
Perform space management activities such as VTOC resizing, defrag, and compress/reclaim/release/cleaning of datasets
Manage and define SMS rules and ACS routines, along with reorganization of storage datasets (HSM CDS and catalog files)
Strong knowledge of various storage products like Catalog Recovery Plus, Vantage, STOPX37, DSS, JCLs, VSAM, HSM, CA1, CSM, DS8K, VTS, and GDPS replication
Adapt the given technology to meet shifting customer and business requirements
Demonstrated problem determination and resolution within the expected time frame; root cause analysis preparation and meeting the service-level agreement for submission
Hands-on experience with disaster recovery planning and performing recovery tests (tape recovery, DASD replication, tape replication) and managing Copy Services/GDPS
Ability to develop, document, and maintain procedures for system utilities such as backup/restore, performance tuning, and configuration of environments, as well as incremental backups as required

Preferred qualifications, capabilities, and skills
Strong problem-solving skills
Excellent verbal and written communication skills
Strong knowledge of programming in REXX or other languages strongly desired

ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success.
We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

ABOUT THE TEAM
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
We are seeking a skilled Associate Manager, AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo data estate.

Responsibilities
Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI toolkits.
Ensure adherence to Azure Information Protection (AIP), role-based access control (RBAC), and data security policies.
Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
5+ years of experience working within cross-functional IT or data operations teams.
2+ years of experience in a leadership or team coordination role within an operational or support environment.
Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
Excellent communication: ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
Customer-centric approach: strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
Problem ownership and accountability: proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
Growth mindset: willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
Operational excellence: experience in managing and improving large-scale operational services with a focus on scalability and reliability.
Site reliability and automation: understanding of SRE principles, automated remediation, and operational efficiencies.
Cross-functional collaboration: ability to build strong relationships with internal and external stakeholders through trust and collaboration.
Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
Knowledge of master data management concepts, data governance, and analytics.
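To ground the drift-detection responsibility above, here is a minimal, tool-agnostic sketch using a two-sample Kolmogorov-Smirnov test to compare a feature's training distribution against live traffic; the synthetic data and the 0.05 alert threshold are hypothetical choices, not a universal standard.

# Minimal sketch: detect feature drift with a two-sample KS test (scipy).
# Training and live samples here are synthetic; the 0.05 threshold is a
# hypothetical alerting choice.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference window
live_feature = rng.normal(loc=0.3, scale=1.0, size=5000)   # shifted production window

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}); trigger retraining/alert")
else:
    print("No significant drift detected")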

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming operations through tech and analytics.

Do You Dream Big? We Need You.

Job Description
Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager, Analytics

Purpose of the role
We seek a highly skilled Senior Machine Learning Engineer / Senior Data Scientist to design, develop, and deploy advanced machine learning models and systems. The ideal candidate will have deep expertise in machine learning algorithms, data processing, and model deployment, with a proven track record of delivering scalable AI solutions in production environments. This role requires strong technical leadership, collaboration with cross-functional teams, and a passion for solving complex problems.

Key tasks & accountabilities
Model development: design, develop, and optimize machine learning models for various applications, including but not limited to natural language processing, computer vision, and predictive analytics.
Data pipeline management: build and maintain robust data pipelines for preprocessing, feature engineering, and data augmentation to support model training and evaluation.
Model deployment: deploy machine learning models into production environments, ensuring scalability, reliability, and performance using tools like Docker, Kubernetes, or cloud platforms (preferably Azure).
Research and innovation: stay updated on the latest advancements in machine learning and AI, incorporating state-of-the-art techniques into projects to improve performance and efficiency.
Collaboration: work closely with data scientists, software engineers, product managers, and other stakeholders to translate business requirements into technical solutions.
Performance optimization: monitor and optimize model performance, addressing issues like model drift, bias, and scalability challenges.
Code quality: write clean, maintainable, and well-documented code, adhering to best practices for software development and version control (e.g., Git).
Mentorship: provide technical guidance and mentorship to junior engineers, fostering a culture of learning and innovation within the team.

Qualifications, Experience, Skills

Level of educational attainment required
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, or a related field. A PhD is a plus.

Previous work experience
5+ years of experience in machine learning, data science, or a related field.
Proven experience in designing, training, and deploying machine learning models in production.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).

Technical skills required
Proficiency in Python and libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, or Hugging Face.
Strong understanding of machine learning algorithms (e.g., regression, classification, clustering, deep learning, reinforcement learning, optimization).
Experience with big data technologies (e.g., Hadoop, Spark, or similar) and data processing pipelines.
Familiarity with MLOps practices, including model versioning, monitoring, and CI/CD for ML workflows.
Knowledge of software engineering principles, including object-oriented programming, API development, and microservices architecture.

Other skills required
Strong problem-solving and analytical skills.
Excellent communication and collaboration abilities.
Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
Experience with generative AI models or large language models (LLMs).
Familiarity with distributed computing or high-performance computing environments.
And above all of this, an undying love for beer!

We dream big to create a future with more cheers.
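As an illustration of the model-deployment pattern this role describes (serving a trained model behind a REST endpoint, containerizable with Docker/Kubernetes), a minimal FastAPI sketch follows; the model file, feature layout, and route are hypothetical placeholders.

# Minimal sketch: serve a trained scikit-learn model over REST with FastAPI.
# The model file, feature vector shape, and route are hypothetical.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-model-service")
model = joblib.load("model.joblib")  # assumed to be baked into the container image

class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector for a single example

@app.post("/predict")
def predict(req: PredictRequest):
    x = np.asarray(req.features, dtype=float).reshape(1, -1)
    return {"prediction": model.predict(x).tolist()[0]}

# Run locally with: uvicorn app:app --host 0.0.0.0 --port 8000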

Posted 1 month ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh

About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role
Role Description:
We are seeking a Principal AI Platform Architect, Amgen's most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models (classical ML, deep learning and LLMs) securely and cost-effectively. Acting as a "player-coach," you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.

Roles & Responsibilities:
Define and evangelise a multi-year AI-platform vision and reference architecture that advances Amgen's digital-transformation, cloud-modernisation and product-delivery objectives.
Design and evolve foundational platform components (feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses) using cloud-agnostic, micro-service principles.
Establish modelling and algorithm-selection standards that span classical ML, tree-based ensembles, clustering, time-series, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques; advise product squads on choosing and operationalising the right algorithm for each use case.
Orchestrate the full delivery pipeline for AI solutions (pilot → regulated validation → production rollout → post-launch monitoring), defining stage gates, documentation and sign-off criteria that meet GxP/CSV and global privacy requirements.
Scale AI workloads globally by engineering autoscaling GPU/CPU clusters, distributed training, low-latency inference and cost-aware load balancing, maintaining <100 ms P95 latency while optimising spend.
Implement robust MLOps and release-management practices (CI/CD for models, blue-green and canary deployments, automated rollback) to ensure zero-downtime releases and auditable traceability.
Embed responsible-AI and security-by-design controls (data privacy, lineage tracking, bias monitoring, audit logging) through policy-as-code and automated guardrails.
Package reusable solution blueprints and APIs that enable product teams to consume AI capabilities consistently, cutting time-to-production by ≥50%.
Provide deep technical mentorship and architecture reviews to product squads, troubleshooting performance bottlenecks and guiding optimisation of cloud resources.
Develop TCO models and FinOps practices, negotiate enterprise contracts for cloud/AI infrastructure and deliver continuous cost-efficiency improvements.
Establish observability frameworks (metrics, distributed tracing, drift detection, SLA dashboards) to keep models performant, reliable and compliant at scale.
Track emerging technologies and regulations (serverless GPUs, confidential compute, the EU AI Act) and integrate innovations that maintain Amgen's leadership in enterprise AI.

Must-Have Skills:
5-7 years in AI/ML, data platforms or enterprise software.
Comprehensive command of machine-learning algorithms (regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures such as CNNs, RNNs and transformers, and modern LLM/RAG techniques), with the judgment to choose, tune and operationalise the right method for a given business problem.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills: able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.

Good-to-Have Skills:
Experience in the biotechnology or pharma industry is a big plus.
Published thought leadership or conference talks on enterprise GenAI adoption.
Master's degree in Computer Science and/or Data Science.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.

Education and Professional Certifications
Master's degree with 10-14+ years of experience in Computer Science, IT or a related field, OR Bachelor's degree with 12-17+ years of experience in Computer Science, IT or a related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly; organized and detail-oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
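To illustrate the canary-deployment idea referenced in the MLOps responsibilities above, here is a minimal, framework-free sketch of weighted traffic splitting between a stable and a candidate model; the 5% canary weight and the model stubs are hypothetical, not a prescribed rollout policy.

# Minimal sketch: weighted canary routing between two model versions.
# The 5% canary fraction and the model stubs are hypothetical choices.
import random

def stable_model(features):
    return {"version": "v1", "score": 0.42}  # stand-in for the production model

def canary_model(features):
    return {"version": "v2", "score": 0.57}  # stand-in for the candidate model

CANARY_FRACTION = 0.05  # route roughly 5% of traffic to the candidate

def route(features):
    model = canary_model if random.random() < CANARY_FRACTION else stable_model
    result = model(features)
    # In production you would log version + outcome to comparison dashboards
    # and trigger automated rollback if the canary's metrics degrade.
    return result

print(route({"feature_a": 1.0}))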

Posted 1 month ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role Name: Principal Data Scientist
Department Name: AI & Data Science
Role GCF: 6
Hiring Manager Name: Swaroop Suresh

About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role
Role Description:
We are seeking a Principal AI Platform Architect, Amgen's most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models (classical ML, deep learning and LLMs) securely and cost-effectively. Acting as a "player-coach," you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience.

Roles & Responsibilities:
Define and evangelise the multi-year AI-platform vision, architecture blueprints and reference implementations that align with Amgen's digital-transformation and cloud-modernization objectives.
Design and evolve foundational platform components (feature stores, model registry, experiment tracking, vector databases, real-time inference gateways and evaluation harnesses) using cloud-agnostic, micro-service principles.
Implement robust MLOps pipelines (CI/CD for models, automated testing, canary releases, rollback) and enforce reproducibility from data ingestion to model serving.
Embed responsible-AI and security-by-design controls (data privacy, lineage tracking, bias monitoring, audit logging) through policy-as-code and automated guardrails.
Serve as the ultimate technical advisor to product squads: codify best practices, review architecture/PRs, troubleshoot performance bottlenecks and guide optimisation of cloud resources.
Partner with Procurement and Finance to develop TCO models, negotiate enterprise contracts for cloud/AI infrastructure, and continuously optimise spend.
Drive platform adoption via self-service tools, documentation, SDKs and internal workshops; measure success through developer NPS, time-to-deploy and model uptime SLAs.
Establish observability frameworks (metrics, distributed tracing, drift detection) to ensure models remain performant, reliable and compliant in production.
Track emerging technologies (serverless GPUs, AI accelerators, confidential compute, policy frameworks like the EU AI Act) and proactively integrate innovations that keep Amgen at the forefront of enterprise AI.

Must-Have Skills:
5-7 years in AI/ML, data platforms or enterprise software, including 3+ years leading senior ICs or managers.
Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale.
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel).
Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines).
Strong business-case skills: able to model TCO vs. NPV and present trade-offs to executives.
Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives.

Good-to-Have Skills:
Experience in the biotechnology or pharma industry is a big plus.
Published thought leadership or conference talks on enterprise GenAI adoption.
Master's degree in Computer Science, Data Science or an MBA with an AI focus.
Familiarity with Agile methodologies and the Scaled Agile Framework (SAFe) for project delivery.

Education and Professional Certifications
Master's degree with 10-14+ years of experience in Computer Science, IT or a related field, OR Bachelor's degree with 12-17+ years of experience in Computer Science, IT or a related field.
Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus.

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Ability to learn quickly; organized and detail-oriented.
Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
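As an illustration of the observability-framework responsibility above (metrics that an SRE stack can scrape and alert on), here is a minimal sketch exposing inference-latency metrics with the prometheus_client library; the metric name, port, and simulated workload are hypothetical.

# Minimal sketch: expose model-serving latency metrics for an observability
# stack (e.g., Prometheus scraping). Metric name and port are hypothetical.
import random
import time

from prometheus_client import Histogram, start_http_server

INFERENCE_LATENCY = Histogram(
    "model_inference_latency_seconds",
    "Latency of model inference calls",
)

@INFERENCE_LATENCY.time()  # records each call's duration into the histogram
def predict(features):
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference work
    return {"score": 0.5}

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        predict({"x": 1.0})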

Posted 1 month ago

Apply

0 years

6 - 12 Lacs

Indore, Madhya Pradesh, India

On-site

About Us
Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance. As a sister company of Deqode, a 400+ person blockchain innovation powerhouse, we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.

What We Build
Alpha discovery via on-chain intelligence: developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
DeFi-native execution agents: automated systems that execute trades across decentralized platforms.
ML-augmented infrastructure: machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
High-throughput systems: resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains, tuned for high-frequency trading (HFT) and real-time response.
Data-driven MEV analysis and strategy: we analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically, powering strategies that interact deeply with the mechanics of block production and inclusion.

Evaluation Process
HR discussion: a brief conversation to understand your motivation and alignment with the role.
Initial technical interview: a quick round focused on fundamentals and problem-solving approach.
Take-home assignment: assesses research ability, learning agility, and structured thinking.
Assignment presentation: a deep dive into your solution, design choices, and technical reasoning.
Final interview: a concluding round to explore your background, interests, and team fit in depth.
Optional interview: in specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.

Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you'll work on ingesting and modeling on-chain behavior, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.

What You'll Work On
Build and maintain ETL pipelines for ingesting and processing blockchain data.
Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
Evaluate model performance, tune hyperparameters, and document experimental results.
Develop monitoring tools to track model accuracy, data drift, and system health.
Collaborate with infrastructure and execution teams to integrate ML components into production systems.
Design and maintain databases and storage systems to efficiently manage large-scale datasets.

Ideal Traits
Strong in data structures, algorithms, and core CS fundamentals.
Proficiency in any programming language.
Curiosity about how blockchain systems and crypto markets work under the hood.
Self-motivated, eager to experiment and learn in a dynamic environment.

Bonus Points For
Hands-on experience with pandas, NumPy, scikit-learn, or PyTorch.
Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
Participation in hackathons or open-source contributions.

What You'll Gain
Cutting-edge tech stack: you'll work on modern infrastructure and stay up to date with the latest trends in technology.
Idea-driven culture: we welcome and encourage fresh ideas; your input is valued, and you're empowered to make an impact from day one.
Ownership and autonomy: you'll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
Impact-focused: your work won't be buried under bureaucracy. You'll see it go live and make a difference in days, not quarters.

What We Value
Craftsmanship over shortcuts: we appreciate engineers who take the time to understand the problem deeply and build durable solutions, not just quick fixes.
Depth over haste: if you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
Invested mindset: we're looking for people who don't just punch tickets but care about the long-term success of the systems they build.
Curiosity with follow-through: we admire those who take the time to explore and validate new ideas, not just skim the surface.

Compensation
INR 6-12 LPA.
Performance bonuses: linked to contribution, delivery, and impact.

Skills: Blockchain, ETL, Artificial Intelligence (AI), Generative AI, Python, PyTorch, pandas, NumPy
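As a taste of the on-chain ETL work described above, here is a minimal, hedged sketch that pulls the latest block from an Ethereum JSON-RPC endpoint and flattens a few transaction fields into a pandas frame; the RPC URL is a hypothetical placeholder, and field availability can vary by node client.

# Minimal sketch: ingest one block from an Ethereum JSON-RPC node and
# flatten basic transaction fields for downstream ETL. URL is hypothetical.
import pandas as pd
import requests

RPC_URL = "https://example-eth-node.invalid"  # placeholder endpoint

def rpc(method, params):
    resp = requests.post(
        RPC_URL,
        json={"jsonrpc": "2.0", "method": method, "params": params, "id": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]

latest = rpc("eth_getBlockByNumber", ["latest", True])  # True = full tx objects
txs = pd.DataFrame(
    [
        {
            "hash": tx["hash"],
            "from": tx["from"],
            "to": tx.get("to"),                     # None for contract creation
            "value_wei": int(tx["value"], 16),      # quantities are hex-encoded
            "gas_price_wei": int(tx.get("gasPrice", "0x0"), 16),
        }
        for tx in latest["transactions"]
    ]
)
print(txs.head())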

Posted 1 month ago

Apply