4.0 years
2 - 6 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Work as an individual contributor in a data scientist role, providing mentorship, guidance, and technical expertise to the team
- Interact with business stakeholders to understand their AI/ML requirements and translate them into actionable projects
- Collaborate with cross-functional teams to define project goals, scope, and deliverables
- Utilize your expertise in Python, pandas, numpy, and SQL to develop AI/ML solutions
- Apply an in-depth understanding of and extensive experience with Transformers, RAG, Vector DB, and Large Language Models
- Apply frameworks such as scikit-learn, TensorFlow, and PyTorch to build and deploy models
- Conduct exploratory data analysis (EDA), statistical analysis, and feature engineering to derive insights
- Build, tune, and evaluate machine learning models for predictive and prescriptive analytics
- Perform drift analysis and ensure the responsible use of AI/ML by considering bias and fairness testing
- Oversee the full life cycle of AI/ML projects, including data preparation, model development, tuning, and deployment
- Ensure the scalability, reliability, and efficiency of AI/ML solutions in production environments
- Stay updated with the latest advancements in AI/ML techniques and tools, and identify opportunities to apply them to enhance existing solutions
- Document and communicate findings, methodologies, and insights to technical and non-technical stakeholders
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field
- 4+ years of experience in Data Science, AI/ML, or a similar role
- Experience in leading a team of data scientists
- Hands-on experience in delivering production-grade AI/ML projects
- Solid understanding of mathematical and statistical concepts
- Knowledge of the full life cycle of AI/ML projects, including EDA, model development, tuning, and drift analysis
- Understanding of bias/fairness testing and the responsible use of AI/ML
- Proficiency in frameworks such as scikit-learn, TensorFlow, and PyTorch
- Proven solid programming skills in Python, with experience in pandas, numpy, and SQL
- Proven excellent leadership, problem-solving, and analytical thinking skills
- Proven solid communication and collaboration skills to work effectively with business stakeholders and cross-functional teams

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
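The listing above calls for drift analysis alongside bias and fairness testing. As one hedged illustration (not the employer's actual tooling), here is a minimal Population Stability Index (PSI) drift check in Python using only numpy; the variable names and thresholds are illustrative assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a feature's training-time distribution with its current
    production distribution; larger values indicate stronger drift."""
    # Bin edges come from the reference (training) sample.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) / division by zero for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)    # reference distribution
recent_scores = rng.normal(0.3, 1.1, 10_000)   # shifted production sample
psi = population_stability_index(train_scores, recent_scores)
# Common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
print(f"PSI = {psi:.3f}")
```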
Posted 1 month ago
0 years
0 Lacs
India
Remote
About Turing: Turing is one of the world’s fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. Turing helps customers in two ways: Working with the world’s leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies. Role Overview: As an infrastructure engineer specializing in CDK for Terraform (CDKTF) using TypeScript, you will design, develop, and maintain cloud infrastructure using modern infrastructure-as-code practices. Leveraging your expertise in TypeScript and Terraform, you will create reusable constructs that define scalable and secure cloud resources. Your day-to-day responsibilities include authoring Terraform configurations via TypeScript constructs, managing deployment lifecycles with Terraform CLI commands, and ensuring code quality through testing with Jest. You will collaborate closely with DevOps and cloud teams to troubleshoot deployment issues, optimize infrastructure performance, and ensure compliance with best practices. This role demands strong programming skills, familiarity with Terraform core concepts such as state, modules, and providers, and a pragmatic approach to infrastructure automation that supports fast-paced, scalable cloud environments. What does day-to-day look like: Design, develop, and maintain infrastructure-as-code projects using CDK for Terraform (CDKTF) with TypeScript. Write reusable, modular TypeScript constructs that represent Terraform resources and modules. Manage Terraform state and lifecycle effectively within CDKTF projects. Use Terraform CLI commands such as terraform init, terraform validate, terraform plan, and terraform apply to deploy infrastructure changes. Write Jest-based unit tests to validate CDKTF constructs and configurations. Collaborate with DevOps and cloud engineering teams to deliver automated, reliable, and scalable infrastructure. Troubleshoot and resolve issues related to Terraform state, drift, and deployment failures. Maintain clear documentation of infrastructure code and deployment processes. Keep up-to-date with Terraform and CDKTF ecosystem improvements and best practices. Requirements: Strong proficiency with CDK for Terraform (CDKTF) using TypeScript. Solid knowledge of TypeScript fundamentals: interfaces, classes, modules, and typing. Hands-on experience writing Terraform configurations and resource blocks programmatically via TypeScript in CDKTF. Understanding of Terraform core concepts such as state management, modules, and providers. Ability to debug TypeScript CDKTF code, write unit tests using Jest, and ensure high code quality. Experience with Terraform CLI commands (terraform init, terraform validate, terraform plan, terraform apply). Familiarity with infrastructure-as-code best practices and automation workflows. Comfortable troubleshooting Terraform state issues and resource lifecycle management. Knowledge of cloud infrastructure provisioning and the Terraform ecosystem. Perks of Freelancing With Turing: Work in a fully remote environment. Opportunity to work on cutting-edge AI projects with leading LLM companies. Potential for contract extension based on performance and project needs. 
Offer Details:
- Commitments Required: at least 4 hours per day and minimum 20/30/40 hours per week, with 4 hours overlap with PST
- Employment type: Contractor position (no medical/paid leave)
- Duration of contract: 1 month; [expected start date is next week]
- Evaluation Process (approximately 90 mins): Two rounds of interviews (60 min technical + 30 min cultural & offer discussion)
Posted 1 month ago
5.0 years
0 Lacs
India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Key Responsibilities
- Maintain and evolve Terraform modules across core services.
- Enhance GitHub Actions and GitLab CI pipelines with policy-as-code integrations.
- Automate Kubernetes secret management and migrate from shared init containers to native methods.
- Review and deploy Helm charts for service releases. Own rollback reliability.
- Track and resolve environment drift. Automate consistency checks between environments.
- Drive incident response tooling (Datadog + PagerDuty) and participate in post-incident reviews.
- Assist in cost-optimization efforts through resource sizing reviews.
- Implement and monitor standardized SLA/SLO targets for key services.

Requirements
- Minimum of 5 years of hands-on experience in DevOps or Platform Engineering roles.
- Deep technical knowledge of Terraform, Terraform Cloud, and infrastructure module design.
- Production experience managing Kubernetes clusters (preferably on GKE).
- Demonstrable expertise in CI/CD automation (GitHub Actions, ArgoCD, Helm-based deployments).
- Proficient in securing cloud-native environments and integrating with secret management solutions such as Google Secret Manager or HashiCorp Vault.
- Hands-on experience in observability tooling, especially with Datadog.
- Strong grasp of GCP networking, service configurations, and container workload security.
- Proven ability to lead engineering initiatives, work cross-functionally, and manage infrastructure roadmaps.

Desirable Experience
- Background in implementing GitOps and automated infrastructure policy enforcement.
- Familiarity with service mesh, workload identity, and multi-cluster deployments.
- Prior experience establishing DevOps functions or maturing legacy environments.

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy: Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
Posted 1 month ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Overview
As a Test Automation Lead at Dailoqa, you’ll architect and implement robust testing frameworks for both software and AI/ML systems. You’ll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance.

Key Responsibilities
Test Automation Strategy & Framework Design
- Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions.
- Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy.
- Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization).
Continuous Testing & CI/CD Integration
- Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD.
- Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing).
AI/ML Model Validation
- Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow.
- Validate model drift and retraining pipelines to ensure consistent performance in production.
Quality Metrics & Reporting
- Define and track KPIs: test coverage (code, data, scenarios), defect leakage rate, automation ROI (time saved vs. maintenance effort), and model accuracy thresholds.
- Report risks and quality trends to stakeholders in sprint reviews.
- Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation).

Technical Requirements
Must-Have
- 7–10 years in test automation, with 2+ years validating AI/ML systems.
- Expertise in automation tools (Selenium, Playwright, Cypress, REST Assured, Locust/JMeter), CI/CD (Jenkins, GitHub Actions, GitLab), AI/ML testing (model validation, drift detection, GenAI output evaluation), and languages (Python, Java, or JavaScript).
- Certifications: ISTQB Advanced, CAST, or equivalent.
- Experience with MLOps tools: MLflow, Kubeflow, TFX.
- Familiarity with vector databases (Pinecone, Milvus) and RAG workflows.
- Strong programming/scripting experience in JavaScript, Python, Java, or similar.
- Experience with API testing, UI testing, and automated pipelines.
- Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation.
- Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias.
- Familiarity with MLOps pipelines and automated validation of model performance in production.
- Exposure to Agile/Scrum methodology and tools like Azure Boards.

Soft Skills
- Strong problem-solving skills for balancing speed and quality in fast-paced AI development.
- Ability to communicate technical risks to non-technical stakeholders.
- Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
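The listing above describes GenAI output validation inside a conventional test suite. A minimal pytest sketch of that idea follows; generate_answer is a hypothetical stand-in for whatever LLM/RAG endpoint would actually be under test, and the required/forbidden terms are illustrative.

```python
# Minimal pytest sketch for GenAI output validation.
import pytest

def generate_answer(prompt: str) -> str:
    # Placeholder: in a real suite this would call the model-serving endpoint.
    return "Paracetamol is commonly used to reduce fever and relieve mild pain."

@pytest.mark.parametrize(
    "prompt, required_terms, forbidden_terms",
    [
        (
            "What is paracetamol used for?",
            ["fever", "pain"],        # grounding terms that must appear
            ["guaranteed cure"],      # phrasing that would signal hallucination
        ),
    ],
)
def test_llm_answer_contains_grounded_terms(prompt, required_terms, forbidden_terms):
    answer = generate_answer(prompt).lower()
    for term in required_terms:
        assert term in answer, f"expected grounded term missing: {term}"
    for term in forbidden_terms:
        assert term not in answer, f"unexpected/hallucinated claim present: {term}"
```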
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
Position Title: MLOps Engineer Experience: 5+ Years Location: Remote Employment Type: Full-Time About the Role: We are looking for an experienced MLOps Engineer to lead the design, deployment, and maintenance of scalable and production-grade machine learning infrastructure. The ideal candidate will have a strong foundation in MLOps principles, expertise in GCP (Google Cloud Platform), and a proven track record in operationalizing ML models in cloud environments. Key Responsibilities: Design, build, and maintain scalable ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions. Develop and automate ML pipelines for training, validation, deployment, and monitoring using Kubeflow Pipelines, TFX, or Vertex AI Pipelines. Collaborate closely with Data Scientists to transition models from experimentation to production. Implement robust monitoring systems for model drift, performance degradation, and data quality issues. Manage containerized ML workloads using Docker and Kubernetes (GKE). Set up and manage CI/CD workflows for ML systems using Cloud Build, Jenkins, Bitbucket, or similar tools. Ensure model security, versioning, governance, and compliance throughout the ML lifecycle. Create and maintain documentation, reusable templates, and artifacts for reproducibility and audit readiness. Required Skills & Experience: Minimum 5 years of experience in MLOps, ML Engineering, or related roles. Strong programming skills in Python with experience in ML frameworks and libraries. Hands-on experience with GCP services including Vertex AI, BigQuery, GKE, and Dataflow. Solid understanding of machine learning concepts and algorithms such as XGBoost and classification models. Experience with container orchestration using Docker and Kubernetes. Proficiency in implementing CI/CD practices for ML workflows. Strong analytical, problem-solving, and communication skills.
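The MLOps listing above centers on Vertex AI and GKE. As a rough sketch only (project IDs, bucket names, artifact paths, and the serving image tag are assumptions to be replaced with real values), registering and deploying a model with the Vertex AI Python SDK looks roughly like this:

```python
# Hypothetical project/bucket/artifact values; adjust to your environment.
from google.cloud import aiplatform

aiplatform.init(
    project="my-gcp-project",             # hypothetical project ID
    location="us-central1",
    staging_bucket="gs://my-ml-staging",  # hypothetical bucket
)

# Register a trained model artifact in the Vertex AI Model Registry...
model = aiplatform.Model.upload(
    display_name="churn-xgboost",
    artifact_uri="gs://my-ml-artifacts/churn/v1/",  # directory with the saved model
    # Example prebuilt XGBoost serving image; confirm the exact tag for your version.
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-7:latest",
)

# ...and deploy it to a managed endpoint for online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.resource_name)
```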
Posted 1 month ago
0.0 - 1.0 years
0 Lacs
India
Remote
AI/ML Intern (Full Time - Paid Internship) Location: Remote Experience: 0-1 year Start Date: Immediate / As per availability Stipend: Competitive About SolvusAI: SolvusAI is a fast-growing AI consulting and solutioning firm helping global enterprises unlock the power of Generative AI and Machine Learning to solve real business problems. We work across industries to deliver high-impact AI strategies, advisory, and implementation solutions that create tangible value. At SolvusAI, we value curiosity, ownership, and adaptability. Whether it's through rapid prototyping, cloud-native architectures, or cutting-edge LLMs, we believe in moving fast and building smart. Join us, and be part of a collaborative, future-focused team that’s redefining what AI can do for the enterprise. Are you driven by curiosity and a passion for AI, Gen-AI, and data-driven innovation? Do you want to work at the forefront of AI development - building intelligent agents, optimizing scalable workflows, and applying cutting-edge models to real-world challenges? Key Responsibilities Design and implement scalable, maintainable, and robust AI/ML systems Build and fine-tune ML/DL models using core concepts like feature engineering, model evaluation, and architecture selection (e.g., CNNs, Transformers) to solve business problems Work on agentic workflows and intelligent orchestration of LLMs using tools like LangChain or Semantic Kernel Design and deploy RESTful APIs for AI services and intelligent pipelines Integrate and evaluate vector databases (e.g., FAISS, Chroma, Pinecone) with LLMs for RAG-based systems Validate and monitor AI models through rigorous testing (unit, integration, adversarial) ensuring generalization and fairness Identify and mitigate risks related to bias, drift, and hallucinations in generative models Deploy solutions to cloud environments (AWS, Azure, GCP) using CI/CD principles Collaborate closely with cross-functional teams to translate business needs into AI-first solutions Must-Have Skills & Qualifications Bachelor's degree in engineering/technology. Final year engineering students with relevant skillsets and bandwidth can also apply 0-1 year of industry experience in AI/ML development Strong programming skills in Python, with experience in FastAPI Good understanding of core ML/DL concepts: supervised/unsupervised learning, overfitting, regularization, CNNs, Transformers, etc. Experience with SQL and basic data engineering principles Familiarity with foundational Gen-AI concepts (e.g., embeddings, prompting, fine-tuning) and models (ChatGPT, LLaMA 2/3, Mistral, Claude, etc.) Exposure to frameworks like LangChain or LlamaIndex Understanding of agent-based workflows and AI task automation Analytical mindset and strong debugging/problem-solving skills Good communication and collaboration abilities in a remote work environment Preferred Skills Knowledge of RLHF, prompt engineering, few-shot learning Experience with data labelling, evaluation frameworks, and synthetic data generation Understanding of monitoring tools for model performance, latency, and observability Working knowledge of cloud deployments and containerization (Docker, basic Kubernetes is a plus) Why Join SolvusAI? Work at the cutting edge of applied AI and Gen-AI Collaborate with a high-energy, impact-driven team Flexible remote environment Ownership and visibility across projects from day one Opportunity to shape the future of intelligent automation How to Apply: Excited to kickstart your journey in AI and work on real-world problems? 
Send us your resume and a short note on why you're interested in this role to careers@solvusai.com after applying on LinkedIn.
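The SolvusAI listing above mentions integrating vector databases such as FAISS with LLMs for RAG-based systems. A minimal retrieval sketch follows; the embeddings are random stand-ins (a real pipeline would use an embedding model), and the dimensions are assumptions.

```python
# Minimal FAISS retrieval sketch for a RAG-style lookup.
import faiss
import numpy as np

dim = 384                        # typical sentence-embedding size (assumption)
rng = np.random.default_rng(42)
doc_vectors = rng.random((1000, dim)).astype("float32")  # pretend document embeddings
query_vector = rng.random((1, dim)).astype("float32")    # pretend query embedding

index = faiss.IndexFlatL2(dim)   # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vectors)

k = 5
distances, ids = index.search(query_vector, k)
print("top-k document ids:", ids[0])
# The retrieved documents would then be placed into the LLM prompt as context.
```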
Posted 1 month ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position: Environmental Data Scientist
Salary: up to ₹50,000 per month
Location: Ahmedabad [only preferring candidates from Gujarat]
Experience: 2+ Years

We are seeking a research-driven Environmental Data Scientist to lead the development of advanced algorithms that enhance the accuracy, reliability, and performance of air quality sensor data. This role goes beyond traditional data science — it focuses on solving real-world challenges in environmental sensing, such as sensor drift, cross-interference, and data anomalies.

Key Responsibilities:
- Design and implement algorithms to improve the accuracy, stability, and interpretability of air quality sensor data (e.g., calibration, anomaly detection, cross-interference mitigation, and signal correction)
- Conduct in-depth research on sensor behavior and environmental impact to inform algorithm development
- Collaborate with software and embedded systems teams to integrate these algorithms into cloud or edge-based systems
- Analyze large, complex environmental datasets using Python, R, or similar tools
- Continuously validate algorithm performance using lab and field data; iterate for improvement
- Develop tools and dashboards to visualize sensor behavior and algorithm impact
- Assist in environmental research projects with statistical analysis and data interpretation
- Document algorithm design, testing procedures, and research findings for internal use and knowledge sharing
- Support team members with data-driven insights and code-level contributions as needed
- Assist other team members with writing efficient code and overcoming programming challenges

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in one of the following fields: Environmental Engineering / Science, Chemical Engineering, Electronics / Instrumentation Engineering, Computer Science / Data Science, Physics / Atmospheric Science (with a data or sensing background)
- 1-2 years of hands-on experience working with sensor data or IoT-based environmental monitoring systems
- Strong knowledge of algorithm development, signal processing, and statistical analysis
- Proficiency in Python (pandas, NumPy, scikit-learn, etc.) or R, with experience handling real-world sensor datasets
- Ability to design and deploy models in a cloud or embedded environment
- Excellent problem-solving and communication skills
- Passion for environmental sustainability and clean-tech

Preferred Qualifications:
- Familiarity with time-series anomaly detection, sensor fusion, signal noise reduction techniques, or geospatial data processing
- Exposure to air quality sensor technologies, environmental sensor datasets, or dispersion modeling

For a quick response, please fill out this form: https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform?usp=sharing&ouid=106739769571157586077
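The role above emphasizes time-series anomaly detection on air-quality sensor streams. One simple, hedged illustration is a rolling z-score flag in pandas; the column names, window, and threshold are assumptions, and the data is synthetic.

```python
# Illustrative rolling z-score anomaly flag for an air-quality sensor series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
ts = pd.date_range("2025-01-01", periods=500, freq="min")
pm25 = rng.normal(35, 4, size=500)
pm25[300] += 60  # inject a spike to emulate a sensor glitch

df = pd.DataFrame({"timestamp": ts, "pm25": pm25})
window = 60  # one hour of minute-level readings
rolling_mean = df["pm25"].rolling(window, min_periods=window).mean()
rolling_std = df["pm25"].rolling(window, min_periods=window).std()
df["zscore"] = (df["pm25"] - rolling_mean) / rolling_std
df["anomaly"] = df["zscore"].abs() > 4  # flag readings far outside recent behaviour

print(df.loc[df["anomaly"], ["timestamp", "pm25", "zscore"]])
```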
Posted 1 month ago
7.0 years
0 Lacs
Gujarat, India
On-site
Job Summary XtraNet Technologies is seeking a highly experienced and results-oriented AI/ML Ops Professional to join our dynamic team in Gandhinagar. In this critical role, you will be instrumental in bridging the gap between our AI/ML development efforts and their seamless deployment and operationalization. You will leverage your deep expertise in developing, deploying, and managing machine learning models and algorithms to solve complex business challenges across diverse domains. As a sector expert, you will provide invaluable guidance on R&D initiatives and the strategic application of AI/ML technologies, including Natural Language Processing (NLP) and Generative AI, to enhance our service offerings and drive innovation for our clients. What You'll Do Lead and participate in the end-to-end lifecycle of machine learning models, from ideation and experimentation to deployment and monitoring in production environments. Design, develop, and implement scalable and efficient machine learning models and algorithms using relevant programming languages (e.g., Python), frameworks (e.g., TensorFlow, PyTorch, scikit-learn), and tools. Develop robust deployment pipelines using MLOps best practices and tools to ensure reliable and automated deployment of AI/ML models. Implement model versioning, rollback strategies, and continuous integration/continuous deployment (CI/CD) for AI/ML workflows. Establish and maintain robust monitoring systems to track the performance, health, and drift of deployed AI/ML models. Implement alerting mechanisms and proactive measures to identify and address potential issues in production AI/ML systems. Optimize the performance and scalability of deployed AI/ML models and infrastructure. Troubleshoot and resolve issues related to deployed AI/ML models and their underlying infrastructure. Serve as a subject matter expert in Artificial Intelligence and Machine Learning, providing guidance and insights to internal teams and clients on the application of AI/ML technologies. Conduct research and development (R&D) activities to explore and evaluate new AI/ML techniques, tools, and platforms, particularly in areas like Natural Language Processing (NLP) and Generative AI. Provide strategic guidance on the usage and implementation of NLP technologies (e.g., text classification, sentiment analysis, information extraction, language modeling) and Generative AI models for various business applications. Stay abreast of the latest advancements in AI/ML, MLOps, and related fields, and proactively share knowledge and best practices within the organization. Leverage your proven track record in solving complex business problems using AI/ML in diverse domains such as natural language processing, computer vision, or reinforcement learning Collaborate with business analysts and domain experts to understand business challenges and identify opportunities for AI/ML-driven solutions. Lead the exploration and prototyping of innovative AI/ML solutions to address client needs and enhance our service offerings. Work closely with data scientists, software engineers, infrastructure teams, and business stakeholders throughout the AI/ML lifecycle. Effectively communicate technical concepts and findings to both technical and non-technical audiences. Participate in technical design discussions and contribute to the overall AI/ML strategy and architecture. 
Document AI/ML workflows, deployment processes, and monitoring.

Educational Background
- MCA (Master of Computer Applications), OR BE / B.Tech with specialization in Computers / Electronics & Communication (or equivalent), OR M.Sc. in Computer Science / IT

Professional Experience
- Minimum of 7+ years of hands-on experience in developing, deploying, and managing machine learning models and algorithms in production environments.

Technical Skills
- Strong proficiency in programming languages relevant to AI/ML (e.g., Python).
- Extensive experience with popular AI/ML frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn, Transformers).
- Solid understanding of machine learning algorithms and techniques across various domains (e.g., supervised learning, unsupervised learning, deep learning, reinforcement learning).
- Proven experience in developing and deploying NLP models (e.g., using libraries like NLTK, SpaCy, Hugging Face Transformers) and understanding of NLP concepts (e.g., text preprocessing, embeddings, sequence modeling).
- Familiarity with Generative AI models (e.g., GANs, VAEs, Large Language Models) and their potential applications.
- Strong understanding of MLOps principles and best practices, including model deployment, monitoring, and lifecycle management. (ref:hirist.tech)
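Since the listing above names Hugging Face Transformers among its NLP requirements, a minimal sketch of that building block is shown below; this is illustrative only (the default pipeline model is downloaded on first run) and not the employer's actual stack.

```python
# A Hugging Face pipeline for sentiment analysis as a quick NLP illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # uses the library's default English sentiment model
results = classifier([
    "The new monitoring dashboard caught the model drift within minutes.",
    "Deployment failed again and nobody was alerted.",
])
for result in results:
    print(result)  # e.g. {'label': 'POSITIVE', 'score': 0.99...}
```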
Posted 1 month ago
35.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description F-Secure makes every digital moment more secure, for everyone. For over 35 years, we’ve led the cyber security industry, protecting tens of millions of people online together with our 200+ service provider partners. We value our Fellows' individuality, with an inclusive environment where diversity drives innovation and growth. What makes you unique is what we value – be yourself, that is (y)our greatest asset. Founded in Finland, F‑Secure has offices in Europe, North America and Asia Pacific. About The Role We are looking for skilled Machine Learning Engineers to join our Technology team in Bengaluru! At F-Secure, we're developing cutting-edge AI-powered cybersecurity defenses that protect millions of users globally. Our ML models operate in dynamic environments where threat actors continuously evolve their techniques. We're seeking a motivated individual to perform in-depth analysis of data and machine learning models, develop and implement models using both classical and modern approaches, and optimize models for performance and latency. This is a fantastic opportunity to enhance your skills in a real-world cybersecurity context with significant impact. This role will be located in Bengaluru, India. You can choose whether you work at our Bengaluru office, or in a hybrid mode from your home office. We hope you are able to join us for common gatherings at the Bengaluru office when needed. Key Responsibilities To perform in-depth analysis of data and machine learning models to identify insights and areas of improvement. Develop and implement models using both classical machine learning techniques and modern deep learning approaches. Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection. Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements. Optimize models for performance and latency, including the implementation of caching strategies where appropriate. Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions. What are we looking for? Prior experience from utilizing various statistical techniques to derive important insights and trends. Proven experience in machine learning model development and analysis using classical and neural networks based approaches. Strong understanding of LLM architecture, usage, and fine-tuning techniques. Solid understanding of statistics, data preprocessing, and feature engineering. Proficient in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.). Strong debugging and optimization skills for both training and inference pipelines. Familiarity with data formats and processing tools (Pandas, Spark, Dask). Experience working with transformer-based models (e.g., BERT, GPT) and Hugging Face ecosystem. Additional Nice-to-have's Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar). Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics). Familiarity with cloud platforms (Sagemaker, AWS, GCP, Azure) and containerization (Docker, Kubernetes). Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection. Exposure to distributed training and model parallelism techniques. Prior experience in AB testing ML models in production. What will you get from us? 
You will work together with experienced and enthusiastic colleagues, and within F-Secure you will find some of the best minds in the cyber security industry. We actively encourage our Fellows to grow and develop within F-Secure, and in your career here you can find yourself contributing to any number of our other products and teams. You decide what to make of this role, what your priorities are, and how you organize your work for the best benefit to us all. We offer interesting challenges and a competitive compensation model with wide range of benefits. You get a chance to develop yourself professionally in an international and highly motivated team serving our customers in providing world class security, privacy and uncensored access to information online. You get to work in a flexible, agile, and dynamic working environment that supports individual needs. Giving our people both support and the opportunity to be in charge of their own work is something that is in our DNA. We are in a unique phase in our 30-year history and with curiosity and excitement in the air we see no limits for building a strong and fruitful career with us! A security vetting will possibly be conducted for the selected candidate in accordance to our employment process.
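The F-Secure role above mentions optimizing inference for performance and latency, including caching strategies. A toy sketch of one such strategy (an in-process LRU cache in front of an expensive scoring call) is shown here; the scoring function and values are placeholders, not F-Secure's actual pipeline.

```python
# Toy illustration of response caching for a latency-sensitive inference path.
import time
from functools import lru_cache

@lru_cache(maxsize=10_000)
def score_url(url: str) -> float:
    # Pretend this is an expensive model inference (feature extraction + forward pass).
    time.sleep(0.05)
    return float(len(url) % 7) / 7.0  # placeholder "risk score"

start = time.perf_counter()
score_url("https://example.com/login")   # cold call hits the "model"
cold_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
score_url("https://example.com/login")   # repeated call is served from the cache
cached_ms = (time.perf_counter() - start) * 1000

print(f"cold: {cold_ms:.1f} ms, cached: {cached_ms:.3f} ms")
print(score_url.cache_info())
```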
Posted 1 month ago
0 years
6 - 12 Lacs
Indore, Madhya Pradesh, India
On-site
About Us Alfred Capital - Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance. As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution. What We Build Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics. DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms. ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection. High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion. Evaluation Process HR Discussion – A brief conversation to understand your motivation and alignment with the role. Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach. Take-Home Assignment – Assesses research ability, learning agility, and structured thinking. Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning. Final Interview – A concluding round to explore your background, interests, and team fit in depth. Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision. Job Description : Blockchain Data & ML Engineer As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction. What You’ll Work On Build and maintain ETL pipelines for ingesting and processing blockchain data. Assist in designing, training, and validating machine learning models for prediction and anomaly detection. Evaluate model performance, tune hyperparameters, and document experimental results. Develop monitoring tools to track model accuracy, data drift, and system health. Collaborate with infrastructure and execution teams to integrate ML components into production systems. Design and maintain databases and storage systems to efficiently manage large-scale datasets. Ideal Traits Strong in data structures, algorithms, and core CS fundamentals. Proficiency in any programming language Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals. Curiosity about how blockchain systems and crypto markets work under the hood. Self-motivated, eager to experiment and learn in a dynamic environment. Bonus Points For Hands-on experience with pandas, numpy, scikit-learn, or PyTorch. Side projects involving automated ML workflows, ETL pipelines, or crypto protocols. Participation in hackathons or open-source contributions. What You’ll Gain Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology. Idea-Driven Culture: We welcome and encourage fresh ideas. 
Your input is valued, and you're empowered to make an impact from day one. Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions. Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters What We Value Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes. Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here. Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build. Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface. Compensation INR 6 - 12 LPA Performance Bonuses: Linked to contribution, delivery, and impact. Skills:- Python, Machine Learning (ML), pandas, NumPy, Blockchain and ETL
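The Alfred Capital listing above involves anomaly detection over ingested market and on-chain data. A hedged sketch of one standard approach (scikit-learn's IsolationForest on synthetic "transaction" features) follows; the feature names and contamination rate are made up for illustration.

```python
# Anomaly detection over synthetic transaction-like features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: [trade size, gas price, minutes since previous trade]
normal = rng.normal(loc=[1.0, 30.0, 12.0], scale=[0.2, 5.0, 3.0], size=(5000, 3))
outliers = rng.normal(loc=[25.0, 300.0, 0.1], scale=[5.0, 50.0, 0.05], size=(20, 3))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(X)          # -1 = anomaly, 1 = normal
print("flagged rows:", int((labels == -1).sum()))
```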
Posted 1 month ago
7.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Summary: We are looking for the Analytics Lead who will be responsible for overseeing the development and integration of AI/ML models across in-house and offshore teams. This role ensures model quality and scalability, fosters effective communication and collaboration, guides model evaluation and performance optimization, mentors team members, and promotes innovation and research in AI/ML techniques. How You’ll Make an Impact (key responsibilities of role) Leading the Development and Integration of AI/ML Models Oversee the AI/ML development process, ensuring both in-house and offshore teams are aligned on objectives, timelines, and technical specifications. Provide guidance to offshore teams on model development, validation, and integration practices, ensuring consistency with internal standards. Ensure that offshore team members have access to the resources and support they need to develop AI/ML models that meet internal expectations. Ensure Model Quality and Scalability Across Teams Validate the models and algorithms developed by both in-house and offshore teams, ensuring they meet business needs and are scalable for deployment. Perform in-depth reviews of model architecture, training data, and feature engineering, providing actionable feedback to offshore teams to improve model performance. Foster Effective Communication and Collaboration Across Teams Establish and maintain clear communication channels between in-house and offshore teams, ensuring everyone is aligned with the project goals and timelines. Regularly sync with both teams to track progress, address any challenges, and ensure that the teams are working towards shared milestones. Leverage collaboration tools and documentation to keep both teams on the same page regarding technical specifications, progress, and deadlines. Guide Model Evaluation and Performance Optimization Develop comprehensive model evaluation strategies, ensuring both in-house and offshore teams follow consistent testing procedures and performance metrics (e.g., precision, recall, F1-score). Provide guidance on identifying and addressing issues like bias, model drift, or overfitting in models developed by offshore teams. Ensure that models are deployed with adequate monitoring in place, and lead efforts to retrain and improve models based on new data or performance feedback. Mentoring and Team Development Mentor both in-house and offshore team members, helping them grow their AI/ML expertise and encouraging continuous professional development. Organize knowledge-sharing sessions between in-house and offshore teams to ensure the effective transfer of knowledge and best practices. Innovation and Research Promote a culture of innovation and research, encouraging both in-house and offshore teams to stay up to date with emerging AI/ML techniques and tools. Lead research efforts to explore new AI/ML methods that can provide a competitive edge, fostering collaboration across teams to apply new ideas in production models. What You Bring (required Qualifications And Skills) Bachelor’s or master’s degree in computer science, Data Science, Engineering, or a related field. 7+ years extensive experience in AI/ML model development and integration, with a focus on team leadership and collaboration. Strong understanding of model evaluation metrics and performance optimization techniques. Excellent communication and interpersonal skills, with the ability to mentor and guide team members effectively. 
Experience managing both in-house and offshore teams in a technology-driven environment. Familiarity with collaboration tools and best practices for remote team management. A passion for innovation and staying current with advancements in AI/ML technologies.
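The Analytics Lead role above calls for consistent evaluation metrics (precision, recall, F1-score) across in-house and offshore teams. A minimal sketch with scikit-learn is shown below; the labels are toy data, not project results.

```python
# Consistent classification metrics with scikit-learn (toy labels).
from sklearn.metrics import classification_report, precision_recall_fscore_support

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
print(classification_report(y_true, y_pred, digits=3))
```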
Posted 1 month ago
4.0 years
25 - 30 Lacs
India
On-site
Hiring: Senior Data Scientist | Machine Learning Engineer (MLE)
Location: Mohali / Pune
Experience: 4+ Years
Salary: ₹25 LPA – ₹30 LPA
Apply at: info@fitb.in

Required Skill Sets
- Expertise in ML/DL, model lifecycle management, and MLOps tools (MLflow, Kubeflow)
- Proficiency in Python, TensorFlow, PyTorch, scikit-learn, and Hugging Face
- Strong background in NLP, including fine-tuning transformer models
- Hands-on experience with AWS, GCP, or Azure, and deployment tools like SageMaker and Vertex AI
- Knowledge of Docker, Kubernetes, and CI/CD pipelines
- Familiarity with distributed computing (Spark, Ray) and vector databases (FAISS, Milvus)
- Experience with model optimization (quantization, pruning), hyperparameter tuning, and drift detection

Roles & Responsibilities
- Build and maintain end-to-end ML pipelines from data ingestion to deployment
- Develop, fine-tune, and scale ML models for real-world applications
- Evaluate models using metrics like F1-score, AUC-ROC, BLEU, etc.
- Automate retraining, monitoring, and drift detection processes
- Collaborate with cross-functional teams for seamless ML integration
- Mentor junior team members and enforce best practices

Perks & Benefits
- Food allowance provided
- Cab facility available
- Night shift allowance (NCA)
- Graduates only may apply
- 5-day working week

If you meet the above criteria and are passionate about building impactful ML systems, send your resume to info@fitb.in

Job Types: Full-time, Permanent, Fresher
Pay: ₹2,500,000.00 - ₹3,000,000.00 per year
Benefits: Flexible schedule, food provided, health insurance, life insurance, paid sick time, paid time off, Provident Fund
Schedule: Evening shift, fixed shift, Monday to Friday, night shift, rotational shift, US shift
Supplemental Pay: Performance bonus, shift allowance, yearly bonus
Work Location: In person
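Since the listing above names MLflow among its MLOps tools, here is a minimal, hedged sketch of experiment tracking with it; the experiment name, parameters, and metric values are illustrative only.

```python
# Experiment tracking with MLflow (illustrative values).
import mlflow

mlflow.set_experiment("churn-model")           # creates the experiment if missing
with mlflow.start_run(run_name="xgb-baseline"):
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("learning_rate", 0.1)
    mlflow.log_metric("f1", 0.87)
    mlflow.log_metric("auc_roc", 0.93)
    # A real pipeline would also log the trained model artifact,
    # e.g. via mlflow.sklearn.log_model(...) or mlflow.xgboost.log_model(...).
```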
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
📍 Location: India / Remote (WFH)
🕒 Schedule: Mon–Fri, 10 AM–6 PM
📅 Type: Full-Time
💼 Experience: 3–5 Years

About the Role
We’re hiring a Senior AI/ML Engineer to lead the design, development, and deployment of scalable AI solutions—including Generative AI, LLMs, and sustainability-driven ML systems. This is a high-impact, hands-on role where you’ll collaborate cross-functionally, own the end-to-end ML lifecycle, and build products that deliver real-world business value across industries like FinTech, HealthTech, and more.

Company Description
SynerPeak HR Consultants specializes in connecting exceptional talent with forward-thinking companies. They help organizations scale by delivering top-tier professionals who align with their culture, values, and goals. With expertise in various industries, SynerPeak offers end-to-end recruitment solutions, executive search, talent mapping, employer branding consultation, and contract-based hiring.

What You'll Bring
- Strong command of Python, SQL, and ML frameworks: scikit-learn, TensorFlow, PyTorch
- Experience with ML pipelines (Airflow, MLflow, Kubeflow) and cloud platforms (AWS/GCP/Azure)
- Skilled in model deployment, MLOps, containerization (Docker, Kubernetes), CI/CD
- Familiar with deep learning, LLMs, RAG, and Agentic AI frameworks (LangChain, OpenAI API)

Key Responsibilities (ML & GenAI Engineering)
- Build and deploy models for NLP, recommendation, classification, etc.
- Fine-tune LLMs and implement RAG-based multi-agent GenAI systems
- Manage the full ML lifecycle: deployment, monitoring, optimisation, drift detection
- Handle data preprocessing, feature engineering, and model evaluation
- Work with cross-functional teams and contribute to product decisions
- Convert business goals into AI-driven technical solutions
- Stay ahead in AI/ML research and rapidly apply new techniques

You’re a Great Fit If You...
- Take initiative and own your decisions
- Know when not to use GenAI
- Can clearly explain trade-offs to engineers & business stakeholders

Why Join Us?
- Flexibility & time off: flexible hours; paid leave (sick, personal, bereavement); fully paid insurance; 6 months maternity & 3 months paternity leave
- Extras: team events (virtual + in-person); equity options & competitive salary

What You'll Work On
- 50%: Building robust, production-grade AI systems
- 25%: Prototyping and researching innovative ML/GenAI ideas
- 25%: Collaborating with stakeholders to solve real-world business challenges

Ready to lead real-world AI innovation? Apply now and be part of something transformative. Please send your updated resume and cover letter to hr@synerpeak.com
Posted 1 month ago
0.0 years
0 Lacs
Noida
On-site
Senior Executive – EXL/SE/1407664
Digital Solutions, Noida
Posted On: 30 Jun 2025 | End Date: 14 Aug 2025
Required Experience: 0 - 2 Years

Basic Section
Number Of Positions: 2 | Band: A2 | Band Name: Senior Executive | Cost Code: D012603
Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New
Max CTC: 150000.0000 - 1000000.0000 | Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: EXL Digital | Sub Group: Digital Solutions | Organization: Digital Solutions | LOB: Digital Solutions | SBU: SmartAudit.AI & Data Loops
Country: India | City: Noida | Center: Noida - Centre 59
Skills: SQL, Stakeholder Management
Minimum Qualification: B.TECH/B.E | Certification: No data available

Job Description
Job Title: AI Operations Engineer – GenAI & Traditional Models
Location: NCR/Bangalore
Experience Level: 1–2 years (Entry to Intermediate)

About the Role: We are seeking a hands-on, curious, and impact-driven AI Operations Engineer to join our Run & Maintain team for supporting production-grade Generative AI and Traditional ML models. This role is perfect for early-career Data Scientists or ML Engineers eager to step into the dynamic world of GenAI while deepening their skills in AI model lifecycle management, monitoring, and improvement. You’ll be on the frontlines—keeping AI systems healthy, reliable, and continually improving.

Key Responsibilities:
- Monitor and maintain health of production AI models (GenAI and traditional ML).
- Troubleshoot data/model/infra issues across model pipelines, APIs, embeddings, and prompt systems.
- Collaborate with Engineering and Data Science teams to deploy new versions and manage rollback if needed.
- Implement automated logging, alerting, and retraining pipelines.
- Handle prompt performance drift, input/output anomalies, latency issues, and quality regressions.
- Analyze feedback and real-world performance to propose model or prompt enhancements.
- Conduct A/B testing, manage baseline versioning, and monitor model outputs over time.
- Document runbooks, RCA reports, model lineage, and operational dashboards.
- Support GenAI adoption by assisting in evaluations, hallucination detection, and prompt optimization.

Must-have Skills:
- 1+ year of experience in Data Science, ML, or MLOps.
- Good grasp of the ML lifecycle, model versioning, and basic monitoring principles.
- Strong Python skills with exposure to ML frameworks (scikit-learn, pandas, etc.).
- Basic familiarity with LLMs and interest in GenAI (OpenAI, Claude, etc.).
- Exposure to AWS/GCP/Azure or any MLOps tooling.
- Comfortable reading logs, parsing metrics, and triaging issues across the stack.
- Eagerness to work in a production support environment with proactive ownership.

Nice-to-Have Skills:
- Prompt engineering knowledge (system prompts, temperature, tokens, etc.).
- Hands-on with vector stores, embedding models, or LangChain/LlamaIndex.
- Experience with tools like MLflow, Prometheus, Grafana, Datadog, or equivalent.
- Basic understanding of retrieval pipelines or RAG architectures.
- Familiarity with CI/CD and containerization (Docker, GitHub Actions).

Ideal Candidate Profile:
- A strong starter who wants to go beyond notebooks and see AI in action.
- Obsessed with observability, explainability, and zero-downtime AI.
- Wants to build a foundation in GenAI while leveraging their traditional ML skills.
- A great communicator who enjoys cross-functional collaboration.

Workflow Type: Digital Solution Center
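In the spirit of the monitoring and alerting responsibilities above, here is a toy check that computes p95 latency and error rate over recent requests and prints an alert when either breaches a threshold; the request data is synthetic and the SLO values are hypothetical.

```python
# Toy production-monitoring check: p95 latency and error rate vs. SLOs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
requests = pd.DataFrame({
    "latency_ms": rng.lognormal(mean=5.0, sigma=0.4, size=2000),
    "status": rng.choice([200, 200, 200, 200, 500], size=2000),
})

p95_latency = requests["latency_ms"].quantile(0.95)
error_rate = (requests["status"] >= 500).mean()

LATENCY_SLO_MS = 400    # hypothetical service-level objective
ERROR_RATE_SLO = 0.02

if p95_latency > LATENCY_SLO_MS or error_rate > ERROR_RATE_SLO:
    print(f"ALERT: p95={p95_latency:.0f} ms, error_rate={error_rate:.1%}")
else:
    print(f"OK: p95={p95_latency:.0f} ms, error_rate={error_rate:.1%}")
```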
Posted 1 month ago
0 years
0 Lacs
India
On-site
*Who you are* You’re the person whose fingertips know the difference between spinning up a GPU cluster and spinning down a stale inference node. You love the “infrastructure behind the magic” of LLMs. You've built CI/CD pipelines that automatically version models, log inference metrics, and alert on drift. You’ve containerized GenAI services in Docker, deployed them on Kubernetes clusters (AKS or EKS), and implemented terraform or ARM to manage infra-as-code. You monitor cloud costs like a hawk, optimize GPU workloads, and sometimes sacrifice cost for performance—but never vice versa. You’re fluent in Python and Bash, can script tests for REST endpoints, and build automated feedback loops for model retraining. You’re comfortable working in Azure — OpenAI, Azure ML, Azure DevOps Pipelines—but are cloud-agnostic enough to cover AWS or GCP if needed. You read MLOps/LLMOps blog posts or arXiv summaries on the weekend and implement improvements on Monday. You think of yourself as a self-driven engineer: no playbooks, no spoon-feeding—just solid automation, reliability, and a hunger to scale GenAI from prototype to production. --- *What you will actually do* You’ll architect and build deployment platforms for internal LLM services: start from containerizing models and building CI/CD pipelines for inference microservices. You’ll write IaC (Terraform or ARM) to spin up clusters, endpoints, GPUs, storage, and logging infrastructure. You’ll integrate Azure OpenAI and Azure ML endpoints, pushing models via pipelines, versioning them, and enabling automatic retraining triggers. You’ll build monitoring and observability around latency, cost, error rates, drift, and prompt health metrics. You’ll optimize deployments—autoscaling, use of spot/gpu nodes, invalidation policies—to balance cost and performance. You’ll set up automated QA pipelines that validate model outputs (e.g. semantic similarity, hallucination detection) before merging. You’ll collaborate with ML, backend, and frontend teams to package components into release-ready backend services. You’ll manage alerts, rollbacks on failure, and ensure 99% uptime. You'll create reusable tooling (CI templates, deployment scripts, infra modules) to make future projects plug-and-play. --- *Skills and knowledge* Strong scripting skills in Python and Bash for automation and pipelines Fluent in Docker, Kubernetes (especially AKS), containerizing LLM workloads Infrastructure-as-code expertise: Terraform (Azure provider) or ARM templates Experience with Azure DevOps or GitHub Actions for CI/CD of models and services Knowledge of Azure OpenAI, Azure ML, or equivalent cloud LLM endpoints Familiar with setting up monitoring: Azure Monitor, Prometheus/Grafana—track latency, errors, drift, costs Cost-optimization tactics: spot nodes, autoscaling, GPU utilization tracking Basic LLM understanding: inference latency/cost, deployment patterns, model versioning Ability to build lightweight QA checks or integrate with QA pipelines Cloud-agnostic awareness—experience with AWS or GCP backup systems Comfortable establishing production-grade Ops pipelines, automating deployments end-to-end Self-starter mentality: no playbooks required, ability to pick up new tools and drive infrastructure independently
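The role above includes scripting tests for REST inference endpoints. A minimal smoke test is sketched below; the URL, payload shape, response field, and latency threshold are all hypothetical placeholders for whatever the real service exposes.

```python
# Minimal smoke test for an inference endpoint (placeholder URL and fields).
import time
import requests

ENDPOINT = "https://ml.example.internal/v1/score"   # placeholder URL
payload = {"text": "health check"}

start = time.perf_counter()
response = requests.post(ENDPOINT, json=payload, timeout=5)
latency_ms = (time.perf_counter() - start) * 1000

assert response.status_code == 200, f"unexpected status: {response.status_code}"
assert latency_ms < 500, f"latency too high: {latency_ms:.0f} ms"
body = response.json()
assert "score" in body, "response missing expected 'score' field"
print(f"OK: {latency_ms:.0f} ms, score={body['score']}")
```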
Posted 1 month ago
9.0 - 13.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Consulting – AI Enabled Automation – GenAI/Agentic – Manager
We are looking to hire people with strong AI Enabled Automation skills who are interested in applying AI in the process automation space – Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB, Python.

Responsibilities:
- Development and implementation of AI enabled automation solutions, ensuring alignment with business objectives.
- Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of AI enabled automation applications.
- Ensure seamless integration of optimized solutions into the overall product or system.
- Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.), and ensure alignment with business goals and user needs.
- Educate the team on best practices and keep updated on the latest tech advancements to bring innovative solutions to the project.

Technical Skills Requirements:
- 9 to 13 years of relevant professional experience.
- Proficiency in Python and frameworks like PyTorch, TensorFlow, Hugging Face Transformers.
- Strong foundation in ML algorithms, feature engineering, and model evaluation (must).
- Strong foundation in Deep Learning, Neural Networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT), and NLP (must).
- Experience in GenAI technologies — LLMs (GPT, Claude, LLaMA), prompting, fine-tuning.
- Experience with LangChain, LlamaIndex, LangGraph, AutoGen, or CrewAI (agentic frameworks).
- Knowledge of retrieval augmented generation (RAG).
- Knowledge of Knowledge Graph RAG.
- Experience with multi-agent orchestration, memory, and tool integrations.
- Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) (good to have).
- Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
- Good understanding of data pipelines, APIs, and distributed systems.
- Build observability into AI systems — latency, drift, performance metrics.
- Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
- Strong analytical, problem solving and critical thinking skills.
- Ability to work under tight timelines for multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can.
You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: AI Operations Engineer – GenAI & Traditional Models
Location: NCR/Bangalore
Experience Level: 1–2 years (Entry to Intermediate)

About The Role:
We are seeking a hands-on, curious and impact-driven AI Operations Engineer to join our Run & Maintain team supporting production-grade generative AI and traditional ML models. This role is ideal for early-career data scientists or ML engineers eager to step into the dynamic world of GenAI while deepening their skills in AI model lifecycle management, monitoring and improvement. You will be on the front lines, keeping AI systems healthy, reliable and continually improving.

Key Responsibilities:
Monitor and maintain the health of production AI models (GenAI and traditional ML).
Troubleshoot data, model and infrastructure issues across model pipelines, APIs, embeddings and prompt systems.
Collaborate with Engineering and Data Science teams to deploy new versions and manage rollbacks when needed.
Implement automated logging, alerting and retraining pipelines.
Handle prompt performance drift, input/output anomalies, latency issues and quality regressions.
Analyze feedback and real-world performance to propose model or prompt enhancements.
Conduct A/B testing, manage baseline versioning and monitor model outputs over time.
Document runbooks, RCA reports, model lineage and operational dashboards.
Support GenAI adoption by assisting with evaluations, hallucination detection and prompt optimization.

Must-have Skills:
1+ year of experience in Data Science, ML or MLOps.
Good grasp of the ML lifecycle, model versioning and basic monitoring principles.
Strong Python skills with exposure to ML frameworks (scikit-learn, pandas, etc.).
Basic familiarity with LLMs and interest in GenAI (OpenAI, Claude, etc.).
Exposure to AWS/GCP/Azure or any MLOps tooling.
Comfortable reading logs, parsing metrics and triaging issues across the stack.
Eagerness to work in a production support environment with proactive ownership.

Nice-to-Have Skills:
Prompt engineering knowledge (system prompts, temperature, tokens, etc.).
Hands-on experience with vector stores, embedding models, or LangChain/LlamaIndex.
Experience with tools such as MLflow, Prometheus, Grafana, Datadog or equivalents.
Basic understanding of retrieval pipelines or RAG architectures.
Familiarity with CI/CD and containerization (Docker, GitHub Actions).

Ideal Candidate Profile:
A strong starter who wants to go beyond notebooks and see AI in action.
Obsessed with observability, explainability and zero-downtime AI.
Wants to build a foundation in GenAI while leveraging traditional ML skills.
A great communicator who enjoys cross-functional collaboration.
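The responsibilities above revolve around drift, logging and alerting for production models. For illustration only, here is a minimal, stack-agnostic sketch of one common drift check, the Population Stability Index; the bin count and alert threshold are assumed rule-of-thumb values, not this team's actual configuration.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a production feature distribution against its training baseline.

    Common rule of thumb: PSI below ~0.1 is stable, 0.1-0.25 is moderate drift,
    above 0.25 is significant drift. These thresholds are conventions, not a standard.
    """
    # Bin edges come from the baseline so both samples are bucketed identically.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Clip to avoid division by zero and log(0) for empty buckets.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 10_000)    # training-time feature values
    production = rng.normal(0.4, 1.2, 10_000)  # shifted production values
    psi = population_stability_index(baseline, production)
    if psi > 0.25:  # illustrative alert threshold
        print(f"ALERT: drift detected, PSI={psi:.3f}")
```

A check like this would normally run on a schedule against logged production inputs, with the result pushed to the team's alerting channel.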
Posted 1 month ago
9.0 - 13.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB and Python.

Responsibilities:
Lead the development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across industry verticals, demonstrating the potential of AI-enabled automation applications.
Ensure seamless integration of optimized solutions into the overall product or system.
Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure alignment with business goals and user needs.
Educate the team on best practices and stay current with the latest technology advancements to bring innovative solutions to the project.

Technical Skills Requirements:
9 to 13 years of relevant professional experience.
Proficiency in Python and frameworks such as PyTorch, TensorFlow and Hugging Face Transformers.
Strong foundation in ML algorithms, feature engineering and model evaluation (must-have).
Strong foundation in deep learning: neural networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT) and NLP (must-have).
Experience with GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting and fine-tuning.
Experience with agentic frameworks such as LangChain, LlamaIndex, LangGraph, AutoGen or CrewAI.
Knowledge of retrieval-augmented generation (RAG), including knowledge-graph RAG.
Experience with multi-agent orchestration, memory and tool integrations.
Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) is good to have.
Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
Good understanding of data pipelines, APIs and distributed systems.
Ability to build observability into AI systems: latency, drift and performance metrics.
Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
Strong analytical, problem-solving and critical thinking skills.
Ability to work under tight timelines across multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
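The agentic frameworks named in this listing (LangChain, LangGraph, AutoGen, CrewAI) all implement some variant of a tool-using loop: a model picks a tool, the tool runs, and the result is folded back into memory. To avoid guessing at any library's API, the sketch below shows that loop in plain Python; choose_action() is a hypothetical stand-in for an LLM-driven policy.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    """Minimal agent: a registry of tools plus a short-term memory of steps."""
    tools: Dict[str, Callable[[str], str]]
    memory: List[str] = field(default_factory=list)

    def choose_action(self, task: str) -> str:
        # Hypothetical policy; a real agent would ask an LLM which tool to call.
        return "search" if "find" in task.lower() else "calculator"

    def run(self, task: str) -> str:
        tool_name = self.choose_action(task)
        result = self.tools[tool_name](task)
        self.memory.append(f"{tool_name} -> {result}")  # tool output becomes context
        return result

def search_tool(query: str) -> str:
    return f"top document for '{query}'"   # placeholder retrieval result

def calculator_tool(expr: str) -> str:
    return "42"                            # placeholder calculation result

agent = Agent(tools={"search": search_tool, "calculator": calculator_tool})
print(agent.run("Find the latest automation runbook"))
print(agent.memory)
```

Multi-agent orchestration, as mentioned in the requirements, is essentially several such loops passing messages to one another under a coordinator.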
Posted 1 month ago
9.0 - 13.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting – AI Enabled Automation – GenAI/Agentic – Manager

We are looking to hire people with strong AI-enabled automation skills who are interested in applying AI in the process automation space: Azure, AI, ML, Deep Learning, NLP, GenAI, Large Language Models (LLMs), RAG, Vector DB, Graph DB and Python.

Responsibilities:
Lead the development and implementation of AI-enabled automation solutions, ensuring alignment with business objectives.
Design and deploy Proofs of Concept (POCs) and Points of View (POVs) across industry verticals, demonstrating the potential of AI-enabled automation applications.
Ensure seamless integration of optimized solutions into the overall product or system.
Collaborate with cross-functional teams to understand requirements, integrate solutions into cloud environments (Azure, GCP, AWS, etc.) and ensure alignment with business goals and user needs.
Educate the team on best practices and stay current with the latest technology advancements to bring innovative solutions to the project.

Technical Skills Requirements:
9 to 13 years of relevant professional experience.
Proficiency in Python and frameworks such as PyTorch, TensorFlow and Hugging Face Transformers.
Strong foundation in ML algorithms, feature engineering and model evaluation (must-have).
Strong foundation in deep learning: neural networks, RNNs, CNNs, LSTMs, Transformers (BERT, GPT) and NLP (must-have).
Experience with GenAI technologies: LLMs (GPT, Claude, LLaMA), prompting and fine-tuning.
Experience with agentic frameworks such as LangChain, LlamaIndex, LangGraph, AutoGen or CrewAI.
Knowledge of retrieval-augmented generation (RAG), including knowledge-graph RAG.
Experience with multi-agent orchestration, memory and tool integrations.
Experience implementing MLOps practices and tools (CI/CD for ML, containerization, orchestration, model versioning and reproducibility) is good to have.
Experience with cloud platforms (AWS, Azure, GCP) for scalable ML model deployment.
Good understanding of data pipelines, APIs and distributed systems.
Ability to build observability into AI systems: latency, drift and performance metrics.
Strong written and verbal communication, presentation, client service and technical writing skills in English for both technical and business audiences.
Strong analytical, problem-solving and critical thinking skills.
Ability to work under tight timelines across multiple project deliveries.

What we offer:
At EY GDS, we support you in achieving your unique potential both personally and professionally. We give you stretching and rewarding experiences that keep you motivated, working in an atmosphere of integrity and teaming with some of the world's most successful companies. And while we encourage you to take personal responsibility for your career, we support you in your professional development in every way we can. You enjoy the flexibility to devote time to what matters to you, in your business and personal lives. At EY you can be who you are and express your point of view, energy and enthusiasm, wherever you are in the world. It's how you make a difference.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
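The knowledge-graph RAG requirement in this listing refers to grounding generation in facts retrieved by traversing an entity-relation graph rather than a vector index alone. The toy in-memory triple store below is purely illustrative; a production system would more likely sit on a dedicated graph database.

```python
from collections import defaultdict

# A tiny in-memory triple store: (subject, relation, object). Entirely made-up facts.
TRIPLES = [
    ("Invoice", "processed_by", "AutomationPipeline"),
    ("AutomationPipeline", "runs_on", "Azure"),
    ("Claim", "reviewed_by", "OpsTeam"),
]

graph = defaultdict(list)
for subj, rel, obj in TRIPLES:
    graph[subj].append((rel, obj))

def graph_context(entity: str, depth: int = 2):
    """Collect facts reachable from an entity to ground an LLM prompt."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in graph.get(node, []):
                facts.append(f"{node} {rel} {obj}")
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

# The retrieved facts would be appended to the prompt, just as in plain RAG.
print(graph_context("Invoice"))
```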
Posted 1 month ago
10.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Position Overview
The Director of Infrastructure, Cloud, and ML Operations is a critical role within Invent Health, responsible for architecting, scaling, securing, and managing the company’s technology platforms. This position blends technical expertise, strategic vision, and people leadership to ensure that all infrastructure and machine learning operations are robust, scalable, and aligned with the company’s growth ambitions.

Key Responsibilities

Infrastructure Strategy & Architecture
· Develop and execute a forward-looking infrastructure strategy across cloud and network environments, ensuring seamless alignment with business objectives and scalability requirements.
· Design and maintain modern, scalable, and secure architectures supporting multi-tenancy, high availability, and disaster recovery.
· Architect, implement, and operate cloud platforms (IaaS, PaaS, SaaS) for high performance, scalability, and security.

Operational Excellence & Reliability
· Ensure the operational reliability, uptime, and service availability of the SaaS platform, including rapid incident response and robust observability practices.
· Implement monitoring and incident management best practices, utilizing KPIs to drive continuous improvement.

DevOps & Automation, including Cloud Ops
· Advance DevOps practices by driving automation throughout CI/CD pipelines and adopting Infrastructure as Code (IaC) methodologies to accelerate deployments and enhance engineering efficiency.
· Implement standardized processes for service provisioning, deployment, and maintenance, blending DevOps and SRE principles.
· Automate cloud provisioning and management using IaC and DevOps methodologies to streamline deployments and reduce manual intervention.

Customer Support & Incident Management
· Diagnose and resolve customer issues and lead incident management processes.
· Conduct root-cause analyses and oversee the remediation of critical incidents impacting service availability.

Security & Compliance
· Enforce security best practices and ensure compliance with relevant industry standards such as SOC 2, HIPAA, and HITRUST.
· Implement and oversee security controls and compliance policies within cloud environments.
· Lead disaster recovery planning and business continuity, and develop governance frameworks.

Cross-functional Collaboration
· Work closely with product, engineering, and executive teams to inform architectural decisions and support key company initiatives.
· Manage vendor relationships and infrastructure budgets, and effectively communicate platform health and project outcomes to leadership.

Team Leadership & Development
· Build, mentor, and manage a high-performing engineering team, fostering a culture of innovation, accountability, and operational excellence.

Specialized Operations Area: Machine Learning Operations (ML Ops)
· Design and maintain ML infrastructure supporting the entire machine learning lifecycle: data ingestion, model training, validation, deployment, and monitoring.
· Automate ML workflows with CI/CD pipelines tailored to machine learning models for reproducibility and reliability.
· Monitor deployed models for performance, drift, and data quality, implementing alerting and retraining as required.
· Ensure security, compliance, and governance of ML data, models, and endpoints in production.
· Collaborate with data science and engineering teams to ensure smooth handoffs and operational support.
· Optimize resource allocation for ML workloads, balancing cost, performance, and scalability in cloud-based environments.

Qualifications
· 10+ years of experience in infrastructure and cloud operations, preferably including 4+ years in machine learning operations.
· Proficiency in cloud platforms (AWS preferred), DevOps tools, and Infrastructure as Code.
· Strong knowledge of modern security and compliance standards for SaaS and ML operations.
· Demonstrated success in team leadership and cross-functional collaboration.
· Clear communication skills and the ability to translate technical concepts into business outcomes.
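The ML Ops responsibilities above call for monitoring deployed models for latency, drift and data quality. As one generic illustration (not Invent Health's actual setup), the prometheus_client Python library can expose such signals for scraping; the metric names, port and simulated workload below are assumptions.

```python
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

# Placeholder metric names; real deployments would follow their own naming conventions.
INFERENCE_LATENCY = Histogram("model_inference_latency_seconds", "Time spent per prediction")
PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
FEATURE_DRIFT = Gauge("model_feature_drift_score", "Latest drift score for a monitored feature")

def predict(features):
    with INFERENCE_LATENCY.time():              # records the call duration in the histogram
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference
        PREDICTIONS.inc()
        return sum(features)

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(5)])
        FEATURE_DRIFT.set(random.uniform(0.0, 0.3))  # would come from a scheduled drift job
        time.sleep(1)
```

Alert rules and dashboards (for example in Grafana) would then be built on top of these exported series.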
Posted 1 month ago
5.0 years
3 - 8 Lacs
Mohali
On-site
Job Description

Core Responsibilities:
Project Leadership: Own the end-to-end lifecycle of AI/ML projects, from scoping and prototyping to deployment and monitoring.
Team Management: Lead, mentor, and grow a team of ML engineers, data scientists, and researchers.
Model Development: Design, train, and evaluate machine learning and deep learning models for predictive analytics, NLP, computer vision, recommendation systems, and more.
MLOps: Oversee model deployment, versioning, monitoring, and continuous training in production environments.
Cross-Functional Collaboration: Work closely with data engineers, software developers, product managers, and business stakeholders.
Research & Innovation: Evaluate cutting-edge AI/ML research and integrate suitable technologies into company products and platforms.
Compliance & Ethics: Ensure responsible AI practices, addressing fairness, bias, interpretability, and compliance requirements.

Technical Skills:
Programming: Expert in Python (NumPy, pandas, scikit-learn); familiarity with C++, R, or Java as needed.
Frameworks: TensorFlow, PyTorch, Keras, Hugging Face Transformers.
Data Tools: SQL, Spark, Kafka, Airflow.
Cloud & DevOps: AWS/GCP/Azure, Docker, Kubernetes, MLflow, SageMaker, Vertex AI.
Techniques: Supervised/unsupervised learning, deep learning, NLP, computer vision, reinforcement learning.
MLOps: CI/CD for ML, monitoring (Prometheus, Grafana), feature stores, model drift detection.

Typical Background:
5+ years of experience in AI/ML development.
2+ years in a leadership or team lead role.

Job Type: Full-time
Benefits: Provident Fund
Location Type: In-person
Schedule: Day shift, Monday to Friday, morning shift
Work Location: In person
Speak with the employer: +91 8146237069
Application Deadline: 05/07/2025
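The stack listed above includes MLflow for MLOps. As a rough, self-contained sketch of experiment tracking with MLflow's standard Python API, the run below logs parameters, a metric and a model artifact on synthetic data; the experiment name and hyperparameters are illustrative choices only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data so the example is self-contained.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("churn-poc")  # illustrative experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)                                          # hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")                           # versioned artifact
```

Logged runs can then be compared in the MLflow UI, and the stored model artifact is what a CI/CD pipeline would pick up for deployment and later drift monitoring.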
Posted 1 month ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Engineer/Manager
Location: Pune
Experience: 6+ years
Notice period: Immediate to 30 days

Key Responsibilities:
Lead the development of machine learning PoCs and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization.
Collaborate with sales engineering teams to understand client needs and present ML solutions during pre-sales calls and technical workshops.
Build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML, and manage training, tuning, evaluation, and model packaging.
Apply supervised, unsupervised, and semi-supervised techniques such as XGBoost, CatBoost, k-Means, PCA, time-series models, and more.
Work with data engineering teams to define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools.
Package and document ML assets so they can be scaled or transitioned into delivery teams post-demo.
Stay current with best practices in ML explainability, model performance monitoring, and MLOps.
Participate in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes.

Qualifications:
8+ years of experience developing and deploying classical machine learning models in production or PoC environments.
Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, and LightGBM.
Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks.
Solid understanding of feature engineering, model tuning, cross-validation, and error analysis.
Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques.
Comfortable presenting models and insights to technical and non-technical stakeholders during pre-sales engagements.
Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection.

Interested candidates should apply or share resumes at kanika.garg@austere.co.in.
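To make the modelling responsibilities concrete, a typical PoC loop with the tools this listing names (XGBoost plus scikit-learn cross-validation) looks roughly like the sketch below; the synthetic imbalanced dataset and hyperparameters are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

# Synthetic tabular data standing in for a churn / risk-scoring dataset.
X, y = make_classification(n_samples=5_000, n_features=30, n_informative=10,
                           weights=[0.85, 0.15], random_state=7)

model = XGBClassifier(
    n_estimators=300,
    max_depth=5,
    learning_rate=0.05,
    subsample=0.8,
    eval_metric="auc",   # ROC-AUC suits the imbalanced target
)

# Stratified folds preserve the class ratio in each split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=7)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In a pre-sales PoC the same structure would be pointed at client data, with the evaluated model then packaged (for example via MLflow or SageMaker) for handover to delivery teams.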
Posted 1 month ago
0.6 - 2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Project Manager – IDfy
We don’t need experience; we need relentless execution.

Role Overview:
Support project delivery by coordinating teams, tracking progress, and clearing roadblocks. Learn on the job. Deliver on time. Own your part like a pro.

Key Responsibilities:
Assist in planning and executing projects under senior guidance.
Communicate effectively with cross-functional teams to keep projects moving.
Monitor timelines and raise flags early when things drift off course.
Manage project documentation and action items with discipline.
Participate in meetings, track decisions, and drive follow-ups.
Learn project management tools and frameworks on the job.
Adapt quickly and maintain urgency in a fast-paced environment.

Qualifications:
Bachelor’s degree in any field.
0.6–2 years of experience; willingness to learn is non-negotiable.
Strong organizational and communication skills.
Proactive, detail-oriented, and accountable.
Comfortable working under pressure and managing multiple priorities.
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Bureau is an all-in-one platform for identity decisioning, fraud prevention, and compliance requirements. Trusted for enhancing security and compliance, Bureau simplifies identity management for businesses. This is a place where we celebrate homegrown leaders and have an open-door policy where your voice matters, your ideas flourish, and your potential knows no bounds. We are driven to put our best foot forward every day with confidence, growth, customer obsession, and speed as our core values here at Bureau. Think of us as a launching pad for your growth. Come join us and help restore trust in online interactions!

Bureau.id’s Alternate Data Pod is where raw signal meets real intelligence. From mule fraud detection and risk scoring models to synthetic ID prevention, this pod is pioneering India-first and global-ready APIs. We’re looking for a PM who thinks monetisation, breathes data, and can ruthlessly prioritize what ships next. This isn’t a vanilla PM role: it is part data detective, part solution engineer, and part market translator.

Location: Bangalore, India

How will your day look at Bureau?

Product Operations & Enablement (40%)
Own API product health and dashboards (uptime, errors, feedback loops, logging).
Track and resolve API or model drift issues with DS/infra teams.
Manage and support PoCs, client onboarding, and internal stakeholders (Customer Success, Sales Engineering).
Document playbooks, FAQs, and API changelogs for partners and customers.

Use Case Discovery & Model Expansion (40%)
Hunt for new product use cases (e.g., UPI fraud, insurance synthetic IDs, affluence signals, data diversity and enrichment).
Collaborate with Data Scientists to translate client questions into product specs.
Run data evaluations, model comparisons, or internal benchmarks.
Build scalable labeling or taxonomy systems (SA/CA/UPI types, fraud tags, etc.).
Identify and onboard new alt-data sources (e.g., collections, telco, app behavior).

GTM & Strategic Delivery (20%)
Support go-to-market plans with content, positioning, and demos.
Collaborate with GTM on partner pitches (banks, NBFCs, marketplaces, insurers, wallets).
Track pricing, usage metrics, and customer feedback to evolve the product roadmap.

What does it take to be in this role?
3–6 years of Product Management or Solutioning experience in SaaS API products (preferably fintech, credit, or fraud) or in data products and analytics platforms.
Comfort with data: can query, visualize, and interpret results.
Experience working with Data Scientists, backend engineers, and GTM teams.
A bias to execute, test hypotheses, and unblock decisions fast.
Excellent documentation and articulation skills (you explain complex models simply).

Why should you choose us?
Your growth is our responsibility. We emphasise learning and development over material perks and are happier to nourish your mind. If there’s a book, course, or program that enhances your work at Bureau, feel free to pursue it; we’ll take care of the financial aspect.
We believe in flat structures. While we do have designations and reporting managers, our structure fosters a lot more freedom. You can collaborate with anyone, explore job rotations, transition between different projects, and express your opinions openly to whomever you choose.
Homegrown leaders. Our nurturing environment and specialized programs, like ElevateEngg, have led to success stories where even interns grow into impactful leadership roles over time.

FAQs:

What is our hiring process like?
We start with a friendly chat to get to know each other and align goals. Then we’ll have two or three discussions where we’ll dive into real-world examples to explore your skills. Finally, we’ll make sure you’re a great fit with our culture and values.

How can I improve my chances of getting hired?
Get to know Bureau’s mission and what we’re all about. Understand the role, and think about how your past work connects with it. Keep your resume simple, clear, and to the point (two pages or less) to highlight your skills and experience.

What is Bureau’s approach to diversity and inclusion?
We believe in a diverse and inclusive culture where everyone’s voice matters. We focus on diverse referrals, inclusive hiring, and offer special leaves to support our team. Our goal is for everyone to feel valued and empowered to grow with us.

What learning and growth opportunities can I expect at Bureau?
At Bureau, we’re all about growth. You’ll have access to learning resources, mentorship, and exciting projects that help you level up in your career. We’re committed to helping you grow and encourage continuous learning along the way.
Posted 1 month ago
9.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description
This is a remote position.

We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities:
Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
Integrate with metadata and lineage tools such as Unity Catalog and OpenMetadata.
Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
Work closely with QA teams to integrate test automation and ensure data quality.
Collaborate with cross-functional teams, including data scientists and business stakeholders, to align solutions with AI/ML use cases.
Document architectures, pipelines, and workflows for internal stakeholders.

Requirements

Essential Skills (Job):
Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and dimensional modeling.
Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Essential Skills (Personal):
Excellent communication and interpersonal skills, with the ability to engage with teams.
Strong problem-solving, decision-making, and conflict-resolution abilities.
Proven ability to work independently and lead cross-functional teams.
Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
Strong work ethic and trustworthiness.
Highly collaborative and team-oriented, with a commitment to excellence.

Preferred Skills (Job):
Proficiency in SQL and at least one programming language (e.g., Python, Scala).
Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
Deep understanding of AI and generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
Experience with data modeling, data structures, and database design.
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Preferred Skills (Personal):
Demonstrates proactive thinking.
Strong interpersonal skills, sound business acumen, and mentoring ability.
Ability to work under stringent deadlines and demanding client conditions.
Ability to work under pressure and meet multiple daily deadlines for client deliverables with a mature approach.

Other Relevant Information:
Bachelor’s degree in Engineering with a specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field.
9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
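As a rough illustration of the bronze-to-silver layering and schema validation this listing describes (not LeewayHertz's actual pipeline), the PySpark sketch below reads raw files with an enforced schema, quarantines rows that fail basic checks, and writes the cleaned layer as Parquet; all paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DateType, DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Declare the expected schema instead of inferring it, so unexpected columns or
# type changes surface immediately rather than silently propagating downstream.
schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("order_date", DateType(), nullable=True),
    StructField("amount", DoubleType(), nullable=True),
])

bronze = (spark.read
          .option("header", True)
          .schema(schema)
          .csv("s3://example-bucket/bronze/orders/"))   # hypothetical landing path

# Basic quality rules: keep valid rows in the silver layer, route the rest to quarantine.
valid = bronze.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
rejected = bronze.subtract(valid)

valid.write.mode("overwrite").parquet("s3://example-bucket/silver/orders/")
rejected.write.mode("overwrite").parquet("s3://example-bucket/quarantine/orders/")
```

In the architecture the listing describes, the same validated output would then feed DBT or Coalesce transformations into the gold layer, with lineage captured in a catalog such as Unity Catalog or OpenMetadata.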
Posted 1 month ago