3.0 years
2 - 13 Lacs
India
On-site
Role Overview
We are seeking a skilled and self-motivated AI/ML Engineer to join our growing team. You will be responsible for designing, developing, training, and deploying machine learning models and AI systems to solve practical, real-life problems.

Key Responsibilities
- Design and implement machine learning models for real-world business applications.
- Analyze and preprocess large datasets from various structured and unstructured sources.
- Train, validate, and fine-tune models using classical ML and/or deep learning methods.
- Collaborate with product and engineering teams to integrate models into production systems.
- Build end-to-end ML pipelines (data ingestion → model training → deployment).
- Monitor and improve model performance over time with live feedback data.
- Document model architecture, performance metrics, and deployment processes.

Required Skills and Experience
- 3–5 years of hands-on experience in AI/ML engineering.
- Strong knowledge of machine learning algorithms (classification, regression, clustering, etc.).
- Experience with deep learning frameworks (TensorFlow, PyTorch, Keras).
- Proficient in Python, with experience in libraries such as scikit-learn, pandas, and NumPy.
- Experience with NLP, computer vision, or time-series models is a plus.
- Understanding of MLOps practices and tools (MLflow, DVC, Docker, etc.).
- Exposure to deploying ML models via REST APIs or cloud services (AWS/GCP/Azure).
- Familiarity with data versioning, model monitoring, and retraining workflows.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- Published work on GitHub or contributions to open-source AI/ML projects.
- Certifications in AI/ML, cloud computing, or data engineering.
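As an illustration of the end-to-end pipeline responsibility above (data ingestion → model training → deployment), here is a toy, dependency-free sketch; the inline data, the closed-form 1-D regression, and all names are invented for the example and stand in for a real dataset, model, and serving layer:

```python
from statistics import mean

def train_pipeline(xs, ys):
    """Toy end-to-end flow: ingest -> preprocess -> train -> return a predictor."""
    # "Ingest"/preprocess: compute the feature and target means
    mx, my = mean(xs), mean(ys)
    # Train: closed-form 1-D least-squares fit
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    # "Deploy": hand back a callable that an API layer could wrap
    return lambda x: slope * x + intercept

predict = train_pipeline([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(5))  # 10.0
```

In a real pipeline each stage (ingestion, preprocessing, training, serving) would be a separate, monitored component rather than one function.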
Contact Us
Email: careers@crestclimbers.com
Phone: +91 94453 30496
Website: www.crestclimbers.com
Office: Kodambakkam, Chennai
Job Types: Full-time, Permanent
Schedule: Day shift
Work Location: In person
Pay: ₹298,197.62 - ₹1,398,461.03 per year
Expected Start Date: 21/07/2025
Posted 2 weeks ago
6.0 years
0 Lacs
India
Remote
Location: Remote / Hybrid
Experience: 2–6 years (or strong project/internship experience)
Employment Type: Full-Time
Department: AI & Software Systems

Key Responsibilities
- Design and maintain end-to-end MLOps pipelines, from data ingestion to model deployment and monitoring.
- Containerize ML models and services using Docker for scalable deployment.
- Develop and deploy APIs using FastAPI to serve real-time inference for object detection, segmentation, and mapping tasks.
- Automate workflows using CI/CD tools like GitHub Actions or Jenkins.
- Manage cloud infrastructure on AWS: EC2, S3, Lambda, SageMaker, CloudWatch, etc.
- Collaborate with AI and GIS teams to integrate ML outputs into mapping dashboards.
- Implement model versioning using DVC/Git, and maintain structured experiment tracking using MLflow or Weights & Biases.
- Ensure secure, scalable, and cost-efficient model hosting and API access.

Required Skills
- Programming: Python (must), Bash/shell scripting
- ML frameworks: PyTorch, TensorFlow, OpenCV
- MLOps tools: MLflow, DVC, GitHub Actions, Docker (must), Kubernetes (preferred)
- Cloud platforms: AWS (EC2, S3, SageMaker, IAM, Lambda)
- API development: FastAPI (must), Flask (optional)
- Data handling: NumPy, Pandas, GDAL, Rasterio
- Monitoring: Prometheus, Grafana, AWS CloudWatch

Preferred Experience
- Hands-on work with AI/ML models for image segmentation and object detection (YOLOv8, U-Net, Mask R-CNN).
- Experience with geospatial datasets (satellite imagery, drone footage, LiDAR).
- Familiarity with PostGIS, QGIS, or spatial database management.
- Exposure to DevOps principles and container orchestration (Kubernetes/EKS).

Soft Skills
- Problem-solving mindset with a system design approach.
- Clear communication across AI, software, and domain teams.
- Ownership of the full AI deployment lifecycle.

Education
- Bachelor's or Master's in Computer Science, Data Science, AI, or equivalent.
- Certifications in AWS, MLOps, or Docker/Kubernetes (bonus).
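As an illustration of the CI/CD automation described above, a minimal GitHub Actions workflow for validating and containerizing a model service might look like the following sketch (the file paths, requirements file, test directory, and image name are assumptions for the example, not part of the posting):

```yaml
# .github/workflows/model-ci.yml  (illustrative)
name: model-ci
on:
  push:
    branches: [main]
jobs:
  build-and-validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      # Unit tests plus an inference smoke test before anything ships
      - run: pytest tests/
      # Build the Docker image the inference API runs in
      - run: docker build -t inference-api:${{ github.sha }} .
```

A production pipeline would typically add steps to push the image to a registry and trigger deployment.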
Posted 2 weeks ago
8.0 - 11.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities

Key Responsibilities
- Design, develop, and optimize machine learning and deep learning models using Python and libraries such as TensorFlow, PyTorch, and scikit-learn.
- Work with large language models (e.g., GPT, BERT, T5) on NLP tasks such as semantic search, summarization, chatbots, conversational agents, and document intelligence.
- Lead the development of scalable AI solutions, including data preprocessing, embedding generation, vector search, and prompt orchestration.
- Build and manage vector databases and metadata stores to support high-performance semantic retrieval and contextual memory.
- Implement caching, queuing, and background processing systems to ensure performance and reliability at scale.
- Conduct independent R&D to implement cutting-edge AI methodologies, evaluate open-source innovations, and prototype experimental solutions.
- Apply predictive analytics and statistical techniques to mine actionable insights from structured and unstructured data.
- Build and maintain robust data pipelines and infrastructure for end-to-end ML model training, testing, and deployment.
- Collaborate with cross-functional teams to integrate AI solutions into business processes.
- Contribute to the MLOps lifecycle, including model versioning, CI/CD, performance monitoring, retraining strategies, and deployment automation.
- Stay current with the latest developments in AI/ML by reading academic papers and experimenting with novel tools and frameworks.

Required Skills & Qualifications
- Proficient in Python, with hands-on experience in key ML libraries: TensorFlow, PyTorch, scikit-learn, and Hugging Face Transformers.
- Strong understanding of machine learning fundamentals, deep learning architectures (CNNs, RNNs, transformers), and statistical modeling.
- Practical experience working with and fine-tuning LLMs and foundation models.
- Deep understanding of vector search, embeddings, and semantic retrieval techniques.
- Expertise in predictive modeling, including regression, classification, time series, clustering, and anomaly detection.
- Comfortable working with large-scale datasets using Pandas, NumPy, SciPy, etc.
- Experience with cloud platforms (AWS, GCP, or Azure) for training and deployment is a plus.

Preferred Qualifications
- Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related technical discipline.
- Experience with MLOps tools and workflows (e.g., Docker, Kubernetes, MLflow, SageMaker, Vertex AI).
- Ability to build and expose APIs for models using FastAPI, Flask, or similar frameworks.
- Familiarity with data visualization (Matplotlib, Seaborn) and dashboarding (Plotly) tools or equivalents.
- Working knowledge of version control, experiment tracking, and team collaboration.

Experience: 8–11 years

Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): TensorFlow, NLP, PyTorch, Large Language Models (LLM)

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
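At its core, the semantic retrieval workflow named in the responsibilities above (embedding generation plus vector search) reduces to ranking stored embeddings by similarity to a query embedding. A dependency-free sketch, where toy 2-D vectors stand in for real model embeddings and the function names are ours:

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity: dot product over the product of the norms
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def top_k(query, doc_embeddings, k=2):
    # Rank every stored embedding against the query; return the k best indices
    scores = [(cosine(query, d), i) for i, d in enumerate(doc_embeddings)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

docs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
print(top_k([1.0, 0.0], docs))  # indices of the two most similar documents
```

A vector database performs the same ranking with approximate-nearest-neighbor indexes so it scales far beyond a linear scan.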
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
CSQ326R201

Mission
At Databricks, we are on a mission to empower our customers to solve the world's toughest data problems by utilising the Data Intelligence Platform. As a Scale Solution Engineer, you will play a critical role in advising customers on their onboarding journey. You will work directly with customers to help them onboard and deploy Databricks in their production environments.

The Impact You Will Have
- Ensure new customers have an excellent experience by providing them with technical assistance early in their journey.
- Become an expert on the Databricks Platform and guide customers in making the best technical decisions to achieve their goals.
- Work with multiple tactical customers to track and report their progress.

What We Look For
- 2+ years of industry experience.
- Early-career technical professional, ideally in data-driven or cloud-based roles.
- Knowledge of at least one public cloud platform (AWS, Azure, or GCP) is required.
- Knowledge of a programming language: Python, Scala, or SQL.
- Knowledge of the end-to-end data analytics workflow.
- Hands-on professional or academic experience in one or more of the following:
  - Data engineering technologies (e.g., ETL, dbt, Spark, Airflow)
  - Data warehousing technologies (e.g., SQL, stored procedures, Redshift, Snowflake)
- Excellent time management and presentation skills.
- Bonus: knowledge of data science and machine learning (e.g., building and deploying ML models).

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 2 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-926

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform, so our customers can use deep data insights to improve their businesses. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines.

Databricks Mosaic AI offers a unique data-centric approach to building enterprise-quality machine learning and generative AI solutions, enabling organizations to securely and cost-effectively own and host ML and generative AI models, augmented or trained with their enterprise data. And we're only getting started in Bengaluru, India, where we are currently setting up 10 new teams from scratch!

As a Senior Software Engineer at Databricks India, you can work across:
- Backend
- DDS (Distributed Data Systems)
- Full Stack

The Impact You'll Have
Our backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:
- Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.
- Delivering reliable, high-performance services and client libraries for storing and accessing huge amounts of data on cloud storage backends, e.g., AWS S3 and Azure Blob Storage.
- Building reliable, scalable services (e.g., with Scala and Kubernetes) and data pipelines (e.g., with Apache Spark™ and Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that let customers easily view and control platform usage.
Our DDS team spans:
- Apache Spark™
- Data Plane Storage
- Delta Lake
- Delta Pipelines
- Performance Engineering

As a Full Stack software engineer, you will work closely with your team and product management to deliver that delight through great user experience.

What We Look For
- BS (or higher) in Computer Science or a related field.
- 7+ years of production-level experience in one of Python, Java, Scala, C++, or a similar language.
- Experience developing large-scale distributed systems from scratch.
- Experience working on a SaaS platform or with service-oriented architectures.
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
GAQ225R106

We are seeking an experienced and detail-oriented Accounting Manager to lead our accounting operations in India. This role will oversee financial reporting, compliance, and process optimization while building a strong accounting team to support our international operations. The ideal candidate will have a deep understanding of Indian accounting regulations, US GAAP, and corporate compliance requirements. The role reports to the Director of International Accounting.

The Impact You Will Have
- Oversee all monthly and quarterly accounting processes for the Indian subsidiary, ensuring accuracy and timeliness.
- Manage compliance requirements, including statutory filings and tax regulations, while maintaining strong vendor relationships.
- Build and lead an accounting team in India to support international financial operations and ensure seamless coordination with global finance teams.
- Ensure strict adherence to company group accounting policies and the correct application of US GAAP.
- Support external audit requirements by providing accurate financial data and ensuring compliance in assigned areas of responsibility.
- Conduct financial statement analysis, identifying key fluctuations and providing meaningful insights to assist management in decision-making.
- Implement best practices for financial efficiency, process automation, and internal controls.
- Collaborate with cross-functional teams to enhance financial reporting, analysis, and business performance insights.
- Oversee and participate in quarterly and annual audits, working closely with external auditors to ensure compliance and accuracy.
- Drive ad hoc financial projects as needed to support the company's growth and strategic initiatives.

What We Look For
- Bachelor's or Master's degree in Accounting, Finance, or a related field.
- Professional qualification (CA, ACCA, ICAEW, or similar).
- 10+ years of overall experience, including 8+ years of operational accounting experience.
- 3+ years of experience managing a team.
- Operational accounting experience in a growing SaaS technology business.
- Excellent organizational and time management skills.
- Strong knowledge of and experience with tools such as NetSuite, FloQast, and Coupa.
- Team player with excellent communication skills and a desire for innovation.
- Ability to build relationships across organizations.
- Strong knowledge of Indian accounting standards, tax laws, and compliance regulations.
- Experience with US GAAP.
- Detail-oriented, with a focus on accuracy, analytics, and a stellar customer service approach.
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our Company
Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment.

Ignite the Future of Language with AI at Teradata!

What You'll Do: Shape the Way the World Understands Data
Are you ready to be at the forefront of a revolution? At Teradata, we're not just managing data; we're unlocking its hidden potential through the power of artificial intelligence and machine learning. As a key member of our innovative AI/ML team, you'll be architecting, building, and deploying cutting-edge software solutions that transform language within the Teradata Vantage platform – a cornerstone of our strategic vision and a powerhouse in the analytics world.
- Dive deep into the performance DNA of AI/ML applications. You'll be the detective, identifying and crushing bottlenecks to ensure our solutions not only scale massively but also deliver lightning-fast results. Your mission? To champion quality at every stage, tackling the unique and exhilarating challenges presented by AI/ML in the cloud.
- Become an integral part of a brilliant, tightly knit team where collaboration isn't just a buzzword – it's how we create world-class, enterprise-grade software that pushes boundaries. You'll be a knowledge champion, diving into the intricacies of our domain, crafting compelling documentation, and sharing your expertise to inspire other teams.
- Your proficiency in Python, Java, Go, and C++, along with Angular for frontend development, will be instrumental in delivering high-impact, full-stack software that performs seamlessly, ensures long-term durability, optimizes costs, and upholds the highest standards of security.
- Unleash your inner API artisan! We're looking for someone with a genuine passion for crafting incredibly simple yet powerfully functional APIs that will be the backbone of our intelligent systems.
- Ready to paint the web with pixel-perfect magic? We're looking for a full-stack developer who lives for clean code but dreams in UI; if crafting seamless, stunning frontends is your jam, this is your playground!
- Step into an agile, dynamic environment that feels like a startup but with the backing of an industry leader. You'll thrive on rapidly evolving business needs, directly impacting our trajectory and delivering quality solutions with speed and precision. Get ready to explore uncharted territories, creatively solve complex puzzles, and directly contribute to groundbreaking advancements.

Who You'll Work With: Join Forces with the Best
Imagine collaborating daily with some of the brightest minds in the company – individuals who champion diversity, equity, and inclusion as fundamental to our success. You'll be part of a cohesive force, laser-focused on delivering high-quality, critical, and highly visible AI/ML functionality within the Teradata Vantage platform. Your insights will directly shape the future of our intelligent data solutions. You'll report directly to the inspiring Sr. Manager, Software Engineering, who will champion your growth and empower your contributions.

What Makes You a Qualified Candidate: Skills That Deliver Impact
- 5+ years of industry experience in software development and in operating software systems that handle massive scale.
- Mastery of Java with the Spring Framework and Angular, amplified by expertise in AI/ML, Kubernetes, microservices architecture, and DevOps methodologies. Bonus points for proficiency in Go, Python, FastAPI, or other object-oriented languages; the more versatile your tech stack, the stronger your impact.
- Strong command of AI/ML algorithms, methodologies, tools, and best practices for building robust AI/ML systems.
- Rock-solid foundational knowledge of data structures and algorithms.
- Skilled in full-stack development, with a strong focus on test-first TDD practices and comprehensive unit testing across the entire application stack.
- A strong advantage: hands-on experience with AI/ML orchestration tools such as LangChain and MLflow, streamlining the training, evaluation, and deployment of AI/ML models.
- Strong interest in AI observability, particularly in monitoring and mitigating model drift to ensure sustained accuracy and reliability.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes is a significant plus in our cloud-native world.
- Sharp analytical and problem-solving skills.
- Good grasp of designing complex systems, balancing scalability with simplicity in architecture and implementation.
- A team player with experience in group software development and fluency with version control tools, especially Git.
- Legendary debugging skills: you can track down and squash bugs with finesse.
- Excellent oral and written communication skills, capable of producing clear and concise runbooks and technical documentation for both technical and non-technical audiences.
- Familiarity with relational database management systems (RDBMS) such as PostgreSQL and MySQL is a plus.

What You Bring: Passion and Product Thinking
- A Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Genuine excitement for AI and large language models (LLMs) is a significant advantage; you'll be working at the cutting edge!
- Experience with analytics is a huge plus in our data-driven environment.
- Familiarity with RDBMS (PostgreSQL, MySQL, etc.); understanding data is crucial.
- You thrive in ambiguity, tackling undefined problems with an abstract and innovative mindset.
- Experience driving product vision to deliver long-term value for our customers is highly valued.
- Readiness to own the entire development lifecycle, from initial requirements to deployment and ongoing support.
- Knowledge of open-source tools and technologies and how to leverage and extend them to build innovative solutions.
- Passion for AI/ML, especially building smart, agent-driven interfaces that feel human.
- An ownership mindset: you build, deploy, iterate, and scale with a long-term view.

Why We Think You'll Love Teradata
We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement; it is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer.
We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.
Posted 2 weeks ago
20.0 years
0 Lacs
India
Remote
Company Description
Svitla Systems, Inc. is a global digital solutions company with over 20 years of experience, crafting more than 5,000 transformative solutions for clients worldwide. Our mission is to leverage digital, cloud, data, and intelligent technologies to create sustainable solutions for our clients, enhancing their growth and competitive edge. With a diverse team of over 1,000 technology professionals, Svitla serves a range of clients, from innovative startups to Fortune 500 companies, across 20+ industries. Svitla operates from 10 delivery centers globally, specializing in areas like cloud migration, data analytics, web and mobile development, and more. We are proud to be a WBENC-certified business and one of the largest, fastest-growing women-owned IT companies in the US.

Role Description
This is a fully remote, full-time, long-term contractual position with one of our clients, who is building the next generation of secure, real-time proctoring solutions for high-stakes exams. We're looking for a Senior ML/AI Engineer to architect, implement, and maintain Azure-based AI models that power speech-to-text, computer vision, identity verification, and intelligent chat features during exam sessions.

Responsibilities
- Implement real-time speech-to-text transcription and audio-quality analysis using Azure AI Speech.
- Build prohibited-item detection, OCR, and face-analysis pipelines with Azure AI Vision.
- Integrate Azure Bot Service for rule-based, intelligent chat support.
- Collaborate with our DevOps Engineer on CI/CD and infrastructure-as-code for AI model deployment.
- Train, evaluate, and deploy object-detection models (e.g., screen-reflection, background faces, ID checks) using Azure Custom Vision.
- Develop and maintain skeletal-tracking models (OpenPose/MediaPipe) for gaze-anomaly detection.
- Fine-tune Azure Face API for ID-to-headshot matching at session start and continuous identity validation.
- Expose inference results via REST APIs, in partnership with backend developers, to drive real-time proctor dashboards and post-session reports.
- Monitor model performance, manage versioning/retraining workflows, and optimize accuracy for edge-case scenarios.

Qualifications
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
- 5+ years of professional ML/AI experience, with at least 2 years working on production-grade Azure Cognitive Services.
- Strong Python skills, plus 3+ years with TensorFlow or PyTorch.
- Hands-on experience (1–2 years) with:
  - Azure AI Speech (speech-to-text, audio analysis)
  - Azure AI Vision (object detection, OCR, face analysis)
  - Azure Custom Vision model training and deployment
  - Azure Face API fine-tuning and biometric matching
  - Azure Bot Service integration
- Solid understanding of CI/CD practices and tools (Azure DevOps, Docker, Kubernetes), with 2+ years of collaboration on AI model deployments.
- 2+ years building and consuming RESTful or gRPC APIs for AI inference.
- Proven track record of monitoring and optimizing model performance in production.

Good to Have
- 1+ year with skeletal-tracking frameworks (OpenPose, MediaPipe).
- Familiarity with Azure ML Studio, ML pipelines, or MLflow for model versioning and retraining.
- Experience with edge-deployment frameworks (TensorFlow Lite, ONNX Runtime).
- Background in security and compliance for biometric data (GDPR, PCI-DSS).
- Azure AI Engineer Associate or Azure Data Scientist Associate certification.
Additional Information
- The role is a fully remote, full-time, long-term contractual position.
- The hiring process includes an initial screening by the recruitment team, an HR motivation interview, an internal tech screening, a client technical interview, and finally the client management interview.
- The salary range for this position is 50–70 LPA (INR).
- The position needs to be filled on priority; only candidates with an official notice period (or remaining time on their notice period) of <=30 days will be screened.
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You’ll Work On 1. Deep Learning & Computer Vision Train models for image classification: binary/multi-class using CNNs, EfficientNet, or custom backbones. Implement object detection using YOLOv5, Faster R-CNN, SSD; tune NMS and anchor boxes for medical contexts. Work with semantic segmentation models (UNet, DeepLabV3+) for region-level diagnostics (e.g., cell, lesion, or nucleus boundaries). Apply instance segmentation (e.g., Mask R-CNN) for microscopy image cell separation. Use super-resolution and denoising networks (SRCNN, Real-ESRGAN) to enhance low-quality inputs. Develop temporal comparison pipelines for changes across image sequences (e.g., disease progression). Leverage data augmentation libraries (Albumentations, imgaug) for low-data domains. 2. Vision-Language Models (VLMs) Fine-tune CLIP, BLIP, LLaVA, GPT-4V to generate explanations, labels, or descriptions from images. Build image captioning models (Show-Attend-Tell, Transformer-based) using paired datasets. Train or use VQA pipelines for image-question-answer triples. Align text and image embeddings with contrastive loss (InfoNCE), cosine similarity, or projection heads. Design prompt-based pipelines for zero-shot visual understanding. Evaluate using metrics like BLEU, CIDEr, SPICE, Recall@K, etc. 3. Model Training, Evaluation & Interpretation Use PyTorch (core), with support from HuggingFace, torchvision, timm, Lightning. Track model performance with TensorBoard, Weights & Biases, MLflow. Implement cross-validation, early stopping, LR schedulers, warm restarts. Visualize model internals using GradCAM, SHAP, Attention rollout, etc. Evaluate metrics: • Classification: Accuracy, ROC-AUC, F1 • Segmentation: IoU, Dice Coefficient • Detection: mAP • Captioning/VQA: BLEU, METEOR 4. Optimization & Deployment Convert models to ONNX, TorchScript, or TFLite for portable inference. Apply quantization-aware training, post-training quantization, and pruning. 
Optimize for low-power inference using TensorRT or OpenVINO.
Build multi-threaded or asynchronous pipelines for batched inference.
5. Edge & Real-Time Systems
Deploy models on Jetson Nano/Xavier or Coral TPU.
Handle real-time camera inputs using OpenCV and GStreamer, and apply streaming inference.
Handle multiple camera/image feeds for simultaneous diagnostics.
6. Regulatory-Ready AI Development
Maintain model lineage, performance logs, and validation trails for 21 CFR Part 11 and ISO 13485 readiness.
Contribute to validation reports, IQ/OQ/PQ, and reproducibility documentation.
Write SOPs and datasheets to support clinical validation of AI components.
7. DevOps, CI/CD & MLOps
Use Azure Boards + DevOps Pipelines (YAML) to:
• Track sprints, assign tasks, and maintain epics & user stories
• Trigger auto-validation pipelines (lint, unit tests, inference validation) on code push
• Integrate MLflow or custom logs for model lifecycle tracking
• Use GitHub Actions for cross-platform model validation across environments
8. Bonus Skills (Preferred but Not Mandatory)
Experience with microscopy or pathology data (TIFF, NDPI, DICOM formats).
Knowledge of OCR + CV hybrid pipelines for slide/dataset annotation.
Experience with Streamlit, Gradio, or Flask for AI UX prototyping.
Understanding of active learning or semi-supervised learning in low-label settings.
Exposure to research publishing, IP filing, or open-source contributions.
9. Required Background
4–6 years in applied deep learning (post academia)
Strong foundation in:
Python + PyTorch
CV workflows (classification, detection, segmentation)
Transformer architectures & attention
VLMs or multimodal learning
Bachelor’s or Master’s degree in CS, AI, EE, Biomedical Engineering, or a related field
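The post-training quantization named in section 4 can be illustrated with a minimal, hypothetical pure-Python sketch: weights are mapped to int8 with a single symmetric scale and restored approximately at inference time. Real pipelines would rely on PyTorch's quantization toolkit, TensorRT, or ONNX Runtime rather than this hand-rolled helper.

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0  # one scale for the whole tensor
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The same idea, applied per channel and calibrated on representative data, underlies the post-training quantization modes in the deployment tools listed above.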
10. How to Apply
Send the following to info@sciverse.co.in
Subject: Application – AI Research Engineer (4–8 Yrs, CV + VLM)
Include:
• Your updated CV
• GitHub / Portfolio
• A short write-up on a model or pipeline you built and why you’re proud of it
Or apply directly via LinkedIn — but email applications get faster visibility.
Let’s build AI that sees, understands, and impacts lives.
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
As an AI Ops Expert, you will be responsible for the delivery of projects with defined quality standards within set timelines and budget constraints. Your role will involve managing the AI model lifecycle, versioning, and monitoring in production environments. You will be tasked with building resilient MLOps pipelines and ensuring adherence to governance standards. Additionally, you will design, implement, and oversee AIops solutions to automate and optimize AI/ML workflows. Collaboration with data scientists, engineers, and stakeholders will be essential to ensure seamless integration of AI/ML models into production systems. Monitoring and maintaining the health and performance of AI/ML systems, as well as developing and maintaining CI/CD pipelines for AI/ML models, will also be part of your responsibilities. Troubleshooting and resolving issues related to AI/ML infrastructure and workflows will require your expertise, along with staying updated on the latest AI Ops, MLOps, and Kubernetes tools and technologies. To be successful in this role, you must possess a Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of relevant experience. Your proven experience in AIops, MLOps, or related fields will be crucial. Proficiency in Python and hands-on experience with Fast API are required, as well as strong expertise in Docker and Kubernetes (or AKS). Familiarity with MS Azure and its AI/ML services, including Azure ML Flow, is essential. Additionally, you should be proficient in using DevContainer for development and have knowledge of CI/CD tools like Jenkins, Argo CD, Helm, GitHub Actions, or Azure DevOps. Experience with containerization and orchestration tools, Infrastructure as Code (Terraform or equivalent), strong problem-solving skills, and excellent communication and collaboration abilities are also necessary. 
Preferred skills for this role include experience with machine learning frameworks such as TensorFlow, PyTorch, or scikit-learn, as well as familiarity with data engineering tools like Apache Kafka, Apache Spark, or similar. Knowledge of monitoring and logging tools such as Prometheus, Grafana, or the ELK stack, along with an understanding of data versioning tools like DVC or MLflow, would be advantageous. Proficiency in Azure-specific tools and services like Azure Machine Learning (Azure ML), Azure DevOps, Azure Kubernetes Service (AKS), Azure Functions, Azure Logic Apps, Azure Data Factory, Azure Monitor, and Application Insights is also preferred. Joining our team at Société Générale will provide you with the opportunity to be part of a dynamic environment where your contributions can make a positive impact on the future. You will have the chance to innovate, collaborate, and grow in a supportive and stimulating setting. Our commitment to diversity and inclusion, as well as our focus on ESG principles and responsible practices, ensures that you will have the opportunity to contribute meaningfully to various initiatives and projects aimed at creating a better future for all. If you are looking to be directly involved, develop your expertise, and be part of a team that values collaboration and innovation, you will find a welcoming and fulfilling environment with us at Société Générale.
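The monitoring responsibilities described above (tracking the health of AI/ML systems in production) often begin with simple statistical checks on live inputs or predictions. Below is a deliberately simple, hypothetical sketch of a mean-shift drift alert; a production setup would use the monitoring stacks named above (Prometheus, Grafana, ELK) or a dedicated drift library rather than this toy check.

```python
import statistics

def detect_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live batch mean sits more than z_threshold
    standard errors away from the baseline mean."""
    mu = statistics.mean(baseline)
    se = statistics.stdev(baseline) / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / se
    return z > z_threshold, z

# Hypothetical prediction scores: a stable batch and a clearly shifted one.
baseline = [0.48, 0.52, 0.50, 0.47, 0.53, 0.49, 0.51, 0.50]
stable_batch = [0.50, 0.49, 0.51, 0.52]
shifted_batch = [0.80, 0.78, 0.83, 0.79]
```

In practice the same pattern is applied per feature and per prediction window, with the alert wired into the team's paging or dashboard tooling.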
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior AI/ML Engineer, you will be responsible for designing, developing, and deploying advanced AI models with a focus on generative AI, including transformer architectures such as GPT, BERT, T5, and other deep learning models utilized for text, image, or multimodal generation. You will work with extensive and complex datasets by cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training. Additionally, you will collaborate with cross-functional teams to identify project objectives and create solutions using generative AI tailored to business needs. Your role will involve implementing, fine-tuning, and scaling generative AI models in production environments to ensure robust model performance and efficient resource utilization. You will also be responsible for developing pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production. It is essential to stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability. Furthermore, you will document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI would be advantageous. You should have 12-16 years of experience in machine learning, with at least 8 years in designing and implementing generative AI models or working specifically with transformer-based models. The ideal candidate will possess expertise in generative AI, transformer models, GANs, VAEs, text generation, and image generation.
Strong knowledge of machine learning algorithms, deep learning, neural networks, and programming skills in Python and SQL are required. Familiarity with libraries such as Hugging Face Transformers, PyTorch, and TensorFlow, and experience with MLOps tools like Docker, Kubernetes, MLflow, and cloud platforms (AWS, GCP, Azure), is essential. Additionally, proficiency in data engineering concepts such as data preprocessing, feature engineering, and data cleaning is preferred. Joining our team will provide you with an opportunity to work on technical challenges with global impact, vast opportunities for self-development through online university access and sponsored certifications, sponsored Tech Talks & Hackathons, and a generous benefits package including health insurance, retirement benefits, flexible work hours, and more. You will work in a supportive environment with forums to explore passions beyond work, offering an exciting opportunity to contribute to cutting-edge solutions while advancing your career in a dynamic and collaborative setting.
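The transformer architectures named in this posting (GPT, BERT, T5) are all built around scaled dot-product attention. The following is a pure-Python sketch of that single operation, for illustration only; frameworks such as PyTorch execute it as batched tensor kernels with multiple heads and masking.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """softmax(Q·K^T / sqrt(d_k)) · V, on plain nested lists."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# A query aligned with the first key attends mostly to the first value row.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Because the value rows here are one-hot, the output is exactly the attention weight distribution over the two keys.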
Posted 2 weeks ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.
Responsibilities
Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
Collaborate with MLOps, data engineering, and DevOps teams to productionize models using Docker, Kubernetes, or serverless infrastructure.
Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
Write clean, well-documented, and reusable code to support agile experimentation and the long-term platform.
Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
5–8 years of experience in AI/ML engineering, with at least 3 years in applied deep learning.
Technical Skills
Languages: Expert in Python; good knowledge of R or Java is a plus.
ML/DL Frameworks: Proficient with PyTorch, TensorFlow, scikit-learn, ONNX.
Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.
Soft Skills & Competencies
Strong analytical and systems thinking; able to break down business problems into ML components.
Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
(ref:hirist.tech)
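The detection work listed above (YOLO, Detectron2) is scored with mAP, which rests on Intersection-over-Union between predicted and ground-truth boxes. A small illustrative implementation of that core criterion; torchvision ships a vectorized equivalent for real workloads.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # 1 / 7
```

Applied against a threshold (commonly 0.5), the same test decides whether a detection counts as a true positive when computing mAP, and which candidates survive NMS.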
Posted 2 weeks ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Location: Preferred Ahmedabad or Gandhinagar; hybrid (remote considered case by case).
Experience: 8+ years (with hands-on AI/ML architecture experience).
Education: Ph.D. or Master's in Computer Science, Data Science, Artificial Intelligence, or related fields.
Job Summary
We are seeking an experienced AI/ML Architect with a strong academic background and industry experience to lead the design and implementation of AI/ML solutions across diverse industry domains. The ideal candidate will act as a trusted advisor to clients, understanding their business problems and crafting scalable AI/ML strategies and solutions aligned to their vision.
Key Responsibilities
Engage with enterprise customers and stakeholders to gather business requirements, problem statements, and aspirations.
Translate business challenges into scalable and effective AI/ML-driven solutions and architectures.
Develop AI/ML adoption strategies tailored to customer maturity, use cases, and ROI potential.
Design end-to-end ML pipelines and architecture (data ingestion, processing, model training, deployment, and monitoring).
Collaborate with data engineers, scientists, and business SMEs to build and operationalize AI/ML solutions.
Present technical and strategic insights to both technical and non-technical audiences, including executives.
Lead POCs, pilots, and full-scale implementations.
Stay updated on the latest research, technologies, tools, and trends in AI/ML and integrate them into customer solutions.
Contribute to proposal development, technical documentation, and pre-sales engagements.
Required Qualifications
8+ years of experience in the AI/ML field, with a strong background in solution architecture.
Deep knowledge of machine learning algorithms, NLP, computer vision, and deep learning frameworks (TensorFlow, PyTorch, etc.).
Experience with cloud AI/ML services (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
Strong communication and stakeholder management skills.
Proven track record of working directly with clients to understand business needs and deliver AI solutions.
Familiarity with MLOps practices and tools (Kubeflow, MLflow, Airflow, etc.).
Preferred Skills
Experience in building GenAI or agentic AI applications.
Knowledge of data governance, ethics in AI, and explainable AI.
Ability to lead cross-functional teams and mentor junior data scientists/engineers.
Publications or contributions to AI/ML research communities (preferred but not mandatory).
(ref:hirist.tech)
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Machine Learning Engineer at our company, you will be utilizing your expertise in Computer Vision, Natural Language Processing (NLP), and Backend Development. Your primary responsibilities will include developing ML/DL models for Computer Vision tasks such as classification, object detection, and segmentation, as well as for NLP tasks like text classification, Named Entity Recognition (NER), and summarization. You will also be implementing research papers and creating production-ready prototypes. To excel in this role, you must have a solid understanding of Machine Learning and Deep Learning concepts. Proficiency in tools and libraries like PyTorch, OpenCV, Pillow, TorchVision, and Transformers is essential. You will be optimizing models using techniques such as quantization, pruning, ONNX export, and TorchScript. Moreover, you will be tasked with building and deploying RESTful APIs using FastAPI, Flask, or Django, and containerizing applications using Docker for deployment on cloud or local servers. Your role will also involve writing clean, efficient, and scalable code for backend and ML pipelines. Strong backend skills using FastAPI, Flask, or Django are required, along with experience in utilizing NLP libraries like Hugging Face Transformers, spaCy, and NLTK. Familiarity with Docker, Git, and Linux environments is crucial for this position, as well as experience with model deployment and optimization tools such as ONNX and TorchScript. While not mandatory, it would be advantageous to have knowledge of Generative AI / Large Language Models (LLMs) and experience with MLOps tools like MLflow, DVC, and Airflow. Additionally, familiarity with cloud platforms such as AWS, GCP, or Azure would be a plus. The ideal candidate should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. This is a full-time position with an evening shift schedule from Monday to Friday. The work location is in person.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You will be responsible for developing machine learning models by designing, building, and evaluating both supervised and unsupervised models like regression, classification, clustering, and recommendation systems. This includes performing feature engineering, model tuning, and validation through cross-validation and various performance metrics. Your duties will also involve preparing and analyzing data by cleaning, preprocessing, and transforming large datasets sourced from different channels. You will conduct exploratory data analysis (EDA) to reveal patterns and gain insights from the data. In addition, you will deploy machine learning models into production using tools like Flask, FastAPI, or cloud-native services. Monitoring model performance and updating or retraining models as necessary will also fall under your purview. Collaboration and communication are key aspects of this role as you will collaborate closely with data engineers, product managers, and business stakeholders to comprehend requirements and provide impactful solutions. Furthermore, presenting findings and model outcomes in a clear and actionable manner will be essential. You will utilize Python and libraries such as scikit-learn, XGBoost, TensorFlow, or PyTorch. Additionally, leveraging version control tools like Git, Jupyter notebooks, and ML lifecycle tools such as MLflow and DVC will be part of your daily tasks. The ideal candidate should possess a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. A minimum of 2-3 years of experience in building and deploying machine learning models is preferred. Strong programming skills in Python and familiarity with SQL are required. A solid grasp of ML concepts, model evaluation, statistical techniques, exposure to cloud platforms (AWS, GCP, or Azure), and MLOps practices would be advantageous. Excellent problem-solving and communication skills are also essential for this role.
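The cross-validation mentioned above can be sketched from scratch to show what scikit-learn's `KFold` produces under the hood: k disjoint validation folds that together cover every sample. A minimal illustrative version, not a replacement for the library:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    # Earlier folds absorb the remainder, matching the common convention.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, val
        start += size

folds = list(k_fold_indices(10, 3))  # fold sizes 4, 3, 3
```

Each model candidate is then fit on every train split and scored on the matching validation split, with the mean score used for tuning decisions.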
Posted 2 weeks ago
3.0 - 7.0 years
0 - 0 Lacs
Karnataka
On-site
As a Data Science/Machine Learning Instructor at our organization, you will be responsible for conducting live instructions to educate working professionals on core topics related to Data Science and Machine Learning. This includes topics such as supervised and unsupervised learning, deep learning, model evaluation, and MLOps through engaging and interactive sessions. You will collaborate with content teams to develop labs, code walkthroughs, and real-world case studies using Python, scikit-learn, TensorFlow/PyTorch, and cloud-based Data Science/Machine Learning services. Your role will also involve providing support to learners by addressing technical queries, debugging code, reviewing notebooks, and offering constructive feedback on assignments and projects. Additionally, you will mentor learners on capstone projects such as image/video models, NLP pipelines, recommendation systems, and deployment pipelines. Continuous improvement is a key aspect of your responsibilities, which includes analyzing learner performance data to enhance modules, introduce new topics like transformers and generative models, and improve assessments. To qualify for this position, you should have a minimum of 3 years of industry or academic experience in Data Science, Machine Learning, or Artificial Intelligence. A Master's degree in Data Science, Machine Learning, Artificial Intelligence, or a Computer Science specialization in AI and Data Science/Machine Learning is required. Proficiency in Python and ML frameworks such as scikit-learn, TensorFlow, or PyTorch is essential. Familiarity with MLOps tools like Docker, Kubernetes, MLflow, and cloud ML services such as AWS SageMaker, GCP AI Platform, or Azure ML is preferred. Strong presentation and mentoring skills in live and small-group settings are necessary for this role. Prior experience in teaching or educational technology would be advantageous. 
If you are passionate about Data Science and Machine Learning and possess the required qualifications and skills, we invite you to join our team as a Data Science/Machine Learning Instructor.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
Job Description: As the Digital Transformation Lead at Godrej Agrovet Limited (GAVL) in Mumbai, you will play a crucial role in driving innovation and productivity in the agri-business sector. GAVL is dedicated to enhancing the livelihood of Indian farmers by developing sustainable solutions that enhance crop and livestock yields. With leading market positions in Animal Feed, Crop Protection, Oil Palm, Dairy, Poultry, and Processed Foods, GAVL is committed to making a positive impact on the agricultural industry. With an impressive annual sales figure of 6000 Crore INR in FY 18-19, GAVL has a widespread presence across India, offering high-quality feed and nutrition products for cattle, poultry, aqua feed, and specialty feed. The company operates 50 manufacturing facilities, has a network of 10,000 rural distributors/dealers, and employs over 2500 individuals. At GAVL, our people philosophy revolves around the concept of tough love. We set high expectations for our team members, recognizing and rewarding performance and potential through career growth opportunities. We prioritize the development, mentoring, and training of our employees, understanding that diverse interests and passions contribute to a strong team dynamic. We encourage individuals to explore their full potential and provide a supportive environment for personal and professional growth. In this role, you will utilize your expertise as a Data Scientist to extract insights from complex datasets, develop predictive models, and drive data-driven decisions across the organization. You will collaborate with various teams, including business, engineering, and product, to apply advanced statistical methods, machine learning techniques, and domain knowledge to address real-world challenges. 
Key Responsibilities:
- Data Cleaning, Preprocessing & Exploration: Prepare and analyze data, ensuring quality and completeness by addressing missing values, outliers, and data transformations to identify patterns and anomalies.
- Machine Learning Model Development: Build, train, and deploy machine learning models using tools like MLflow on the Databricks platform, exploring regression, classification, clustering, and time series analysis techniques.
- Model Evaluation & Deployment: Enhance model performance through feature selection, leveraging distributed computing capabilities for efficient processing, and utilizing CI/CD tools for deployment automation.
- Collaboration: Work closely with data engineers, analysts, and stakeholders to understand business requirements and translate them into data-driven solutions.
- Data Visualization and Reporting: Create visualizations and dashboards to communicate insights to technical and non-technical audiences using tools like Databricks and Power BI.
- Continuous Learning: Stay updated on the latest advancements in data science, machine learning, and industry best practices to enhance skills and processes.
Required Technical Skills:
- Proficiency in statistical analysis, hypothesis testing, and machine learning techniques.
- Familiarity with NLP, time series analysis, computer vision, and A/B testing.
- Strong knowledge of Databricks, Spark DataFrames, MLlib, and Python libraries (TensorFlow, Pandas, scikit-learn, PySpark, NumPy).
- Proficiency in SQL for data extraction, manipulation, and analysis, along with experience in MLflow and cloud data storage tools.
Qualifications:
- Education: Bachelor's degree in Statistics, Mathematics, Computer Science, or a related field.
- Experience: Minimum of 3 years in a data science or analytical role.
Join us at Vikhroli, Mumbai, and be a part of our mission to drive digital transformation and innovation in the agricultural sector at Godrej Agrovet Limited.
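The data-cleaning responsibility above (handling missing values) can be illustrated with the simplest possible strategy, mean imputation. This is a toy pure-Python sketch; in practice it would be a `fillna` call in pandas or its PySpark equivalent on the Databricks platform named above.

```python
def impute_mean(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

cleaned = impute_mean([1.0, None, 3.0])  # → [1.0, 2.0, 3.0]
```

More careful pipelines impute per group, flag imputed rows, or use model-based imputation, but the interface stays the same: a column goes in, a complete column comes out.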
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction To Role Are you ready to be part of the future of healthcare? Can you think big, be bold, and harness the power of digital and AI to tackle longstanding life sciences challenges? Then Evinova, a new healthtech business within the AstraZeneca Group, might be for you! Transform billions of patients’ lives through technology, data, and cutting-edge ways of working. You’re disruptive, decisive, and transformative—someone who’s excited to use technology to improve patients’ health. We’re building Evinova, a fully-owned subsidiary of AstraZeneca Group, to deliver market-leading digital health solutions that are science-based, evidence-led, and human experience-driven. Smart risks and quick decisions come together to accelerate innovation across the life sciences sector. Be part of a diverse team that pushes the boundaries of science by digitally empowering a deeper understanding of the patients we’re helping. Launch game-changing digital solutions that improve the patient experience and deliver better health outcomes. Together, we have the opportunity to combine deep scientific expertise with digital and artificial intelligence to serve the wider healthcare community and create new standards across the sector. Accountabilities The Machine Learning and Artificial Intelligence Operations team (ML/AI Ops) is newly formed to spearhead the design, creation, and operational excellence of our entire ML/AI data and computational AWS ecosystem to catalyze and accelerate science-led innovations. This team is responsible for the design, implementation, deployment, health, and performance of all algorithms, models, ML/AI operations (MLOps, AIOps, and LLMOps), and Data Science Platform. 
We manage ML/AI and broader cloud resources, automating operations through infrastructure-as-code and CI/CD pipelines, ensuring best-in-class operations—striving to push beyond mere compliance with industry standards such as Good Clinical Practices (GCP) and Good Machine Learning Practice (GMLP). As a ML/AI Operations Engineer for clinical trial design, planning, and operational optimization on our team, you will lead the development and management of MLOps systems for our trial management and optimization SaaS product. You will collaborate closely with data scientists to transition projects from embryonic research into production-grade AI capabilities, utilizing advanced tools and frameworks to optimize model deployment, governance, and infrastructure performance. This position requires a deep understanding of cloud-native ML/AI Ops methodologies and technologies, AWS infrastructure, and the unique demands of regulated industries, making it a cornerstone of our success in delivering impactful solutions to the pharmaceutical industry. Role & Team Key Responsibilities Operational Excellence Lead by example in creating high-performance, mission-focused and interdisciplinary teams/culture founded on trust, mutual respect, growth mindsets, and an obsession for building extraordinary products with extraordinary people. Drive the creation of proactive capability and process enhancements that ensures enduring value creation and analytic compounding interest. Design and implement resilient cloud ML/AI operational capabilities to maximize our system A-bilities (Learnability, Flexibility, Extendibility, Interoperability, Scalability). Drive precision and systemic cost efficiency, optimized system performance, and risk mitigation with a data-driven strategy, comprehensive analytics, and predictive capabilities at the tree-and-forest level of our ML/AI systems, workloads and processes. 
ML/AI Cloud Operations and Engineering Develop and manage MLOps/AIOps/LLMOps systems for clinical trial design, planning and operational optimization. Partner closely with data scientists to shepherd projects from embryonic research stages into production-grade ML/AI capabilities. Leverage and teach modern tools, libraries, frameworks and best practices to design, validate, deploy and monitor data pipelines and models in production (examples include, but are not limited to AWS Sagemaker, MLflow, CML, Airflow, DVC, Weights and Biases, FastAPI, Litserve, Deepchecks, Evidently, Fiddler, Manifold). Establish systems and protocols for entire model development lifecycle across a diverse set of algorithms, conventional statistical models, ML and AI/GenAI models to ensure best-in-class Machine Learning Practice (MLP). Enhance system scalability, reliability, and performance through effective infrastructure and process management. Ensure that any prediction we make is backed by deep exploratory data analysis and evidence, interpretable, explainable, safe, and actionable. Personal Attributes Customer-obsessed and passionate about building products that solve real-world problems. Highly organized and detail-oriented, with the ability to manage multiple initiatives and deadlines. Collaborative and inclusive, fostering a positive team culture where creativity and innovation thrive. Essential Skills/Experience Deep understanding of the Data Science Lifecycle (DSLC) and the ability to shepherd data science projects from inception to production within the platform architecture. Expert in MLflow, SageMaker, Kubeflow or Argo, DVC, Weights and Biases, and other relevant platforms. Strong software engineering abilities in Python/JavaScript/TypeScript. Expert in AWS services and containerization technologies like Docker and Kubernetes. Experience with LLMOps frameworks such as LlamaIndex and LangChain. 
Ability to collaborate effectively with engineering, design, product, and science teams. Strong written and verbal communication skills for reporting and documentation. Minimum of 4 years in ML/AI operations engineering roles. Proven track record of deploying algorithms and machine learning models into production environments. Demonstrated ability to work closely with cross-functional teams, particularly data scientists. When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. AstraZeneca is where creativity meets critical thinking! We embrace technology to reimagine healthcare's future by predicting, preventing, and treating conditions more effectively. Our inclusive approach fosters collaboration internally and externally to share diverse perspectives. We empower our teams with trust and space to explore innovative solutions that redefine patient experiences across their journey. Join us as we drive change that benefits both business and patients. Ready to make an impact? Apply now to join our journey towards transforming healthcare! Date Posted 18-Jul-2025 Closing Date 31-Jul-2025 AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. 
We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
Posted 2 weeks ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Our Purpose
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.

Title and Summary
MLOps Engineering Manager
The Horizontal Data Science Enablement Team within SSO Data Science is looking for an MLOps Engineering Manager who can help solve MLOps problems, manage the Databricks platform for the entire organization, build CI/CD and automation pipelines, and lead best practices.

All About You
Oversee the administration, configuration, and maintenance of Databricks clusters and workspaces.
Continuously monitor Databricks clusters for high workloads or excessive usage costs, and promptly alert relevant stakeholders to address issues impacting overall cluster health.
Implement and manage security protocols, including access controls and data encryption, to safeguard sensitive information in adherence with Mastercard standards.
Facilitate the integration of various data sources into Databricks, ensuring seamless data flow and consistency.
Identify and resolve issues related to Databricks infrastructure, providing timely support to users and stakeholders.
Work closely with data engineers, data scientists, and other stakeholders to support their data processing and analytics needs.
Maintain comprehensive documentation of Databricks configurations, processes, and best practices, and lead participation in security and architecture reviews of the infrastructure.

Bring MLOps expertise to the table, namely within the scope of, but not limited to:
Model monitoring
Feature catalog/store
Model lineage maintenance
CI/CD pipelines to gatekeep the model lifecycle from development to production

Own and maintain MLOps solutions, either by leveraging open-source solutions or with a third-party vendor.
Build LLMOps pipelines using open-source solutions; recommend alternatives and onboard products to the solution.
Maintain services once they are live by measuring and monitoring availability, latency, and overall system health.

What Experience You Need
Master’s degree in computer science, software engineering, or a similar field.
Strong experience with Databricks and its management of roles and resources.
Experience in cloud technologies and operations.
Experience supporting APIs and cloud technologies.
Experience with MLOps solutions like MLflow.
Experience with performing data analysis, data observability, data ingestion, and data integration.
5+ years of DevOps, SRE, or general systems engineering experience.
2+ years of hands-on experience with industry-standard CI/CD tools like Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
Experience architecting and implementing data governance processes and tooling (such as data catalogs, lineage tools, role-based access control, PII handling).
Strong coding ability in Python or other languages such as Java and C++, plus a solid grasp of SQL fundamentals.
A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.

What Could Set You Apart
SQL tuning experience.
Strong automation experience.
Strong data observability experience.
Operations experience in supporting highly scalable systems.
Ability to operate in a 24x7 environment encompassing global time zones.
Self-motivated; creatively solves software problems and effectively keeps the lights on for modeling systems.

Corporate Security Responsibility
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization. It is therefore expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
Abide by Mastercard’s security policies and practices;
Ensure the confidentiality and integrity of the information being accessed;
Report any suspected information security violation or breach; and
Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines.
R-247976
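The "measuring and monitoring availability, latency and overall system health" duty above can be sketched with a minimal, stdlib-only health monitor. The `check` callable, run count, and 200 ms latency threshold are illustrative assumptions, not Mastercard tooling:

```python
import statistics
import time

def measure(check, runs=5, latency_slo_ms=200.0):
    """Call `check` repeatedly and summarize availability and latency.

    `check` returns True on success; latencies are wall-clock milliseconds.
    """
    latencies, successes = [], 0
    for _ in range(runs):
        start = time.perf_counter()
        ok = check()
        latencies.append((time.perf_counter() - start) * 1000.0)
        successes += int(bool(ok))
    # Crude p95: index into the sorted latency list.
    p95 = sorted(latencies)[max(0, int(0.95 * len(latencies)) - 1)]
    return {
        "availability": successes / runs,
        "p95_ms": p95,
        "mean_ms": statistics.mean(latencies),
        "healthy": successes == runs and p95 < latency_slo_ms,
    }

# Example: a stubbed health check that always succeeds quickly.
report = measure(lambda: True, runs=10)
print(report["availability"], report["healthy"])  # 1.0 True
```

A real deployment would replace the stub with an HTTP probe against the model endpoint and push the report into the alerting stack (e.g., Prometheus, as the posting mentions).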
Posted 2 weeks ago
1.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Location: Vadodara
Type: Full-time / Internship
Duration (for interns): Minimum 3 months
Stipend/CTC: Based on experience and role

About Gururo
Gururo is an industry leader in practical, career-transforming education. With a mission to empower professionals and students through real-world skills, we specialize in project management, leadership development, and emerging technologies. Join our fast-paced, mission-driven team and work on AI/ML-powered platforms that impact thousands globally.

Who Can Apply?
Interns: Final-year students or recent graduates from Computer Science, Data Science, or related fields, with a strong passion for AI/ML.
Freshers: 0–1 years of experience with academic or internship exposure to machine learning projects.
Experienced Professionals: 1+ years of hands-on experience in AI/ML roles with a demonstrated portfolio or GitHub contributions.

Key Responsibilities
Design and develop machine learning models and AI systems for real-world applications.
Clean, preprocess, and analyze large datasets using Python and relevant libraries.
Build and deploy ML pipelines using tools like Scikit-learn, TensorFlow, and PyTorch.
Work on NLP, computer vision, or recommendation systems based on project needs.
Evaluate models with appropriate metrics and fine-tune for performance.
Collaborate with product, engineering, and design teams to integrate AI into platforms.
Maintain documentation, participate in model audits, and ensure ethical AI practices.
Use version control (Git), cloud deployment (AWS, GCP), and experiment tracking tools (MLflow, Weights & Biases).

Must-Have Skills
Strong Python programming skills.
Hands-on experience with one or more ML frameworks (Scikit-learn, TensorFlow, or PyTorch).
Good understanding of core ML algorithms (classification, regression, clustering, etc.).
Familiarity with data wrangling libraries (Pandas, NumPy) and visualization (Matplotlib, Seaborn).
Experience working with Jupyter Notebooks and version control (Git).
Basic understanding of model evaluation techniques and metrics.

Good to Have (Optional)
Exposure to deep learning, NLP (transformers, BERT), or computer vision (OpenCV, CNNs).
Experience with cloud ML platforms (AWS SageMaker, GCP AI Platform, etc.).
Familiarity with Docker, APIs, and ML model deployment workflows.
Knowledge of MLOps tools and CI/CD for AI systems.
Kaggle profile, published papers, or open-source contributions.

What You'll Gain
Work on real-world AI/ML problems in the fast-growing EdTech space.
Learn from senior data scientists and engineers in a mentorship-driven environment.
Certificate of Internship/Experience and Letter of Recommendation (for interns).
Opportunities to lead research-driven AI initiatives at scale.
Flexible work hours and performance-based growth opportunities.
End-to-end exposure, from data collection to model deployment in production.
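The "evaluate models with appropriate metrics" responsibility above comes down to a handful of counting formulas. A stdlib-only sketch with made-up toy labels (a real project would use scikit-learn's `metrics` module on a held-out set):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return {
        "accuracy": correct / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Toy data: 4 of 5 predictions match; one true positive was missed.
m = classification_metrics([1, 0, 1, 1, 0], [1, 0, 1, 0, 0])
print(m)  # accuracy 0.8, precision 1.0, recall ~0.67
```

The precision/recall split is why "appropriate metrics" matters: on imbalanced data (fraud, spam), accuracy alone can look excellent while recall is poor.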
Posted 2 weeks ago
0 years
2 - 4 Lacs
Hyderābād
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Key Responsibilities
Develop, deploy, and monitor machine learning models in production environments.
Automate ML pipelines for model training, validation, and deployment.
Optimize ML model performance, scalability, and cost efficiency.
Implement CI/CD workflows for ML model versioning, testing, and deployment.
Manage and optimize data processing workflows for structured and unstructured data.
Design, build, and maintain scalable ML infrastructure on cloud platforms.
Implement monitoring, logging, and alerting solutions for model performance tracking.
Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into business applications.
Ensure compliance with best practices for security, data privacy, and governance.
Stay updated with the latest trends in MLOps, AI, and cloud technologies.

Mandatory Technical Skills
Programming Languages: Proficiency in Python (3.x) and SQL.
ML Frameworks & Libraries: Extensive knowledge of ML frameworks (TensorFlow, PyTorch, Scikit-learn), data structures, data modeling, and software architecture.
Databases: Experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
Mathematics & Algorithms: Strong understanding of mathematics, statistics, and algorithms for machine learning applications.
ML Modules & REST APIs: Experience in developing and integrating ML modules with RESTful APIs.
Version Control: Hands-on experience with Git and best practices for version control.
Model Deployment & Monitoring: Experience in deploying and monitoring ML models using:
MLflow (for model tracking, versioning, and deployment)
WhyLabs (for model monitoring and data drift detection)
Kubeflow (for orchestrating ML workflows)
Airflow (for managing ML pipelines)
Docker & Kubernetes (for containerization and orchestration)
Prometheus & Grafana (for logging and real-time monitoring)
Data Processing: Ability to process and transform unstructured data into meaningful insights (e.g., auto-tagging images, text-to-speech conversions).

Preferred Cloud & Infrastructure Skills
Cloud platforms: Knowledge of AWS Lambda, AWS API Gateway, AWS Glue, Athena, S3, and Iceberg, and Azure AI Studio for model hosting, GPU/TPU usage, and scalable infrastructure.
Infrastructure as Code: Hands-on experience with Terraform and CloudFormation for cloud automation.
CI/CD pipelines: Experience integrating ML models into continuous integration/continuous delivery workflows; we mostly use Git-based CI/CD methods.
Experience with feature stores (Feast, Tecton) for managing ML features.
Knowledge of big data processing tools (Spark, Hadoop, Dask, Apache Beam).

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
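The data-drift detection mentioned above (the role WhyLabs plays in this stack) amounts to comparing serving-time feature distributions against a training-time baseline. A stdlib-only sketch of the Population Stability Index, a common drift statistic, on made-up toy samples; the 0.2 alarm level is a widely used rule of thumb, not an EY standard:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.

    Buckets are quantile cuts of `expected`; PSI > 0.2 commonly flags drift.
    """
    qs = sorted(expected)
    edges = [qs[int(len(qs) * i / bins)] for i in range(1, bins)]

    def bucket_fracs(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        # Small floor avoids log(0) on empty buckets.
        return [max(c / len(sample), 1e-4) for c in counts]

    e_frac, a_frac = bucket_fracs(expected), bucket_fracs(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

baseline = [float(i) for i in range(100)]          # training-time feature values
shifted = [float(i) + 50.0 for i in range(100)]    # serving-time values, drifted
print(psi(baseline, baseline) < 0.01, psi(baseline, shifted) > 0.2)
```

In production this check would run on a schedule (e.g., an Airflow task) per feature, alerting when the index crosses the agreed threshold.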
Posted 2 weeks ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Data Engineer – Machine Learning & Data Engineering
Location: Gurgaon [IND]
Department: Data Engineering / Data Science
Employment Type: Full-Time
Years of Experience: 5–10

About the Role
We are looking for a Senior Data Engineer with a strong background in machine learning infrastructure, data pipeline development, and collaboration with data scientists to drive the deployment and scalability of advanced analytics and AI solutions. You will play a pivotal role in building and optimizing data systems that power ML models, dashboards, and strategic insights across the company.

Key Responsibilities
Design, develop, and optimize scalable data pipelines and ETL/ELT processes to support ML workflows and analytics.
Collaborate with data scientists to operationalize machine learning models in production environments (batch, real-time).
Build and maintain data lakes, data warehouses, and feature stores using modern cloud technologies (e.g., AWS/GCP/Azure, Snowflake, Databricks).
Implement and maintain ML infrastructure, including model versioning, CI/CD for ML, and monitoring tools (MLflow, Airflow, Kubeflow, etc.).
Develop and enforce data quality, governance, and security standards.
Troubleshoot data issues and support the lifecycle of model development to deployment.
Partner with software engineers and DevOps teams to ensure data systems are robust, scalable, and secure.
Mentor junior engineers and provide technical leadership on data and ML infrastructure.

Qualifications
Required:
5+ years of experience in data engineering, ML infrastructure, or a related field.
Proficient in Python, SQL, and big data processing frameworks (Spark, Flink, or similar).
Experience with orchestration tools like Apache Airflow, Prefect, or Luigi.
Hands-on experience deploying and managing machine learning models in production.
Deep knowledge of cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes).
Familiarity with CI/CD tools for data and ML pipelines.
Experience with version control, testing, and reproducibility in data workflows.

Preferred:
Experience with feature stores (e.g., Feast), ML experiment tracking (e.g., MLflow), and monitoring solutions.
Background in supporting NLP, computer vision, or time-series ML models.
Strong communication skills and ability to work cross-functionally with data scientists, analysts, and engineers.
Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
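The ETL/ELT responsibility above follows the usual extract → transform → load shape. A deliberately tiny, stdlib-only sketch with made-up records; real pipelines would read from and write to actual stores (S3, Snowflake, etc.) and be scheduled by an orchestrator such as Airflow:

```python
def extract(rows):
    """Pretend source: yield raw records (here, in-memory dicts)."""
    yield from rows

def transform(records):
    """Clean and derive: drop rows missing 'amount', add a derived flag."""
    for r in records:
        if r.get("amount") is None:
            continue  # data-quality rule: reject incomplete rows
        yield {**r, "high_value": r["amount"] > 100}

def load(records, sink):
    """Write transformed records into `sink` (a list standing in for a table)."""
    for r in records:
        sink.append(r)
    return len(sink)

raw = [{"id": 1, "amount": 250}, {"id": 2, "amount": None}, {"id": 3, "amount": 40}]
table = []
n = load(transform(extract(raw)), table)
print(n, table[0]["high_value"])  # 2 True
```

Keeping each stage a generator means the pipeline streams record by record, which is the same idea Spark or Beam apply at cluster scale.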
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Kolkata, West Bengal
On-site
As a Senior Product Manager, you will play a pivotal role in defining the strategic direction of the product offerings, collaborating with cross-functional teams, and ensuring the successful execution of the product vision. With 8-10 years of experience in the Product domain developing AI-based enterprise solutions, you will leverage AI/ML technologies to enhance product capabilities and deliver innovative solutions in a fast-paced and dynamic environment.

Your key roles and responsibilities will include leading the development and execution of AI/ML products from concept to launch, and defining the product roadmap, strategy, and vision based on market trends, customer needs, and business goals. You will collaborate with cross-functional teams including engineering, design, data science, and marketing to drive product development, and conduct market research and competitive analysis to identify new opportunities and enhance existing products. Managing the product lifecycle, including planning, prioritization, and feature definition, you will measure and evaluate product performance, user behavior, and customer satisfaction using qualitative and quantitative methods. Communication of the product vision and priorities to stakeholders, team members, and executives will be essential. Additionally, driving product performance analysis and making data-driven decisions to optimize product features and user experience are crucial aspects of the role.

Your interaction with customers to gather feedback, understand requirements, and address product issues will be important. Staying updated on AI industry trends, technologies, and best practices to drive innovation and maintain competitive advantage is a key requirement. Providing leadership and mentorship to junior product team members will also be part of your responsibilities. A Product Management Certification (e.g., Pragmatic Marketing, Certified Scrum Product Owner) is required for this role.
Strong behavioral skills such as excellent leadership and communication, problem-solving, decision-making abilities, adaptability, creativity, and a passion for innovation are essential. Experience with technical tools and frameworks like Python, TensorFlow, PyTorch, OpenAI, Jupyter Notebooks, MLflow, AWS SageMaker, Google Vertex AI, or Azure ML will be advantageous. If you have proven experience in building AI/ML products, you can email your resume to hardik.dwivedi@adani.com.
Posted 2 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description: Principal AI Architect
Employment Type: Full-Time
Relevant Experience: 10+ years

Key Responsibilities
AI-First Leadership: Define and drive DB Tech's AI vision, re-architect systems into AI-native services, integrate tools like Cursor/Relevance AI, and mentor teams in prompt engineering, Vibe Coding, and autonomous testing.
Architect Scalable AI Systems: Design enterprise-scale AI/ML platforms that support real-time analytics, model deployment, and continuous learning in financial products and services.
Lead Solution Design: Collaborate with data scientists, engineers, and business stakeholders to build and integrate AI models into core platforms (e.g., risk engines, transaction monitoring, robo-advisors).
Ensure Governance & Compliance: Implement AI systems that meet financial regulations (e.g., GDPR, PCI-DSS, FFIEC, Basel III) and uphold fairness, explainability, and accountability.
Drive MLOps Strategy: Establish and maintain robust pipelines for data ingestion, feature engineering, model training, testing, deployment, and monitoring.
Team Leadership: Provide technical leadership to data science and engineering teams; promote best practices in AI ethics, version control, and reproducibility. Identify areas where AI can deliver business value and lead the development of proofs-of-concept (PoCs). Evaluate the feasibility, cost, and impact of new AI initiatives. Define best practices/standards for model lifecycle management (training, validation, deployment, monitoring).
Evaluate Emerging Technologies: Stay ahead of developments in generative AI, LLMs, and FinTech-specific AI tools, and drive their strategic adoption.
Technical Skills and Tools
ML & AI Frameworks: Scikit-learn, XGBoost, LightGBM, TensorFlow, PyTorch, Hugging Face Transformers, OpenAI APIs (for generative and NLP use cases)
MLOps & Deployment: MLflow, Kubeflow, Seldon Core, KServe, Weights & Biases, FastAPI, gRPC, Docker, Kubernetes, Airflow
FinTech-Specific Applications: Credit scoring models, fraud detection algorithms, time series forecasting, NLP for financial documents/chatbots, algorithmic trading models, AML (Anti-Money Laundering) systems
Cloud & Data Platforms: AWS SageMaker, Azure ML, Google Vertex AI, Databricks, Snowflake, Kafka, BigQuery, Delta Lake
Monitoring & Explainability: SHAP, LIME, Alibi, Evidently AI, Arize AI, IBM AIX 360, Fiddler AI

Required Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field; a PhD is a plus.
10+ years of experience in AI/ML, including 3–5 years in architectural roles within FinTech or other highly regulated industries.
Proven track record of building and deploying AI solutions in areas such as fraud detection, credit risk modeling, or portfolio optimization.
Strong hands-on expertise in:
Machine Learning: scikit-learn, XGBoost, LightGBM, TensorFlow, PyTorch
Data Engineering: Spark, Kafka, Airflow, SQL/NoSQL (MongoDB, Neo4j)
Cloud & MLOps: AWS, GCP, or Azure; Docker, Kubernetes, MLflow, SageMaker, Vertex AI
Programming: Python (primary); Java or Scala (optional)
Solid software engineering background with experience integrating ML models into scalable production systems.
Excellent communication skills with the ability to influence both technical and non-technical stakeholders.
(ref:hirist.tech)
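The credit-scoring and fraud-detection applications listed above are classically built on logistic models, whose scoring step is simple enough to sketch in pure Python. The weights, bias, and feature names below are hypothetical, invented for illustration; a real model would be fit on labeled transaction data:

```python
import math

def logistic_score(features, weights, bias):
    """Probability-style score from a linear model, as in simple credit scoring."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes to (0, 1)

# Hypothetical weights: unusual amounts and foreign night-time activity raise risk.
WEIGHTS = {"amount_zscore": 1.2, "foreign_txn": 0.8, "night_hour": 0.5}
BIAS = -2.0

txn = {"amount_zscore": 3.0, "foreign_txn": 1.0, "night_hour": 1.0}
score = logistic_score(txn, WEIGHTS, BIAS)
print(score > 0.5)  # flagged as suspicious under this toy 0.5 threshold
```

The linear form is also what makes explainability tools like SHAP straightforward here: each weight-times-feature term is directly a per-feature contribution to the score.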
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
The QA Engineer (AI/ML) will play a crucial role in evaluating and conducting manual functional, regression, and integrated testing on new or modified software programs. Your primary responsibility will be to ensure that these programs meet the specified requirements and adhere to established guidelines. By writing, revising, and validating test plans and procedures, you will contribute to the identification of defects, environmental needs, and product feature evaluations.

As a QA Engineer (AI/ML), you will be tasked with maintaining a comprehensive test library, executing test scenarios to ensure coverage of requirements and regression, and conducting both positive and negative testing. Your involvement in product design reviews will provide valuable insights into functional requirements, product designs, usability considerations, and testing implications. Additionally, you will be responsible for identifying, reporting, and tracking product defects, as well as communicating the need for additional product functionality in a clear and concise manner.

In this role, you will also be required to prepare and review technical documentation for accuracy, completeness, and overall quality. Providing task estimations and ensuring timely delivery against specified schedules will be essential aspects of your responsibilities. Furthermore, your availability outside of standard business hours may be necessary as part of a rotational on-call schedule.

To qualify for this position, you must hold a Bachelor's degree in computer science, engineering, or a related field, or possess equivalent relevant experience. You should have 2-4 years of experience in manual QA and demonstrate a strong understanding of technology, software quality assurance standards, and practices. Your exceptional written and verbal communication skills, along with active listening abilities, will enable you to interact effectively with a diverse range of technical and non-technical personnel.
Proficiency in identifying issues logically, troubleshooting, problem-solving, and predicting defects will be crucial for success in this role. You should be adept at interpreting business requirements and creating test specifications, test plans, and test scenarios. Balancing individual and team efforts in collaborative processes while meeting deadlines will also be necessary.

Experience and skills in AI/ML fundamentals, data quality assessment, model performance testing, familiarity with AI/ML testing tools, bias and fairness testing, and programming skills for AI/ML testing are highly desirable. Proficiency in Python and SQL for creating test scripts, data manipulation, and working with ML libraries will be beneficial. Familiarity with MLOps, model lifecycle testing, structured delivery processes, Agile methods, and various platforms and databases will also be advantageous.

Enjoy a competitive compensation package and comprehensive benefits as part of this role at Netsmart, an Equal Opportunity Employer dedicated to diversity and inclusivity.
Posted 2 weeks ago