Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer – Voice, NLP, and GenAI Systems Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 2–6 years in AI/ML, NLP, or applied machine learning engineering Apply at : careers@darwix.ai Subject Line : Application – AI Engineer – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite— Transform+ , Sherpa.ai , and Store Intel —powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement. Role Overview As an AI Engineer , you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams. Key ResponsibilitiesAI & NLP System Development Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization. Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data. Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations. LLM & Prompt Engineering Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases. Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules. Develop internal tools for model evaluation and prompt performance tracking. Productionization & Integration Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI). Optimize inference time and resource utilization for real-time and batch processing needs. Implement monitoring and logging for production ML systems to track drift and failure cases. Data & Evaluation Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance. Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks. Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency. Collaboration & Research Work closely with product managers to translate business problems into model design requirements. Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI. Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders. Required Skills & Qualifications 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications. Strong programming skills in Python , including libraries like PyTorch, TensorFlow, Hugging Face Transformers. Experience with speech processing , audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines , embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – AI Engineer – [Your Name] (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India’s fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams—Darwix AI wants to hear from you.
Posted 4 weeks ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role name: Automation Test Lead (AI/ML) Years of exp: 5 - 8 yrs About Dailoqa Dailoqa’s mission is to bridge human expertise and artificial intelligence to solve the challenges facing financial services. Our founding team of 20+ international leaders, including former CIOs and senior industry experts, combines extensive technical expertise with decades of real-world experience to create tailored solutions that harness the power of combined intelligence. With a focus on Financial Services clients, we have deep expertise across Risk & Regulations, Retail & Institutional Banking, Capital Markets, and Wealth & Asset Management. Dailoqa has global reach in UK, Europe, Africa, India, ASEAN, and Australia. We integrate AI into business strategies to deliver tangible outcomes and set new standards for the financial services industry. Working at Dailoqa will be hard work, our environment is fluid and fast-moving and you'll be part of a community that values innovation, collaboration, and relentless curiosity. We’re looking at people who: Are proactive, curious adaptable, and patient Shape the company's vision and will have a direct impact on its success. Have the opportunity for fast career growth. Have the opportunity to participate in the upside of an ultra-growth venture. Have fun 🙂 Don’t apply if: You want to work on a single layer of the application. You prefer to work on well-defined problems. You need clear, pre-defined processes. You prefer a relaxed and slow paced environment. Role Overview As an Automation Test Lead at Dailoqa, you’ll architect and implement robust testing frameworks for both software and AI/ML systems. You’ll bridge the gap between traditional QA and AI-specific validation, ensuring seamless integration of automated testing into CI/CD pipelines while addressing unique challenges like model accuracy, GenAI output validation, and ethical AI compliance. Key Responsibilities Test Automation Strategy & Framework Design Design and implement scalable test automation frameworks for frontend (UI/UX), backend APIs, and AI/ML model-serving endpoints using tools like Selenium, Playwright, Postman, or custom Python/Java solutions. Build GenAI-specific test suites for validating prompt outputs, LLM-based chat interfaces, RAG systems, and vector search accuracy. Develop performance testing strategies for AI pipelines (e.g., model inference latency, resource utilization). Continuous Testing & CI/CD Integration Establish and maintain continuous testing pipelines integrated with GitHub Actions, Jenkins, or GitLab CI/CD. Implement shift-left testing by embedding automated checks into development workflows (e.g., unit tests, contract testing). AI/ML Model Validation Collaborate with data scientists to test AI/ML models for accuracy, fairness, stability, and bias mitigation using tools like TensorFlow Model Analysis or MLflow. Validate model drift and retraining pipelines to ensure consistent performance in production. Quality Metrics & Reporting Define and track KPIs: Test coverage (code, data, scenarios) Defect leakage rate Automation ROI (time saved vs. maintenance effort) Model accuracy thresholds Report risks and quality trends to stakeholders in sprint reviews. Drive adoption of AI-specific testing tools (e.g., LangChain for LLM testing, Great Expectations for data validation). Technical Requirements Must-Have 5–8 years in test automation, with 2+ years validating AI/ML systems. Expertise in: Automation tools: Selenium, Playwright, Cypress, REST Assured, Locust/JMeter CI/CD: Jenkins, GitHub Actions, GitLab AI/ML testing: Model validation, drift detection, GenAI output evaluation Languages: Python, Java, or JavaScript Certifications: ISTQB Advanced, CAST, or equivalent. Experience with MLOps tools: MLflow, Kubeflow, TFX Familiarity with vector databases (Pinecone, Milvus) and RAG workflows. Strong programming/scripting experience in JavaScript, Python, Java, or similar Experience with API testing, UI testing, and automated pipelines Understanding of AI/ML model testing, output evaluation, and non-deterministic behavior validation Experience with testing AI chatbots, LLM responses, prompt engineering outcomes, or AI fairness/bias Familiarity with MLOps pipelines and automated validation of model performance in production Exposure to Agile/Scrum methodology and tools like Azure Boards Soft Skills Strong problem-solving skills for balancing speed and quality in fast-paced AI development. Ability to communicate technical risks to non-technical stakeholders. Collaborative mindset to work with cross-functional teams (data scientists, ML engineers, DevOps).
Posted 4 weeks ago
6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Description Job Title : Senior Data Scientist - 6+years Location : Mumbai, India Notice period : Max 45 days' notice period we can consider Must needed Team Handling experience . · Apply advanced machine learning/statistical algorithms scalable to huge data sets to: • Determine the most meaningful ad, served to the right user at the optimal time, and the best price • Identify behaviors, interests and segments of web users across billions of transactions to find the most optimal audience for a given advertising activity • Eliminate suspicious / non-human traffic • Maintenance our products, support customers in analyzing the reasons behind the decisions made by our algorithms and look for new improvements in the process of striving for their excellence · Work closely with other Data Scientists, development, and product teams to implement algorithms into production-level software. Mentor less experienced colleagues. · Contribute to identify opportunities for leveraging company data to drive business solutions · Be an active technical challenger in the team for the purpose of mutual improvement and broadening of team and company horizons · Design solutions and lead cross-functional technical projects from ideation to deployment . · Excellent mathematical and statistical skills (statistical inference), experience in working with large datasets. Knowledge of data pipelines, ETL processes. · Very good knowledge of multiply supervised and unsupervised machine learning techniques with math background and hands-on experience · Great problem-solving and analytical skills. · Ability to structure a large business problem into tractable and reasonable components, design and deploy scalable machine learning solutions · Proficiency in Python, SQL · Experience with big data tools (e.g. Spark, Hadoop) Interested candidate can share there Resume on komal.aher@thesearchfield.com
Posted 4 weeks ago
0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you’re capable of? At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system. To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be. While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported. The mission of the Platform Product Group engineers is to build a trusted, scalable and compliant platform to operate with speed, efficiency and quality. Our teams build and maintain the platforms critical to the existence of Coinbase. There are many teams that make up this group which include Product Foundations (i.e. Identity, Payment, Risk, Proofing & Regulatory, Finhub), Machine Learning, Customer Experience, and Infrastructure. As a Staff Machine Learning Platform Engineer at Coinbase, you will play a pivotal role in building an open financial system. The team builds the foundational components for training and serving ML models at Coinbase. Our platform is used to combat fraud, personalize user experiences, and to analyze blockchains. We are a lean team, so you will get the opportunity to apply your software engineering skills across all aspects of building ML at scale, including stream processing, distributed training, and highly available online services. What you’ll be doing (ie. job duties): Form a deep understanding of our Machine Learning Engineers’ needs and our current capabilities and gaps. Mentor our talented junior engineers on how to build high quality software, and take their skills to the next level. Continually raise our engineering standards to maintain high-availability and low-latency for our ML inference infrastructure that runs both predictive ML models and LLMs. Optimize low latency streaming pipelines to give our ML models the freshest and highest quality data. Evangelize state-of-the-art practices on building high-performance distributed training jobs that process large volumes of data. Build tooling to observe the quality of data going into our models and to detect degradations impacting model performance. What we look for in you (ie. job requirements): 10+ yrs of industry experience as a Software Engineer. You have a strong understanding of distributed systems. You lead by example through high quality code and excellent communication skills. You have a great sense of design, and can bring clarity to complex technical requirements. You treat other engineers as a customer, and have an obsessive focus on delivering them a seamless experience. You have a mastery of the fundamentals, such that you can quickly jump between many varied technologies and still operate at a high level. Nice to Have: Experience building ML models and working with ML systems. Experience working on a platform team, and building developer tooling. Experience with the technologies we use (Python, Golang, Ray, Tecton, Spark, Airflow, Databricks, Snowflake, and DynamoDB). Job #: GPBE06IN *Answers to crypto-related questions may be used to evaluate your onchain experience Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying. Commitment to Equal Opportunity Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here . Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site click here to download a free compatible screen reader (free step by step tutorial can be found here) . Global Data Privacy Notice for Job Candidates and Applicants Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
Posted 4 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Software Architect – Artificial Intelligence Experience: 5+ years Location: Hybrid (India) | Bengaluru, NCR, or Hyderabad The Role Our client is building the infrastructure layer for AI in hospitals. As Software Architect , you’ll lead the technical design and evolution of our AI platform—one that enables reasoning agents, handles population-scale data, and powers clinical workflows with intelligence and speed. This role combines hands-on engineering with strategic influence. You’ll define core abstractions, scale systems across environments, and enable a world-class team of engineers to build on top of the foundation you lay. What You’ll Own Architect scalable AI systems—from data ingestion and orchestration to model inference and observability Design and evolve the platform’s agentic core: integrate models, tools, reasoning engines, and feedback loops Build clean, reusable APIs and frameworks that product and clinical teams can rely on Work across the product lifecycle: from user needs and hospital feedback to iteration and deployment Mentor engineers, enforce high standards, and navigate tough technical trade-offs in a startup environment What You Bring 5+ years leading the design of complex, mission-critical systems at scale Strong experience with LLM-based or agent-driven architectures , especially in secure, compliance-bound environments Proven ability to set technical direction while staying hands-on with architecture, code, and reviews Depth in backend infrastructure: cloud-native systems, data pipelines, deployment workflows, monitoring Excellent communication, decision-making, and mentoring skills A degree in computer science or a related field Bonus: Experience training or fine-tuning custom AI models Who This Is For You’re a system-level thinker who sees complexity as a challenge, not a blocker. You want to build with purpose—and you’re comfortable shaping both the codebase and the engineering culture that defines how it grows. If you’re looking to lead from the front, solve messy real-world problems, and work alongside world-class builders—this is your kind of role. Reach out via DM or write to sophia.d@thecheckmatepartners, aditi.p@thecheckmatepartners.com
Posted 4 weeks ago
0 years
0 Lacs
Madurai, Tamil Nadu, India
On-site
Role : AIML Engineer Location : Madurai/ Chennai Language: Python DBs : SQL Core Libraries: Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet SOTA ML : ML Models, Boosting & Ensemble models etc. Explainability : Shap / Lime Required skills: Deep Learning: PyTorch, PyTorch Forecasting, Data Processing: Pandas, NumPy, Polars (optional), PySpark Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning Deployment & MLOps: Batch & Realtime with API endpoints, MLFlow Serving: TorchServe, Sagemaker endpoints / batch Containerization: Docker Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines AWS Services: SageMaker (Training, Inference, Tuning) S3 (Data Storage) CloudWatch (Monitoring) Lambda (Trigger-based Inference) ECR, ECS or Fargate (Container Hosting)
Posted 4 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the company Were hiring for a deep tech startup at the cutting edge of AI + Quantum Computing , offering a fully integrated stack from superconducting quantum hardware to AI powered software platforms for solving extremely complex business and scientific challenges across industries like drug discovery, logistics, chip design, energy, and climate modeling. The companys 25‑qubit superconducting quantum system is India’s first full stack build, backed by the National Quantum Mission and enterprise partnerships What You'll Do Design and build multi‑agent LLM systems for complex reasoning workflows Design domain specific scaffolds (prompts, datasets) and robust evaluation frameworks to guide AI systems in specialized environments Implement reinforcement learning enhancements (RLHF, DPO, GRPO, SFT) for agent optimization Fine tune and deploy small reasoning models; perform post‑training domain adaptation Engineer scalable training/inference pipelines on multi node GPU clusters with containerized infrastructure Collaborate with product and vertical teams to transition research into real world applications Contribute to publications, open source work, and internal learning across the company Who We're Looking For MS or PhD in CS, ML, AI or a related technical field 4+ years in applied AI research Proficient in Python + PyTorch or JAX, including scaled or custom implementations Practical experience with RL techniques like RLHF, DPO, GRPO, SFT Exposure to chain of thought prompting and dataset creation Demonstrated experience building and deploying multi‑agent systems Experience with distributed infrastructure: Docker/Kubernetes, Ray, Hydra, MLflow Bonus if you have publications (NeurIPS, ICLR, ICML, AAMAS) or domain experience in drug discovery, materials, quantum control, chip design Open source contributions to AI research or tooling Experience optimizing large scale models for inference and deployment. What We Offer Work on world class AI Quantum systems that push scientific and industrial boundaries Access to one of the country's most advanced quantum systems + top tier GPU/HPC infrastructure Ownership and flexibility, define your research direction and release real world impact Competitive package with performance bonus and ESOPs Collaborative, knowledge sharing culture with industry leading experts If you fit the qualifications and are passionate about pushing the frontier of AI + Quantum, we’d love to hear from you! Know someone perfect for this? Tag them or share this post in your network we’re building something extraordinary together
Posted 4 weeks ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. As a Principal Software Engineer for Data, the person will lead the design and implementation of scalable, secure, and high-performance data pipelines across that involve healthcare clinical data, using modern big data and cloud technologies (Azure, Databricks, and Spark), ensuring alignment with UnitedHealth Group’s data governance standards. This role requires a hands-on leader who can write and review code, mentor teams, and collaborate across business and technical stakeholders to drive data strategy and innovation. The person needs to be ready to take up AI and AIOps as part of their work and support the data science teams with ideas and reviews their work. Primary Responsibilities Design and lead the implementation of robust, scalable, and secure data architectures for clinical and healthcare data for batch and real time pipelines Architect end-to-end data pipelines using big data and cloud-native technologies (e.g., Spark, Databricks, Azure Data Factory) Ensure data solutions meet performance, scalability, and compliance requirements, including HIPAA and internal governance policies Build and optimize data ingestion, transformation, and storage pipelines for structured and unstructured clinical data. Guide teams that are doing it and ensure support for incremental data processing Ensure data quality, lineage is embedded in all solutions Lead code reviews, proof-of-concepts, and performance tuning for large-scale data systems Collaborate with data governance teams to ensure adherence to UHG and healthcare data standards, lineage, certification, Data use rights, and data privacy Contribute to the maturity of data governance domains and participate in governance councils and working groups Design, Build and monitor MLOps pipelines, model inference and robust piplelines for running AI operations on data Secondary Responsibilities Mentor data engineers and analysts, fostering a culture of technical excellence and continuous learning Collaborate with product managers, data scientists, and business stakeholders to translate requirements into data solutions Influence architectural decisions across teams and contribute to enterprise-wide data strategy Stay current with emerging technologies in cloud, big data, and AI/ML, and evaluate their applicability to healthcare data Promote the use of generative AI tools (e.g., GitHub Copilot) to enhance development productivity and innovation Drive adoption of DevOps and DataOps practices, including CI/CD, IaC, and automated testing for data pipelines Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications Cloud Platforms: Solid experience with Azure (preferred), AWS, or GCP Experience with designing and managing semantic data elements (metadata, configuration, master data). Come up with automated pipelines to keep them up-to-date from upstream sources Good experience with designing, evolving and reviewing database schema. Experience with schema management for unstructured data, structured data, relational, star schema Data Modelling: Deep understanding of dimensional modeling, canonical models, and healthcare data standards (e.g., HL7, FHIR) DevOps/DataOps: Familiarity with CI/CD, IaC (Terraform, ARM) Data Engineering: Expertise in building ETL/ELT pipelines, data lakes, and real-time streaming architectures using python, scala or other comparable technologies Big Data Technologies: Proficient in Apache Spark, Databricks, Delta Lake, and distributed data processing Programming: Proficiency in Python, SQL, and optionally Scala or Java Proven track record of designing and delivering large-scale data solutions in cloud environments Proven solid leadership, communication, and stakeholder management skills Proven ability to mentor and influence across teams and levels Proven strategic thinker with a passion for data-driven innovation Proven ability to get into details whenever required and spend time in understanding and solving problems Preferred Qualifications 10+ years of experience in data architecture, data engineering, or related roles, with a focus on healthcare or clinical data Experience with healthcare data interoperability standards (FHIR, HL7, CCD) Familiarity with MLOps and integrating data pipelines with ML workflows Contributions to open-source projects or publications in data architecture or healthcare analytics At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 4 weeks ago
3.0 years
0 Lacs
Karnal, Haryana, India
Remote
🏢 Company Description Live Eye Surveillance is a U.S.-focused AI surveillance and remote monitoring company, headquartered in Seattle with technology operations in India. We specialize in real-time, proactive security solutions for retail, QSRs, warehouses, and commercial spaces. Our in-house AI-powered Video Management Software (VMS) integrates advanced IP camera systems, intelligent video analytics, live audio deterrence, and 24/7 human monitoring to deter crime before it happens. With enterprise-grade infrastructure and a commitment to reducing shrinkage, liability, and manpower costs, Live Eye is redefining modern surveillance for multi-location businesses. ⸻ 💼 Role: AI/ML Lead – Facial Recognition & Video Intelligence Location: Karnal, Haryana Employment Type: Full-Time Experience: 3+ Years in AI/ML, Computer Vision, and Team Leadership ⸻ 🧠 Role Description We are looking for an AI/ML Lead to spearhead the development of cutting-edge Facial Recognition, Object Detection, and Video Intelligence features for our proprietary VMS platform. You will lead the research, development, and optimization of AI models deployed across real-time IP camera feeds. You’ll also manage a small team of AI/ML engineers and work closely with our backend, frontend, and mobile teams to build scalable, production-ready AI modules that directly impact global security operations. ⸻ 🧪 Responsibilities • Build and deploy production-grade models for facial recognition, object/person detection, and activity recognition • Optimize AI pipelines for real-time performance and edge device compatibility • Lead, mentor, and manage a small AI/ML engineering team • Collaborate with product managers, software engineers, and cloud architects to integrate AI modules into the VMS platform • Stay on top of the latest developments in deep learning and computer vision research • Ensure model accuracy, efficiency, and scalability across diverse real-world environments ⸻ 🔧 Required Qualifications • 3+ years of hands-on experience in Machine Learning and Deep Learning (preferably in Computer Vision) • Proficiency in Python, TensorFlow/PyTorch, OpenCV, and relevant DL libraries • Strong background in building models for facial recognition (FaceNet, ArcFace, etc.) and object detection (YOLO, SSD, Faster R-CNN) • Experience with ONNX, TensorRT, or other optimization tools for edge inference • Experience integrating AI models with IP camera feeds (RTSP/ONVIF protocols preferred) • Solid understanding of data preprocessing, model evaluation, and tuning • Strong communication, problem-solving, and team leadership skills • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field ⸻ 🌐 Nice to Have • Experience with Jetson Nano/Xavier, Edge TPUs, or other AI hardware • Background in surveillance systems, security platforms, or VMS architecture • Familiarity with cloud platforms like AWS, Azure, or GCP • Experience with Docker, Git, and CI/CD for deploying ML models ⸻ 🌟 What We Offer • Leadership opportunity in a rapidly growing AI security tech company • Hands-on role in building core IP for our next-gen surveillance platform • Flexible hybrid work setup with direct impact on global deployments • Competitive compensation and opportunity for fast growth ⸻ 📩 To apply: Send your resume to careers@myliveeye.com 🌐 Visit: www.myliveeye.com
Posted 4 weeks ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role Overview We are hiring a Technical Lead – AI Security to join our CISO team in Mumbai. This is a critical, hands-on role — ensuring the trustworthiness, resilience, and compliance of AI/ML systems, including large language models (LLMs). You will work at the intersection of cybersecurity and AI, shaping secure testing, understanding secure MLOps/LLMOps workflows, and leading technical implementation of defenses against emerging AI threats. This role requires both strategic vision and strong engineering depth. Key Responsibilities · Lead and operationalize the AI/ML and LLM security roadmap across training, validation, deployment, and runtime to enable AI Security Platform Approach. · Design and implement defenses against threats like adversarial attacks, data poisoning, model inversion, prompt injection, and fine-tuning exploits using industry leading open source and commercial tools. · Build hardened workflows for model security, integrity verification, and auditability in production AI environments. · Leverage AI security tools for scanning, fuzzing, and penetration testing models. · Apply best practices from OWASP Top 10 for ML/LLMs, MITRE ATLAS, NIST AI RMF, and ISO/IEC 42001 to test AI/ML assets. · Ensure AI model security testing framework aligns with internal policy, national regulatory requirements, and global best practices. · Plan and execute security tests for AI/LLM systems, including jailbreaking, RAG hardening, and bias/toxicity validation. Required Skills & Experience · 8+ years in cybersecurity, with at least 3+ years hands-on in AI/ML security or secure MLOps/LLMOps · Proficient in Python, TensorFlow/PyTorch, HuggingFace, LangChain, and common data science libraries · Deep understanding of adversarial ML/LLM, model evaluation under threat conditions, and inference/training-time attack vectors · Experience securing cloud-based AI workloads (AWS, Azure, or GCP) · Familiarity with secure DevOps and CI/CD practices · Strong understanding of AI-specific threat models (MITRE ATLAS) and security benchmarks (OWASP Top 10 for ML/LLMs) · Ability to communicate technical risk clearly to non-technical stakeholders · Ability to guide developers and data scientists to solve the AI Security risks. · Certifications: CISSP, OSCP, GCP ML Security, or relevant AI/ML certificates · Experience with AI security tools or platforms (e.g., model registries, lineage tracking, policy enforcement) · Experience with RAG, LLM-based agents, or agentic workflows · Experience in regulated sectors (finance, public sector)
Posted 4 weeks ago
5.0 years
0 Lacs
India
Remote
AdZeta is a B2B technology company that leverages AI-powered smart bidding technology to drive high LTV and profitability for D2C e-commerce brands. We turn first-party data into predictive, value-based bidding and personalised customer journeys. To accelerate our roadmap, we’re hiring a Senior Data Engineer to architect the data pipelines, services, and admin tools that power it all. What You’ll Own Secure Data Ingestion Design & implement high-throughput, encrypted pipelines that pull data from Shopify, GA4, CRMs, and ad platforms. Enforce token rotation, rate limiting, and schema validation. Data Storage Foundations Stand up the initial data-lake / warehouse layer (MySQL + object storage; moving to BigQuery or Snowflake). Define partitioning, indexing, and lifecycle policies for multi-TB datasets. Prediction API Build REST/JSON endpoints that expose LTV and propensities from our ML models. Optimise for low-latency inference and auto-scale under load. Admin Panel Development Ship the first-generation admin portal in PHP + MySQL (Laravel or similar) Implement RBAC, audit logging, and health dashboards for internal teams. Infrastructure & DevOps Provision and harden servers (AWS or GCP) using Terraform / CloudFormation. Own CI/CD, container orchestration (Docker, ECS or Kubernetes), monitoring (Grafana/Prometheus), and incident response run-books. Required Skills & Experience 5+ years building scalable backend systems (PHP, Python, or Node preferred). Strong database chops—MySQL/PostgreSQL schema design, query optimisation, and replication. Experience with message queues / streaming (Kafka, Pub/Sub, or SQS). Comfortable in cloud infrastructure (AWS or GCP), IaC (Terraform, Pulumi), and containerisation. Solid understanding of security best practices: TLS, IAM, secrets management, OWASP. Proven track record integrating third-party APIs and handling large data volumes. Bonus: exposure to ML inference pipelines, Looker/BI tools, or server-side tagging. Why AdZeta Remote-first & async-friendly culture with flexible PTO. Ownership: competitive salary + equity option pool. Annual learning stipend for certs, conferences, or AI experimentation. Direct line of sight to C-suite; your insights shape product and go-to-market road-maps. We move fast, targeting a two-week turnaround from application to offer.
Posted 4 weeks ago
2.0 - 6.0 years
1 - 3 Lacs
Hyderābād
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
2.0 - 6.0 years
1 - 3 Lacs
Gurgaon
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
4.0 - 10.0 years
5 - 10 Lacs
Noida
On-site
Lead Assistant Manager EXL/LAM/1412764 Data And Analytics ServicesNoida Posted On 05 Jul 2025 End Date 19 Aug 2025 Required Experience 4 - 10 Years Basic Section Number Of Positions 4 Band B2 Band Name Lead Assistant Manager Cost Code D014377 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 1500000.0000 - 2400000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Healthcare Organization Data And Analytics Services LOB Analytics SBU Services Country India City Noida Center Noida-SEZ BPO Solutions Skills Skill GCP GCP/AWS/CI - CD/DEVOPS AI PYTHON SQL Minimum Qualification B.TECH/B.E MCA Certification No data available Job Description Cloud AI Engineer We're looking for a highly skilled and experienced Cloud AI Engineer to join our dynamic team. In this role, you'll be instrumental in designing, developing, and deploying cutting-edge artificial intelligence and machine learning solutions leveraging the full suite of Google Cloud Platform (GCP) services. Objectives of this role Lead the end-to-end development cycle of AI applications, from conceptualization and prototyping to deployment and optimization, with a core focus on LLM-driven solutions. Architect and implement highly performant and scalable AI services, effectively integrating with GCP's comprehensive AI/ML ecosystem. Collaborate closely with product managers, data scientists, and MLOps engineers to translate complex business requirements into tangible, AI-powered features. Continuously research and apply the latest advancements in LLM technology, prompt engineering, and AI frameworks to enhance application capabilities and performance. ## Responsibilities Develop and deploy production-grade AI applications and microservices primarily using Python and FastAPI, ensuring robust API design, security, and scalability. Design and implement end-to-end LLM pipelines, encompassing data ingestion, processing, model inference, and output generation. Utilize Google Cloud Platform (GCP) services extensively, including Vertex AI (Generative AI, Model Garden, Workbench), Cloud Functions, Cloud Run, Cloud Storage, and BigQuery, to build, train, and deploy LLMs and AI models. Expertly apply prompt engineering techniques and strategies to optimize LLM responses, manage context windows, and reduce hallucinations. Implement and manage embeddings and vector stores for efficient information retrieval and Retrieval-Augmented Generation (RAG) patterns. Work with advanced LLM orchestration frameworks such as LangChain, LangGraph, Google ADK, and CrewAI to build sophisticated multi-agent systems and complex AI workflows. Integrate AI solutions with other enterprise systems and databases, ensuring seamless data flow and interoperability. Participate in code reviews, establish best practices for AI application development, and contribute to a culture of technical excellence. Keep abreast of the latest advancements in GCP AI/ML services and broader AI/ML technologies, evaluating and recommending new tools and approaches. ## Required skills and qualifications Two or more years of hands-on experience as an AI Engineer with a focus on building and deploying AI applications, particularly those involving Large Language Models (LLMs). Strong programming proficiency in Python, with significant experience in developing web APIs using FastAPI. Demonstrable expertise with Google Cloud Platform (GCP), specifically with services like Vertex AI (Generative AI, AI Platform), Cloud Run/Functions, and Cloud Storage. Proven experience in prompt engineering, including advanced techniques like few-shot learning, chain-of-thought prompting, and instruction tuning. Practical knowledge and application of embeddings and vector stores for semantic search and RAG architectures. Hands-on experience with at least one major LLM orchestration framework (e.g., LangChain, LangGraph, CrewAI). Solid understanding of software engineering principles, including API design, data structures, algorithms, and testing methodologies. Experience with version control systems (Git) and CI/CD pipelines. Preferred skills and qualifications Bachelor’s or Master's degree in Computer Science Good to have: Experience with MLOps practices for deploying, monitoring, and maintaining AI models in production. Understanding of distributed computing and data processing technologies. Contributions to open-source AI projects or a strong portfolio showcasing relevant AI/LLM applications. Excellent analytical and problem-solving skills with a keen attention to detail. Strong communication and interpersonal skills, with the ability to explain complex technical concepts to non-technical stakeholders. Workflow Workflow Type L&S-DA-Consulting
Posted 1 month ago
2.0 - 6.0 years
1 - 3 Lacs
Ahmedabad
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role: As our Agentic System Architect, you will define and own the end-to-end architecture of our Python-based autonomous agent platform. Leveraging cutting-edge frameworks—LangChain, LangGraph, RAG pipelines, and more—you’ll ensure our multi-agent workflows are resilient, scalable, and aligned with business objectives Key Responsibilities Architectural Strategy & Standards Define system topology: microservices, agent clusters, RAG retrieval layers, and knowledge-graph integrations. Establish architectural patterns for chain-based vs. graph-based vs. retrieval-augmented workflows. Component & Interface Design Specify Python modules for LLM integration, RAG connectors (Haystack, LlamaIndex), vector store adapters, and policy engines. Design REST/gRPC and message-queue interfaces compatible with Kafka/RabbitMQ, Semantic Kernel, and external APIs. Scalability & Reliability Architect auto-scaling of Python agents on Kubernetes/EKS (including GPU-enabled inference pods). Define fault-tolerance patterns (circuit breakers, retries, bulkheads) and lead chaos-testing of agent clusters. Security & Governance Embed authentication/authorization in agent flows (OIDC, OAuth2) and secure data retrieval (encrypted vector stores). Implement governance: prompt auditing, model-version control, drift detection, and usage quotas. Performance & Cost Optimization Specify profiling/tracing requirements (OpenTelemetry in Python) across chain, graph, and RAG pipelines. Architect caching layers and GPU/CPU resource policies to minimize inference latency and cost. Cross-Functional Leadership Collaborate with AI research, DevOps, and product teams to align roadmaps with strategic goals. Review and enforce best practices in Python code, CI/CD (GitHub Actions), and IaC (Terraform). 7. Documentation & Evangelism Produce architecture diagrams, decision records, and runbooks illustrating agentic designs (ReAct, CoT, RAG). Mentor engineers on agentic patterns—chain-of-thought, graph traversals, retrieval loops—and Python best practices. Preferred Qualifications Bachelor’s Degree in Computer Science, Information Technology, or related fields (e.g., B.Tech, B.E., B.Sc. in Computer Science) Preferred/Ideal Educational Qualification: Master’s Degree (optional but highly valued) in one of the following: M.Tech or M.E. in Computer Science / AI / Data Science M.Sc. in Artificial Intelligence or Machine Learning Integrated M.Tech programs in AI/ML from top-tier institutions like IITs, IIIT-H, IISc Bonus or Value-Add Qualifications: Ph.D. or Research Experience in NLP, Information Retrieval, or Agentic AI (especially relevant if applying to R&D-heavy teams like Microsoft Research, TCS Research, or AI startups) Certifications or online credentials in: LangChain, RAG architectures (DeepLearning.AI, Cohere, etc.) Advanced Python (Coursera/edX/Springboard/NPTEL) Cloud-based ML operations (AWS/Azure/GCP) Additional Skill Set: Hands-on with agentic frameworks: LangChain, LangGraph, Microsoft AutoGen Experience building RAG pipelines with Haystack, LlamaIndex, or custom retrieval modules Familiarity with vector databases (FAISS, Pinecone, Chroma) and knowledge-graph stores (Neo4j) Expertise in observability stacks (Prometheus, Grafana, OpenTelemetry) Background in LLM SDKs (OpenAI, Anthropic) and function-calling paradigms Core Skills & Competencies System Thinking: Decompose complex business goals into modular, maintainable components Python Mastery: Idiomatic Python, async/await, package management (Poetry/venv) Distributed Design: Microservices, agent clusters, RAG retrieval loops, event streams Security-First: Embed authentication, authorization, and auditabilitys Leadership: Communicate complex system designs clearly to both technical and non-technical stakeholders We are looking for someone with a proven track record in leveraging cuing-edge agentic frameworks and protocols. This includes hands-on experience with technologies such as Agent-to-Agent (A2A) communication protocols, LangGraph, LangChain, CrewAI, and other similar multi-agent orchestration tools. Your expertise will be crucial in transforming traditional, reactive AI applications into proactive, goal-driven intelligent agents that can signicantly enhance operational eciency, decision-making, and customer engagement in high-stakes domains. We envision this role as instrumental in driving innovation, translating cuing-edge academic research into deployable solutions, and contributing to the development of robust, scalable, and ethical AI agentic systems.
Posted 1 month ago
0 years
0 Lacs
Bengaluru North, Karnataka, India
Remote
Job Description GalaxEye Space, is a deep-tech Space start-up spun off from IIT-Madras and is currently based in Bengaluru, Karnataka. We are dedicated to advancing the frontiers of space exploration. Our mission is to develop cutting-edge solutions that address the challenges of the modern space industry by specialising in developing a constellation of miniaturised, multi-sensor SAR+EO satellites. Our new age technology enables all-time, all-weather imaging, this with leveraging advanced processing and AI capabilities, we ensure near real-time data delivery and are glad to highlight that we have successfully demonstrated these imaging capabilities, the first of its kind in the world, across various platforms such as Drones as well as HAPS (High-Altitude Pseudo Satellites). Responsibilities Architect and maintain the build pipeline that converts R&D Python notebooks into immutable, versioned executables and libraries Optimize the Python codes for extracting maximum GPU performance Define and enforce coding standards, branching strategy, semantic release tags, and artifact-signing process Lead a team of full-stack developers to integrate Python inference services with the React-Electron UI via gRPC/REST contracts Stand-up and maintain an offline replica environment (VM or bare- metal) that mirrors the forward-deployed system; gate releases through this environment in CI Own automated test suites: unit, contract, regression, performance, and security scanning Coordinate multi-iteration hand-offs with forward engineers; triage returned diffs, merge approved changes, and publish patched releases Mentor the team, conduct code & design reviews, and drivecontinuous-delivery best practices in an air-gap-constrained context Requirements 5+ yrs in software engineering with at least 2 yrs technical-lead experience Deep Python expertise (packaging, virtualenv/venv, dependency pinning) and solid JavaScript/TypeScript skills for React-Electron CI/CD mastery (GitHub Actions, Jenkins, GitLab CI) with artifact repositories (Artifactory/Nexus) and infrastructure-as-code (Packer, Terraform, Ansible) Strong grasp of cryptographic signing, checksum verification, and secure supply-chain principles Experience releasing software to constrained or disconnected environments Additional Skills Knowledge of containerization (Docker/Podman) and offline image distribution Prior work on remote-sensing or geospatial analytics products Benefits Acquire valuable opportunities for learning and development through close collaboration with the founding team. Contribute to impactful projects and initiatives that drive meaningful change. We provide a competitive salary package that aligns with your expertise and experience. Enjoy comprehensive health benefits, including medical, dental, and vision coverage, ensuring the well-being of you and your family. Work in a dynamic and innovative environment alongside a dedicated and passionate team. check(event) ; career-website-detail-template-2 => apply(record.id,meta)" mousedown="lyte-button => check(event)" final-style="background-color:#5BBD6E;border-color:#5BBD6E;color:white;" final-class="lyte-button lyteBackgroundColorBtn lyteSuccess" lyte-rendered="">
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About The Role Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s In It For You Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good To Have Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
Data Engineer Gurgaon, India; Ahmedabad, India; Hyderabad, India; Virtual, Gurgaon, India Information Technology 317426 Job Description About The Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
0.0 - 6.0 years
0 Lacs
Gurugram, Haryana
On-site
About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317426 Posted On: 2025-07-06 Location: Gurgaon, Haryana, India
Posted 1 month ago
5.0 years
1 - 2 Lacs
Bengaluru, Karnataka, India
On-site
Experience Level : 3–5 Years Location : Bengaluru, India About The Role We are looking for a driven and experienced Machine Learning Engineer to join our team and help push the boundaries of what’s possible with Large Language Models (LLMs) and intelligent agents. This is a hands-on role for someone with a strong background in LLM tooling, evaluation, and data engineering, and a deep appreciation for building reusable, scalable, and open solutions. You’ll work across the stack—from agent and tool design, model evaluation, and dataset construction to serving infrastructure and fine-tuning pipelines. We're especially excited about candidates who have made meaningful contributions to open-source LLM/AI infrastructure and want to build foundational systems used by others across the ecosystem. Key Responsibilities Design, build, and iterate on LLM-powered agents and tools, from prototypes to production. Develop robust evaluation frameworks, benchmark suites, and tools to systematically test LLM behaviors. Construct custom evaluation datasets, both synthetic and real-world, to validate model outputs at scale. Build scalable, production-grade data pipelines using Apache Spark or similar frameworks. Work on fine-tuning and training workflows for open-source and proprietary LLMs. Integrate and optimize inference using platforms like vLLM, llama.cpp, and related systems. Contribute to the development of applications, emphasizing composability, traceability, and modularity. Actively participate in and contribute to open-source projects within the LLM/agent ecosystem. Requirements Must-Have Skills: 3–5 years of experience in machine learning, with a strong focus on LLMs, agent design, or tool building. Demonstrable experience building LLM-based agents, including tool usage, planning, and memory systems. Proficiency in designing and implementing evaluation frameworks, metrics, and pipelines. Strong data engineering background, with hands-on experience in Apache Spark, Airflow, or similar tools. Familiarity with serving and inference systems like vLLM, llama.cpp, or TensorRT-LLM. Deep understanding of building componentized ML systems. Open-Source Contributions Proven track record of contributing to open-source repositories related to LLMs, agent frameworks, evaluation tools, or model training. Experience maintaining your own open-source libraries or tooling is a major plus. Strong Git/GitHub practices, code documentation, and collaborative PR workflows. You’ll be expected to build tools, frameworks, or agents that may be released back to the community when possible. Nice-to-Have Familiarity with LLM orchestration frameworks like LangChain, CrewAI/AutoGen, Haystack, or DSPy. Experience training or fine-tuning models using LoRA, PEFT, or full-scale distributed training. Experience deploying LLM applications at scale in cloud or containerized environments (e.g., AWS, Kubernetes, Docker). What We Offer The opportunity to work on state-of-the-art LLM and agent technologies. Encouragement and support for open-source contributions as part of your day-to-day. A fast-paced, collaborative, and research-focused environment. Influence over architectural decisions in a rapidly evolving space. To Apply Please submit your resume and links to your GitHub, open-source projects, or public technical writing (blog posts, talks, etc.)
Posted 1 month ago
0 years
0 Lacs
India
Remote
If you’re looking to gain real work experience and learn industry-leading workflows from senior software engineers at Microsoft, Google, Meta, and top unicorns, while working hands-on with production-scale projects, TechX’s Engineering Apprenticeship is for you. About TechX TechX bridges academic theory and industry practice. Our mission is to give you verified work experience, not just certificates, by embedding you in live codebases alongside ex-FAANG tech leads and senior architects. What You’ll Work On Own one focus area from design through deployment: Large-Scale Web Architecture Architect fault-tolerant, highly available systems Build efficient data pipelines and caching layers Tune performance under real user load Implement monitoring, logging, and alerting LLM Engineering Ingest and preprocess massive datasets Build and fine-tune transformer models Develop GPU/TPU training and inference workflows Deploy scalable inference endpoints with autoscaling Data Science & MLOps Craft end-to-end analytics pipelines (wrangling → modeling → viz) Train and validate ML models in production Set up CI/CD for data and model versioning Monitor model drift, performance, and costs How We Work Agile Scrum Meetings: Participate in sprint planning, daily stand-ups, and retrospectives with your mentor “team.” Hands-On Development: Push code, review PRs, and ship features in live repos. One-on-One Mentorship: Weekly pairing sessions with senior engineers who’ve shipped at scale. Code Reviews & Feedback: Get actionable guidance on design, code quality, testing, and CI/CD pipelines. Who Should Apply Recent CS/Engineering grads or career-changers craving more work experience Proficient in at least one backend language (C#, Java, Go, Python, etc.) Solid grasp of data structures, algorithms, and networking Self-motivated, able to commit ≈20 hrs/week Ready to learn FAANG-style best practices and workflows Why This Apprenticeship? Real Work Experience: Gain work experience that you can list on your resume. Industry Connections: Direct referrals and introductions to partner hiring teams. Ongoing Support: Program continues until you secure a full-time engineering role. Career Coaching: Built-in mock interviews, resume reviews, and job-search strategy. Program Details Type: Educational apprenticeship (not employment; no wages or benefits) Location: 100% Remote Duration: Until placement in a full-time role (average 3–6 months) Commitment: ≈20 hrs/week Spaces are limited, apply today to start writing code that matters and fast-track your engineering career!
Posted 1 month ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description JOB RESPONSIBILITY : Collaborate with cross-functional teams, including data scientists and product managers, to acquire, process, and manage data for AI/ML model integration and optimization. Design and implement robust, scalable, and enterprise-grade data pipelines to support state-of-the-art AI/ML models. Debug, optimize, and enhance machine learning models, ensuring quality assurance and performance improvements. Operate container orchestration platforms like Kubernetes, with advanced configurations and service mesh implementations, for scalable ML workload deployments. Design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engage in advanced prompt engineering and fine-tuning of large language models (LLMs), focusing on semantic retrieval and chatbot development. Document model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Research and implement cutting-edge LLM optimization techniques, such as quantization and knowledge distillation, ensuring efficient model performance and reduced computational costs. Collaborate closely with stakeholders to develop innovative and effective natural language processing solutions, specializing in text classification, sentiment analysis, and topic modeling. Stay up-to-date with industry trends and advancements in AI technologies, integrating new methodologies and frameworks to continually enhance the AI engineering function. Contribute to creating specialized AI solutions in healthcare, leveraging domain-specific knowledge for task adaptation and : Minimum education: Bachelors degree in any Engineering Stream Specialized training, certifications, and/or other special requirements: Nice to have Preferred education: Computer : Minimum relevant experience - 4+ years in AI AND COMPETENCIES Skills : Advanced proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) Extensive experience with LLM frameworks (Hugging Face Transformers, LangChain) and prompt engineering techniques Experience with big data processing using Spark for large-scale data analytics Version control and experiment tracking using Git and MLflow Software Engineering & Development: Advanced proficiency in Python, familiarity with Go or Rust, expertise in microservices, test-driven development, and concurrency processing. DevOps & Infrastructure: Experience with Infrastructure as Code (Terraform, CloudFormation), CI/CD pipelines (GitHub Actions, Jenkins), and container orchestration (Kubernetes) with Helm and service mesh implementations. LLM Infrastructure & Deployment: Proficiency in LLM serving platforms such as vLLM and FastAPI, model quantization techniques, and vector database management. MLOps & Deployment: Utilization of containerization strategies for ML workloads, experience with model serving tools like TorchServe or TF Serving, and automated model retraining. Cloud & Infrastructure: Strong grasp of advanced cloud services (AWS, GCP, Azure) and network security for ML systems. LLM Project Experience: Expertise in developing chatbots, recommendation systems, translation services, and optimizing LLMs for performance and security. General Skills: Python, SQL, knowledge of machine learning frameworks (Hugging Face, TensorFlow, PyTorch), and experience with cloud platforms like AWS or GCP. Experience in creating LLD for the provided architecture. Experience working in microservices based Expertise : Strong mathematical foundation in statistics, probability, linear algebra, and optimization Deep understanding of ML and LLM development lifecycle, including fine-tuning and evaluation Expertise in feature engineering, embedding optimization, and dimensionality reduction Advanced knowledge of A/B testing, experimental design, and statistical hypothesis testing Experience with RAG systems, vector databases, and semantic search implementation Proficiency in LLM optimization techniques including quantization and knowledge distillation Understanding of MLOps practices for model deployment and Competencies : Strong analytical thinking with ability to solve complex ML challenges Excellent communication skills for presenting technical findings to diverse audiences Experience translating business requirements into data science solutions Project management skills for coordinating ML experiments and deployments Strong collaboration abilities for working with cross-functional teams Dedication to staying current with latest ML research and best practices Ability to mentor and share knowledge with team members (ref:hirist.tech)
Posted 1 month ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
39817 Jobs | Dublin
Wipro
19388 Jobs | Bengaluru
Accenture in India
15458 Jobs | Dublin 2
EY
14907 Jobs | London
Uplers
11185 Jobs | Ahmedabad
Amazon
10459 Jobs | Seattle,WA
IBM
9256 Jobs | Armonk
Oracle
9226 Jobs | Redwood City
Accenture services Pvt Ltd
7971 Jobs |
Capgemini
7704 Jobs | Paris,France