
2043 Inference Jobs - Page 29

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

6.0 years

30 - 35 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Experience: 6.00+ years
Salary: INR 3,000,000-3,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Hybrid (Pune)
Placement Type: Full-time permanent position (payroll and compliance to be managed by Daxa, Inc)
(Note: This is a requirement for one of Uplers' clients, Daxa, Inc.)

What do you need for this opportunity?
Must-have skills: CI/CD, Database Testing, Docker, Kubernetes, Cloud, Gen AI, Postman, Automation Testing, SaaS, Manual Testing

Daxa, Inc is looking for:

Responsibilities
Design, develop, and execute comprehensive test strategies for our AI governance SaaS platform, including functional, integration, regression, and performance testing.
Test cloud-native GenAI applications, ensuring proper integration with enterprise data and AI security policies.
Validate governance policies, access controls, and data compliance workflows.
Automate tests using modern test automation frameworks suitable for SaaS environments (e.g., Cypress, Playwright, Selenium, or similar).
Perform end-to-end testing of AI workflows, from data ingestion to model retrieval and interaction.
Collaborate closely with the development team to understand features, use cases, and potential vulnerabilities.
Create and maintain detailed test cases, bug reports, and QA documentation.
Participate in CI/CD pipeline integration for continuous testing and deployment.
Contribute to building a scalable, secure, and compliant QA environment for ongoing product evolution.

Required Skills and Qualifications
6+ years of experience in software testing, preferably in cloud-native SaaS or AI/ML platforms.
Strong understanding of GenAI applications and common testing challenges around prompt execution, inference outputs, and compliance.
Experience working with cloud-native applications and GenAI workflows.
Experience with cloud platforms (AWS, Azure, or GCP) and containerized applications (Docker, Kubernetes).
Familiarity with API testing tools like Postman or RestAssured, and database testing with MySQL, PostgreSQL, and MongoDB.
Hands-on experience with test automation frameworks.
Exposure to CI/CD pipelines and integration with tools like Jenkins, GitHub Actions, or GitLab CI.
Understanding of governance, data lineage, access control, and enterprise security principles is a strong plus.

Engagement Model: Direct placement with the client. This is a hybrid role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
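
For a flavor of the API-level test automation this kind of role describes, here is a minimal, illustrative sketch using pytest with Python's requests library (a code-level alternative to Postman). The endpoint path, payload, and response fields are hypothetical and not taken from the posting.

```python
# Minimal pytest + requests sketch of API-level governance testing.
# The endpoint, payload, and policy fields below are hypothetical.
import os
import requests

BASE_URL = os.environ.get("QA_BASE_URL", "https://api.example-governance-saas.com")
API_TOKEN = os.environ.get("QA_API_TOKEN", "test-token")


def test_policy_violation_is_flagged():
    """A prompt referencing restricted data should be blocked by governance rules."""
    resp = requests.post(
        f"{BASE_URL}/v1/inference",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"prompt": "Show me customer SSNs", "policy_id": "pii-restricted"},
        timeout=30,
    )
    assert resp.status_code == 200
    body = resp.json()
    # The platform is expected to refuse the request and record a policy violation.
    assert body["blocked"] is True
    assert "pii" in body["violated_policies"]
```

Tests like this would typically run inside the CI/CD pipeline (Jenkins, GitHub Actions, or GitLab CI) on every deployment.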

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Audria is a voice-first personal computing device worn discreetly behind the ear. It is designed to be proactive: it understands the user's conversations and provides context-aware personal AI assistance. We're building something fundamentally new, a privacy-first, voice-first, intelligent system that lives on your device and adapts to you. It requires a blend of machine learning, systems programming, and iOS craftsmanship.

Role: iOS Engineer
Salary: 12 LPA - 24 LPA fixed (based on experience)
Location: Noida (on-site), 5 days a week
Joining date: As soon as possible - July

What We're Looking For
Strong iOS fundamentals, especially with Swift, SwiftUI, and modern concurrency (e.g., async/await).
Deep comfort working with AVAudioEngine, Core ML, Core Audio, and low-level iOS APIs.
Deep knowledge of real-time, low-latency pipelines involving audio, ML inference, and system integration.
Familiarity with the on-device ML ecosystem: Core ML, MLX, Metal Performance Shaders (MPS), or any iOS-compatible LLM inference libraries (such as llama.cpp or whisper.cpp).
Appreciation for systems-level thinking: performance, battery, memory, privacy, background execution, and the constraints of mobile hardware.
A self-directed builder: able to define their own roadmap, evaluate tradeoffs, and optimize for user experience without needing a spec.

Nice to Have
Experience working on voice-first or ambient interfaces.
Past work with transformers, tokenizers, or LoRA-style adapter architectures.
Knowledge of audio classification, VAD, or speech recognition, especially in constrained environments.
Contributions to open-source ML or mobile infrastructure projects.
Experience integrating or working with C/C++ code in iOS projects (via bridging, SPM, or other methods).

The main trait we look for when hiring is how fast you can learn new things and execute, as this field is moving fast (Apple Intelligence Foundation Models Framework). We are looking for people who keep up with the latest research and are curious to execute on something promising and challenging.
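
As a rough illustration of the on-device ML workflow this stack implies, the sketch below exports a toy PyTorch model to Core ML with coremltools. The model architecture, input shape, and file name are placeholders, not part of the posting.

```python
# Minimal sketch: exporting a toy PyTorch audio classifier to Core ML for
# on-device inference. The model and input shape are placeholders.
import torch
import coremltools as ct

# Toy stand-in for a small on-device audio model (e.g., a VAD or wake-word head).
model = torch.nn.Sequential(
    torch.nn.Linear(16000, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
).eval()

example = torch.randn(1, 16000)           # one second of 16 kHz audio, flattened
traced = torch.jit.trace(model, example)  # TorchScript graph for conversion

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="audio", shape=example.shape)],
    convert_to="mlprogram",               # ML Program backend for modern iOS
)
mlmodel.save("TinyAudioClassifier.mlpackage")
```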

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Key Responsibilities
Design and develop advanced generative models including Diffusion Models, GANs, 3D VAEs, and autoregressive models for AI-powered media synthesis.
Build and optimize end-to-end content generation pipelines for high-fidelity image, video, and lip-sync generation.
Develop multimodal AI systems that integrate speech, video, and facial animation for hyper-realistic content output.
Implement and fine-tune models for real-time performance, using techniques such as model quantization, pruning, and distillation.
Collaborate with engineering teams to deploy AI models on scalable cloud platforms (AWS, GCP, Azure).
Conduct rigorous experimentation to improve model accuracy, realism, and computational efficiency.
Stay updated with the latest research in deep generative models, transformer-based vision systems, and multimodal AI.
Participate in code reviews, maintain high standards of code quality, and document key findings to support internal knowledge sharing.

Requirements
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
Minimum 3 years of hands-on experience working with deep generative models (Diffusion Models, GANs, VAEs).
Strong proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
Proven experience in text-to-image, image-to-video, and audio-to-lip-sync model development.
Solid understanding of machine learning principles, statistical modeling, and neural network architectures.
Familiarity with real-time inference optimization and deployment on cloud environments (AWS, GCP, Azure).
Experience working with multimodal architectures and transformer-based models like CLIP, BLIP, or GPT-4V.
Contributions to open-source projects or peer-reviewed publications in generative AI, speech synthesis, or video generation are a plus.

Key Skills
PyTorch, AWS, Google Cloud Platform (GCP), Azure
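
As an illustration of one inference-optimization technique the posting names, here is a small sketch of post-training dynamic quantization in PyTorch applied to a toy stand-in model; the architecture and shapes are placeholders.

```python
# Illustrative sketch: post-training dynamic quantization of a toy generator.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder network standing in for a much larger generative model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 3 * 64 * 64))

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

model = TinyGenerator().eval()

# Quantize Linear layers to int8 weights; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    fake_image = quantized(torch.randn(1, 128))
print(fake_image.shape)  # torch.Size([1, 3, 64, 64])
```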

Posted 1 month ago

Apply

1.0 - 4.0 years

0 Lacs

Kolkata metropolitan area, West Bengal, India

On-site

Position Overview: We are seeking a talented and motivated AI Engineer with a strong focus on Computer Vision to join our team. The ideal candidate should have a passion for solving complex problems in the field of machine learning and computer vision, as well as the technical expertise required to build and deploy AI solutions. This role is perfect for individuals with 1-4 years of relevant experience, strong programming skills in Python, and a solid understanding of computer vision frameworks and libraries.

Key Responsibilities:
- Design, develop, and deploy computer vision models to solve real-world problems.
- Implement machine learning algorithms for image processing, object detection, segmentation, and recognition tasks.
- Optimize and fine-tune models using frameworks like PyTorch and TensorFlow.
- Collaborate with cross-functional teams to understand requirements and deliver efficient AI-driven solutions.
- Analyze large-scale datasets to derive meaningful insights and improve model performance.
- Create and maintain robust, scalable code for production-level AI systems.
- Research and stay updated with the latest trends and advancements in computer vision and AI.

Required Skills and Qualifications:
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience: 1-4 years of hands-on experience in computer vision and machine learning.
- Programming Skills: Proficient in Python, with a strong understanding of object-oriented programming and scripting.
- Frameworks: Expertise in using PyTorch and TensorFlow for model development and training.
- Libraries: Proficient in popular Python libraries such as OpenCV-python for computer vision tasks, Pillow for image processing, and scikit-learn for machine learning and data preprocessing.
- Mathematics: Strong foundation in mathematical concepts related to machine learning and computer vision, including linear algebra, probability and statistics, and calculus.
- Computer Vision Algorithms: Practical experience implementing and working with algorithms for object detection, image classification, object segmentation, activity recognition (optional but preferred), and keypoint detection (optional but preferred).
- Familiarity with image annotation tools and dataset preparation techniques.
- Experience in version control systems like Git.

Preferred Qualifications:
- Knowledge of deploying models on edge devices or cloud platforms.
- Familiarity with deep learning architectures like CNNs, RNNs, or GANs.
- Experience with optimization techniques for model inference speed and accuracy.
- Contributions to open-source projects in computer vision or machine learning.

If you are passionate about applying AI to solve challenging computer vision problems and have the skills required to excel in this role, we encourage you to apply. Join us in building the next generation of intelligent systems!
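
For context, a minimal object-detection inference sketch of the kind this role builds on, using OpenCV with a pretrained torchvision model (torchvision is one common choice alongside the frameworks listed above). The image path is a placeholder.

```python
# Minimal sketch of pretrained object-detection inference with torchvision and OpenCV.
import cv2
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image_bgr = cv2.imread("sample.jpg")                      # placeholder image
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0

with torch.no_grad():
    detections = model([tensor])[0]

# Keep confident detections and draw their bounding boxes.
for box, score in zip(detections["boxes"], detections["scores"]):
    if score > 0.8:
        x1, y1, x2, y2 = map(int, box.tolist())
        cv2.rectangle(image_bgr, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", image_bgr)
```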

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Title: AI Infrastructure Engineer
Experience: 8+ years
Location: Onsite (Note: The selected candidate is required to relocate to Kovilpatti, Tamil Nadu for the initial three-month project training session. Post training, the candidate will be relocated to one of our onsite locations, Chennai, Hyderabad, or Pune, based on project allocation.)

Job Summary: We are looking for an experienced AI Infrastructure Engineer to architect and manage scalable, secure, and high-performance infrastructure tailored for enterprise AI and ML applications. The ideal candidate will collaborate with data scientists, DevOps, and cybersecurity teams to build reliable platforms for efficient model development, training, and deployment.

Key Responsibilities:
Design and implement end-to-end AI infrastructure using cloud-native tools (Azure, AWS, GCP).
Build secure and scalable compute environments with GPU/TPU acceleration for model training and inference.
Develop and maintain CI/CD and MLOps pipelines for the AI/ML lifecycle.
Optimize large-scale AI workloads using distributed computing and hardware-aware strategies.
Manage containerized deployments using orchestration platforms like Kubernetes (AKS, EKS, GKE) and Docker.
Ensure system reliability, monitoring, observability, and performance tuning for real-time inference services.
Implement automated rollback, logging, and infrastructure monitoring tools.
Collaborate with cybersecurity teams to enforce security, data privacy, and regulatory compliance.

Technical Skills:
Cloud Platforms: Azure Machine Learning, AWS SageMaker, GCP Vertex AI
Infrastructure-as-Code: Terraform, ARM Templates, Bicep
Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE)
MLOps Tools: MLflow, Kubeflow, Azure DevOps, GitHub Actions
GPU/TPU Acceleration: CUDA, NVIDIA Triton Inference Server
Security & Compliance: TLS, IAM, RBAC, Azure Key Vault
Performance: Endpoint scaling, latency optimization, model caching, and resource allocation

Qualifications:
Bachelor's or Master's in Computer Engineering, Cloud Architecture, or a related field
Microsoft Certified: Azure Solutions Architect or DevOps Engineer Expert (preferred)
Proven experience deploying and managing large-scale ML pipelines and AI workloads
Strong understanding of infrastructure security, networking, and cloud-based AI environments

Job Type: Full-time
Pay: Up to ₹80,000.00 per month
Ability to commute/relocate: Kovilpatti, Tamil Nadu: Reliably commute or willing to relocate with an employer-provided relocation package (Required)
Application Question(s): Expected Salary in Annual (INR)
Experience: AI Infrastructure Engineer: 8 years (Required)
Work Location: In person
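
As a small illustration of the experiment tracking that MLOps pipelines like the ones listed above typically automate, here is a hedged MLflow sketch; the experiment name, model, and metrics are placeholders.

```python
# Illustrative MLflow tracking sketch: log params, metrics, and a model artifact.
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-inference-service")
with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # artifact a deployment pipeline could pick up
```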

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Job Title: AI Infrastructure Engineer
Experience: 8+ years
Location: The selected candidate is required to work onsite at our Chennai location for the initial six-month project training and execution period. After the six months, the candidate will be offered remote opportunities.

Job Summary: We are looking for an experienced AI Infrastructure Engineer to architect and manage scalable, secure, and high-performance infrastructure tailored for enterprise AI and ML applications. The ideal candidate will collaborate with data scientists, DevOps, and cybersecurity teams to build reliable platforms for efficient model development, training, and deployment.

Key Responsibilities:
Design and implement end-to-end AI infrastructure using cloud-native tools (Azure, AWS, GCP).
Build secure and scalable compute environments with GPU/TPU acceleration for model training and inference.
Develop and maintain CI/CD and MLOps pipelines for the AI/ML lifecycle.
Optimize large-scale AI workloads using distributed computing and hardware-aware strategies.
Manage containerized deployments using orchestration platforms like Kubernetes (AKS, EKS, GKE) and Docker.
Ensure system reliability, monitoring, observability, and performance tuning for real-time inference services.
Implement automated rollback, logging, and infrastructure monitoring tools.
Collaborate with cybersecurity teams to enforce security, data privacy, and regulatory compliance.

Technical Skills:
Cloud Platforms: Azure Machine Learning, AWS SageMaker, GCP Vertex AI
Infrastructure-as-Code: Terraform, ARM Templates, Bicep
Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE)
MLOps Tools: MLflow, Kubeflow, Azure DevOps, GitHub Actions
GPU/TPU Acceleration: CUDA, NVIDIA Triton Inference Server
Security & Compliance: TLS, IAM, RBAC, Azure Key Vault
Performance: Endpoint scaling, latency optimization, model caching, and resource allocation

Qualifications:
Bachelor's or Master's in Computer Engineering, Cloud Architecture, or a related field
Microsoft Certified: Azure Solutions Architect or DevOps Engineer Expert (preferred)
Proven experience deploying and managing large-scale ML pipelines and AI workloads
Strong understanding of infrastructure security, networking, and cloud-based AI environments

Job Type: Full-time
Pay: Up to ₹80,000.00 per month
Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Expected Salary in Annual (INR)
Experience: AI Infrastructure Engineer: 8 years (Required)
Work Location: In person

Posted 1 month ago

Apply

9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job: Product Manager
Location: Bangalore / Pune
Experience: 9 to 15 years

We are looking for a strategic and technically savvy Product Manager to lead AI-First initiatives within our AINext Platform team. This role focuses on creating scalable, impactful AI solutions by partnering closely with data scientists, engineers, and stakeholders to productize cutting-edge machine learning models and AI capabilities. As the champion of AI-first thinking, you will define and drive the roadmap for AI-powered features, infrastructure, and tools that power intelligent applications across the organization or for external customers.

Responsibilities
Own the product strategy for AI-first capabilities and features across the applied AI platform.
Translate AI research and prototypes into production-ready products by working closely with research, data science, MLOps, and engineering teams.
Define requirements for AI services, APIs, and infrastructure needed to support enterprise-scale AI use cases.
Collaborate with UX, engineering, and stakeholders to prioritize AI use cases that provide real business value and measurable outcomes.
Establish KPIs to measure the impact and performance of AI-driven features, and continuously optimize based on data.
Evangelize an "AI-first" mindset across product and business units, helping teams adopt AI as a native capability in their products.
Stay on top of the latest AI and ML trends, including foundation models, generative AI, and MLOps best practices.
Drive experimentation and model validation pipelines, ensuring reliability, fairness, and explainability in deployed models.
Work with responsible AI and compliance teams to ensure all AI initiatives align with ethical, privacy, and regulatory requirements.

Qualifications
9+ years of product management experience, preferably in AI/ML or platform products. We have multiple roles requiring higher levels of experience.
Strong understanding of machine learning, data science, and modern AI architectures, including LLMs and generative AI.
Experience working with or managing AI/ML platforms, such as feature stores, model registries, MLOps pipelines, or inference services.
Demonstrated ability to translate complex technical concepts into clear product requirements and business value.
Excellent cross-functional communication skills, with experience working across engineering, design, research, and business teams.
Familiarity with cloud platforms (AWS, GCP, Azure) and AI infrastructure tools (e.g., MLflow, Kubeflow, Vertex AI, Databricks, etc.).
Experience working in an Agile or Lean product development environment.
Experience working with LLMs, embeddings, RAG pipelines, or generative AI tools (e.g., OpenAI, Hugging Face, LangChain).
Prior work on B2B SaaS or internal platform products.

Preferred Qualifications: Technical background (Engineering or Data Science degree preferred); MBA preferred.

What You'll Gain: The opportunity to shape the future of AI products at an organization embracing AI-first transformation. A collaborative, forward-thinking environment with access to cutting-edge AI technologies.

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Kovilpatti, Tamil Nadu

On-site

Location: Onsite * Note : The selected candidate is required to relocate to Kovilpatti, Tamil Nadu for the initial three-month project training session . Post training, the candidate will be relocated to one of our onsite locations: Chennai, Hyderabad, or Pune , based on project allocation. Job Description The Senior AI Developer will be responsible for designing, building, training, and deploying advanced artificial intelligence and machine learning models to solve complex business challenges across industries. This role demands a strategic thinker and hands-on practitioner who can work at the intersection of data science, software engineering, and innovation. The candidate will contribute to scalable production-grade AI pipelines and mentor junior AI engineers within the Center of Excellence (CoE). Key responsibilities: · Design, train, and fine-tune deep learning models (NLP, CV, LLMs, GANs) for high-value applications · Architect AI model pipelines and implement scalable inference engines in cloud-native environments · Collaborate with data scientists, engineers, and solution architects to productionize ML prototypes · Evaluate and integrate pre-trained models like GPT-4o, Gemini, Claude, and fine-tune based on domain needs · Optimize algorithms for real-time performance, efficiency, and fairness · Write modular, maintainable code and perform rigorous unit testing and validation · Contribute to AI codebase management, CI/CD, and automated retraining infrastructure · Research emerging AI trends and propose innovative applications aligned with business objectives Technical Skills: · Expert in Python, PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers · LLM deployment & tuning: OpenAI (GPT), Google Gemini, Claude, Falcon, Mistral · Experience with RESTful APIs, Flask/FastAPI for AI service exposure · Proficient in Azure Machine Learning, Databricks, MLflow, Docker, Kubernetes · Hands-on experience with vector databases, prompt engineering, and retrieval-augmented generation (RAG) · Knowledge of Responsible AI frameworks (bias detection, fairness, explainability) Qualification · Master’s in Artificial Intelligence, Machine Learning, Data Science, or Computer Engineering · Certifications in AI/ML (e.g., Microsoft Azure AI Engineer, Google Professional ML Engineer) preferred · Demonstrated success in building scalable AI applications in production environments · Publications or contributions to open-source AI/ML projects are a plus. Job Type: Full-time Pay: Up to ₹80,000.00 per month Location Type: In-person Ability to commute/relocate: Kovilpatti, Tamil Nadu: Reliably commute or willing to relocate with an employer-provided relocation package (Required) Application Question(s): Expected Salary in Annual (INR)? Experience: AI Developer: 8 years (Required) Work Location: In person
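
As a rough illustration of the "RESTful APIs, Flask/FastAPI for AI service exposure" responsibility listed above, here is a minimal FastAPI sketch; the model, schema, and endpoint are placeholders rather than the employer's actual service.

```python
# Minimal, illustrative FastAPI service exposing a model prediction endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="demo-inference-service")

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def fake_model(text: str) -> tuple[str, float]:
    """Stand-in for a real fine-tuned model loaded at startup."""
    return ("positive" if "good" in text.lower() else "negative", 0.87)

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    label, score = fake_model(req.text)
    return PredictResponse(label=label, score=score)

# Run locally with: uvicorn app:app --reload
```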

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Title : Senior AI Developer Experience : 8+ Years *Location: The selected candidate is required to work onsite at our Chennai location for the initial six-month project training and execution period. After the six months , the candidate will be offered remote opportunities.* Job Description The Senior AI Developer will be responsible for designing, building, training, and deploying advanced artificial intelligence and machine learning models to solve complex business challenges across industries. This role demands a strategic thinker and hands-on practitioner who can work at the intersection of data science, software engineering, and innovation. The candidate will contribute to scalable production-grade AI pipelines and mentor junior AI engineers within the Center of Excellence (CoE). Key responsibilities: Design, train, and fine-tune deep learning models (NLP, CV, LLMs, GANs) for high-value applications Architect AI model pipelines and implement scalable inference engines in cloud-native environments Collaborate with data scientists, engineers, and solution architects to productionize ML prototypes Evaluate and integrate pre-trained models like GPT-4o, Gemini, Claude, and fine-tune based on domain needs Optimize algorithms for real-time performance, efficiency, and fairness Write modular, maintainable code and perform rigorous unit testing and validation Contribute to AI codebase management, CI/CD, and automated retraining infrastructure Research emerging AI trends and propose innovative applications aligned with business objectives Technical Skills: Expert in Python, PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers LLM deployment & tuning: OpenAI (GPT), Google Gemini, Claude, Falcon, Mistral Experience with RESTful APIs, Flask/FastAPI for AI service exposure Proficient in Azure Machine Learning, Databricks, MLflow, Docker, Kubernetes Hands-on experience with vector databases, prompt engineering, and retrieval-augmented generation (RAG) Knowledge of Responsible AI frameworks (bias detection, fairness, explainability) Qualification Master’s in Artificial Intelligence, Machine Learning, Data Science, or Computer Engineering Certifications in AI/ML (e.g., Microsoft Azure AI Engineer, Google Professional ML Engineer) preferred Demonstrated success in building scalable AI applications in production environments Publications or contributions to open-source AI/ML projects are a plus. Job Type: Full-time Pay: Up to ₹80,000.00 per month Location Type: In-person Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Expected Salary in Annual (INR)? Experience: AI Developer: 8 years (Required) Work Location: In person

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description
Echoleads.ai leverages AI-powered sales agents to engage, qualify, and convert leads through real-time voice conversations. Our voice bots act as scalable sales representatives, making thousands of smart, human-like calls daily to follow up instantly, ask the right questions, and book appointments effortlessly. Echoleads integrates seamlessly with lead sources like Meta Ads, Google Ads, and CRMs, ensuring leads are never missed. Serving modern sales and marketing teams across various industries, our AI agents proficiently handle outreach, lead qualification, and appointment setting.

About the Role:
We are seeking a highly experienced Voice AI/ML Engineer to lead the design and deployment of real-time voice intelligence systems. This role focuses on ASR, TTS, speaker diarization, wake word detection, and building production-grade modular audio processing pipelines to power next-generation contact center solutions, intelligent voice agents, and telecom-grade audio systems. You will work at the intersection of deep learning, streaming infrastructure, and speech/NLP technology, creating scalable, low-latency systems across diverse audio formats and real-world applications.

Key Responsibilities:

Voice & Audio Intelligence:
Build, fine-tune, and deploy ASR models (e.g., Whisper, wav2vec 2.0, Conformer) for real-time transcription.
Develop and fine-tune high-quality TTS systems using VITS, Tacotron, or FastSpeech for lifelike voice generation and cloning.
Implement speaker diarization for segmenting and identifying speakers in multi-party conversations using embeddings (x-vectors/d-vectors) and clustering (AHC, VBx, spectral clustering).
Design robust wake word detection models with ultra-low latency and high accuracy in noisy conditions.

Real-Time Audio Streaming & Voice Agent Infrastructure:
Architect bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
Integrate voice AI models into live voice agent solutions, IVR automation, and AI contact center platforms.
Optimize for latency, concurrency, and continuous audio streaming with context buffering and voice activity detection (VAD).
Build scalable microservices to process, decode, encode, and stream audio across common codecs (e.g., PCM, Opus, μ-law, AAC, MP3) and containers (e.g., WAV, MP4).

Deep Learning & NLP Architecture:
Utilize transformers, encoder-decoder models, GANs, VAEs, and diffusion models for speech and language tasks.
Implement end-to-end pipelines including text normalization, G2P mapping, NLP intent extraction, and emotion/prosody control.
Fine-tune pre-trained language models for integration with voice-based user interfaces.

Modular System Development:
Build reusable, plug-and-play modules for ASR, TTS, diarization, codecs, streaming inference, and data augmentation.
Design APIs and interfaces for orchestrating voice tasks across multi-stage pipelines with format conversions and buffering.
Develop performance benchmarks and optimize for CPU/GPU, memory footprint, and real-time constraints.

Engineering & Deployment:
Write robust, modular, and efficient Python code.
Work with Docker, Kubernetes, and cloud deployment (AWS, Azure, GCP).
Optimize models for real-time inference using ONNX, TorchScript, and CUDA, including quantization, context-aware inference, and model caching.
Deploy voice models on device.
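
For a flavor of the ASR building block this role starts from, here is a tiny offline transcription sketch using the open-source Whisper package named above; the file path and model size are placeholders, and a production system would add streaming, VAD, and diarization around this call.

```python
# Tiny, illustrative ASR sketch with the open-source openai-whisper package.
import whisper

model = whisper.load_model("base")               # small checkpoint for a quick test
result = model.transcribe("call_recording.wav")  # placeholder audio file

print(result["text"])
for segment in result["segments"]:
    print(f'{segment["start"]:.1f}s - {segment["end"]:.1f}s: {segment["text"]}')
```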

Posted 1 month ago

Apply

0.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Designation: Senior Analyst – Data Science
Level: L2
Experience: 3 to 6 years
Location: Chennai

Job Description: We are seeking a highly skilled and motivated Senior Analyst with 3-6 years of experience in Data Science to join our growing team.

Responsibilities:
Perform analyses on products to answer open-ended questions and provide strategic recommendations.
Design and guide experiments and analyses to measure impact and drive product improvements.
Develop and maintain key metrics and reports, enhancing data infrastructure for better analysis.

Skills:
At least a BA/BS in a quantitative field (e.g., Math, Statistics, Physics, or Computer Science) with 2+ years of relevant experience.
Key skill sets: SQL, Python, BI, Statistics, A/B testing, Data Science, Machine Learning.
Experience driving impact for a digital product with an iterative development cycle.
Understanding of statistical concepts and practical experience applying them (in A/B testing, causal inference, ML, etc.).
Experience in data analysis using SQL.
Experience in programming/modeling in Python.
Demonstration of our core cultural values: clear communication, positive energy, continuous learning, and efficient execution.

Job Snapshot
Updated Date: 02-07-2025
Job ID: J_3824
Location: Chennai, Tamil Nadu, India
Experience: 3 - 6 Years
Employee Type: Permanent
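
As a small illustration of the A/B-testing skill listed above, here is a two-proportion z-test readout with statsmodels; the conversion counts are made up for the example.

```python
# Illustrative A/B test readout: two-proportion z-test on conversion counts.
from statsmodels.stats.proportion import proportions_ztest

conversions = [530, 590]    # control, variant (made-up numbers)
exposures = [10000, 10000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift: {lift:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the variant's conversion rate differs from control.
```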

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. We are seeking a skilled and motivated AI/ML Engineer with 3-5 years of experience to join our team. The ideal candidate will have hands-on expertise in building and deploying AI/ML solutions on the Azure platform, with a solid focus on Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) systems, and Azure ML Studio. You will play a key role in designing intelligent systems, deploying scalable models, and integrating advanced AI capabilities into enterprise applications. Primary Responsibilities AI/ML Development & Deployment: Design, develop, and deploy machine learning models using Azure ML Studio and Azure Machine Learning services Build and fine-tune LLM-based solutions for enterprise use cases Develop and implement RAG pipelines using Azure services and vector databases Deploy and monitor AI/ML models in production environments ensuring scalability and performance Azure Platform Engineering: Leverage Azure services such as Azure Data Lake, Azure Synapse, Azure Blob Storage, and Azure Cognitive Search for data ingestion and processing Integrate AI models with Azure-based data pipelines and APIs Use Azure DevOps for CI/CD of ML workflows and model versioning Data Engineering & Processing: Build and maintain ETL/ELT pipelines for structured and unstructured data using Databricks and Apache Spark Prepare and transform data for training and inference using Python, PySpark and SQL LLM & RAG System Implementation: Implement LLM-based agents and chatbots using frameworks like Langchain Design and optimize RAG architectures for domain-specific knowledge retrieval Work with vector databases (e.g., Azure Cognitive Search, FAISS) for embedding-based search Collaboration & Innovation: Collaborate with data scientists, product managers, and engineers to deliver AI-driven features Stay current with advancements in generative AI, LLMs, and Azure AI services Contribute to the continuous improvement of AI/ML pipelines and best practices Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so 3+ years of hands-on experience in AI/ML engineering with a focus on Azure Proven experience in deploying ML models using Azure ML Studio and Azure Machine Learning Experience working with LLMs, RAG systems, and AI agents Experience with Databricks, Apache Spark, and Azure Data services Knowledge of Azure DevOps and CI/CD for ML workflows Understanding of data governance and security in cloud environments Familiarity with MLOps practices and model monitoring tools Familiarity with vector databases and embedding models Proficiency in Python, SQL, and PySpark Proven solid analytical and problem-solving skills Proven effective communication and collaboration with cross-functional teams Proven ability to translate business requirements into technical solutions At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
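
For context on the RAG work described above, here is a minimal sketch of the embedding-search step using FAISS, which the posting lists alongside Azure Cognitive Search; the embeddings are random placeholders standing in for a real text-embedding model.

```python
# Minimal sketch of the vector-search step in a RAG pipeline using FAISS.
import faiss
import numpy as np

dim = 384                                                     # embedding dimension (placeholder)
doc_embeddings = np.random.rand(1000, dim).astype("float32")  # stand-in document corpus
query_embedding = np.random.rand(1, dim).astype("float32")

index = faiss.IndexFlatL2(dim)     # exact L2 search; fine for small corpora
index.add(doc_embeddings)

distances, doc_ids = index.search(query_embedding, k=5)
print("nearest document ids:", doc_ids[0])
# The retrieved documents would then be added to the LLM prompt as grounding context.
```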

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who We Are Zinnia is the leading technology platform for accelerating life and annuities growth. With innovative enterprise solutions and data insights, Zinnia simplifies the experience of buying, selling, and administering insurance products. All of which enables more people to protect their financial futures. Our success is driven by a commitment to three core values: be bold, team up, deliver value – and that we do. Zinnia has over $180 billion in assets under administration, serves 100+ carrier clients, 2500 distributors and partners, and over 2 million policyholders. Who You Are We are seeking a highly motivated Senior Data Analyst with strong technical expertise, business acumen, and strategic problem-solving abilities . In this role, you will independently own and drive analytics initiatives within the Operations team , translating data into actionable insights that improve efficiency, decision-making, and key business KPIs. You will work closely with stakeholders across Operations, Product, Data Engineering, and Business Strategy to identify opportunities for process optimization, automate decision-making, and create scalable analytics frameworks. This is a high-impact individual contributor role that requires both deep analytical skills and the ability to influence business strategy through data. What You’ll Do Drive analytics strategy: Independently own and drive key analytics initiatives in Operations, proactively identifying areas for efficiency improvements and cost optimization. Advanced analytics & measurement: Move beyond basic dashboards and leverage inferential modeling, causal analysis, and experimental design to generate actionable insights. Experimentation & testing: Design and implement A/B tests to measure the impact of operational improvements, optimizing key processes such as fraud detection, customer interactions, and compliance. Operational KPIs & business impact: Develop frameworks to measure Turnaround Time (TAT), Cost Per Transaction, SLA adherence, and other key operational metrics, ensuring data-driven decision-making. Data storytelling & visualization: Translate complex data insights into clear, actionable recommendations using visual storytelling techniques in Power BI and other visualization tools. Cross-functional collaboration: Work closely with stakeholders across Operations, Data Engineering, and Product to align analytics initiatives with business needs. Scalability & automation: Partner with Data Engineering to enhance data pipelines, data models, and automation efforts that improve efficiency and reduce manual work. Thought leadership & best practices: Drive data analysis best practices and mentor junior analysts, fostering a culture of analytical rigor and excellence. What You’ll Need 5+ years of experience in data analytics, with a focus on Operations, Business Strategy, or Process Optimization. Expertise in SQL, Python and with a strong ability to work with relational cloud databases (Redshift, BigQuery, Snowflake) and unstructured datasets. Experience designing A/B tests and experimentation frameworks to drive operational improvements. Strong statistical knowledge, including regression analysis, time-series forecasting, and causal inference modeling. Experience in operations analytics such as workforce efficiency, process optimization, risk modeling, and compliance analytics. Hands-on experience with data visualization tools (Power BI, Tableau, Looker) and the ability to present insights effectively to leadership. 
Ability to work independently, take ownership of projects, and influence business decisions through data-driven recommendations. Strong problem-solving skills and a proactive mindset to identify business opportunities using data Bonus Points If You Have Experience with ML/AI applications in operational efficiency (e.g., anomaly detection, predictive modeling, workforce automation). Familiarity with event-tracking frameworks and behavioral analytics. Strong data storytelling skills—can translate complex data into concise, compelling narratives. Prior experience in a fast-paced, high-growth environment with a focus on scaling data analytics. WHAT’S IN IT FOR YOU? At Zinnia, you collaborate with smart, creative professionals who are dedicated to delivering cutting-edge technologies, deeper data insights, and enhanced services to transform how insurance is done. Visit our website at www.zinnia.com for more information. Apply by completing the online application on the careers section of our website. We are an Equal Opportunity employer committed to a diverse workforce. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability.
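
As a rough illustration of the impact-measurement work described above, here is a hedged sketch that estimates the effect of an operational change with a simple regression (a treatment indicator plus one covariate); the data are simulated purely for the example.

```python
# Illustrative sketch: estimating an operational change's effect on turnaround time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),        # 1 = new process, 0 = old process
    "case_complexity": rng.normal(0, 1, n),  # covariate to adjust for
})
# Simulated turnaround time (hours): the new process saves ~1.5 hours on average.
df["tat_hours"] = 10 - 1.5 * df["treated"] + 2 * df["case_complexity"] + rng.normal(0, 1, n)

model = smf.ols("tat_hours ~ treated + case_complexity", data=df).fit()
print(model.params["treated"], model.pvalues["treated"])  # estimated effect and p-value
```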

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Role As a Data Scientist, you will contribute to developing and deploying innovative AI and machine learning solutions that transform insurance operations through intelligent automation. You'll work on sophisticated models for Fraud, Waste & Abuse (FWA) detection, Intelligent Document Processing (IDP), and Agentic AI systems that power autonomous insurance workflows including underwriting, claims processing, and policy servicing. This role offers excellent opportunities for growth and learning - you'll contribute to core product development, support customer implementations, and assist with proof-of-concept development under the guidance of senior team members. Working with our advanced tech stack including LLM fine-tuning, computer vision, and multi-agent systems, you'll solve complex problems that directly impact how insurance companies operate and serve their customers. As we scale from successful startup to enterprise-level positioning, you'll develop your expertise in production machine learning, learn best practices from experienced practitioners, and contribute meaningfully to AI solutions that maintain the highest standards of accuracy, reliability, and business impact. If you're passionate about applying data science to real-world insurance challenges and want to grow your career while seeing your models transform business processes, this role offers excellent opportunities for professional development. Model Development & Implementation Develop and implement machine learning models for fraud detection, risk assessment, and anomaly detection using established algorithms and frameworks under senior guidance Build computer vision and NLP models for intelligent document processing, including document classification, OCR optimization, and automated information extraction Implement and fine-tune Large Language Models (LLMs) for insurance-specific applications, applying techniques such as LoRA, QLoRA, and PEFT to create domain-specialized solutions Develop predictive models for underwriting, claims processing, and medical risk assessment that support enhanced decision-making and operational efficiency Perform feature engineering and exploratory data analysis to identify patterns and insights that inform model development and business understanding AI Agent Development & Support Contribute to the development of autonomous AI agents for handling insurance workflows with appropriate supervision and guidance Implement agent components and workflows that support multi-agent coordination and decision-making processes Build conversational AI capabilities that can interact with users and systems to gather information and provide recommendations Support the development of intelligent decision-making systems that adapt to varying business requirements and regulatory constraints Assist with the implementation of agent monitoring, evaluation, and improvement processes Data Processing & Analysis Build and maintain data pipelines for model training, validation, and inference using modern data engineering tools and practices Implement data quality checks and validation processes that ensure model inputs meet accuracy and consistency requirements Perform comprehensive data analysis to understand data characteristics, identify quality issues, and inform model development decisions Create data visualizations and analytical reports that communicate insights to technical and business stakeholders Work with diverse data sources including structured insurance data, unstructured documents, and external data 
feeds Model Deployment & Monitoring Support Support model deployment processes from development through production, working with MLOps teams to ensure smooth transitions Implement model monitoring and performance tracking capabilities that detect issues and support continuous improvement Contribute to A/B testing and model evaluation frameworks that enable data-driven decisions about model performance Develop model documentation and explainability features that support business understanding and regulatory compliance Participate in model maintenance and improvement activities based on production performance and business feedback Collaboration & Learning Work closely with senior data scientists, solution architects, and software engineers to deliver high-quality technical solutions Support customer-facing activities by adapting models for specific client needs and contributing to implementation success Contribute to proof-of-concept development and technical demonstrations that Showcase Our Capabilities To Prospective Customers Participate in code reviews, technical discussions, and knowledge sharing activities that support team learning and best practices Stay current with emerging trends and techniques in AI/ML, particularly in areas relevant to insurance and autonomous agents Qualifications & Experience Academic Qualifications Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, Engineering, or related quantitative field from a recognized institution Coursework or training in machine learning, statistics, or data analysis preferred Professional Experience 3+ years of hands-on experience in data science, machine learning, or related analytical roles with demonstrated ability to develop and deploy ML models Experience with machine learning project lifecycle from data exploration through model deployment Background in applying statistical methods and machine learning algorithms to solve business problems Exposure to production ML systems, model deployment, or MLOps practices Preferred Insurance industry exposure preferred, with understanding of claims processing, underwriting, medical risk assessment, or fraud detection workflows Technical Skills Strong proficiency in Python with experience using Scikit-Learn, Pandas, NumPy, and Jupyter for data science workflows Experience with machine learning frameworks such as TensorFlow, PyTorch, or similar libraries for model development Basic to intermediate knowledge of LLM fine-tuning, natural language processing, or computer vision techniques. Understanding of embeddings, TF-IDF, and linguistic modelling. 
Familiarity with OCR technologies, document processing, or computer vision libraries such as OpenCV Experience with GenAI frameworks including LangChain, Hugging Face Transformers, or vector databases Knowledge of data manipulation and analysis using SQL, big data tools, or distributed computing frameworks Understanding of MLOps concepts, version control systems, and basic cloud platform usage Communication & Collaboration Skills Strong analytical and problem-solving skills with ability to approach complex business problems systematically Good communication abilities for presenting technical findings to team members and stakeholders Collaborative mindset with willingness to learn from senior team members and contribute to team success Attention to detail and commitment to producing high-quality, reliable analytical work Enthusiasm for learning new technologies and staying current with developments in AI/ML and insurance technology How We Get Things Done At our core, we believe in building intelligent systems that transform how insurance works. We're a team of innovators who combine deep technical expertise with real-world business impact. Our guiding principles center around technical excellence, data-driven decision making, and the responsible development of AI systems that enhance human capabilities. We foster a culture of continuous learning, experimentation, and cross-functional collaboration. As we scale from startup to enterprise, we maintain our commitment to cutting-edge research while building solutions that can handle the most demanding insurance workflows globally. Championing Inclusion & Innovation We embrace the opportunity to build a team that reflects diverse perspectives and experiences. Being an equal opportunity employer means we consider qualified candidates based on merit, technical capabilities, and cultural fit, regardless of background, identity, or personal characteristics. We're committed to creating an inclusive environment where innovative thinking thrives and where every team member can contribute to our mission of transforming insurance through intelligent automation. Skills: ml,processing,python,sql,pandas,pytorch,insurance,numpy,ocr technologies,data,machine learning,data science,learning,genai frameworks,scikit-learn,tensorflow,computer vision,data analysis,natural language processing,mlops,jupyter
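
As a brief illustration of the LoRA-style fine-tuning technique this posting names, here is a minimal PEFT sketch; the base model and target modules are illustrative choices, not the employer's actual configuration.

```python
# Minimal sketch: attaching LoRA adapters to a causal LM with the PEFT library.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "gpt2"  # small public model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

lora_config = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPT-2; varies per architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
# From here, training proceeds with a standard Trainer / training loop on domain data.
```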

Posted 1 month ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role As a Data Scientist, you will contribute to developing and deploying innovative AI and machine learning solutions that transform insurance operations through intelligent automation. You'll work on sophisticated models for Fraud, Waste & Abuse (FWA) detection, Intelligent Document Processing (IDP), and Agentic AI systems that power autonomous insurance workflows including underwriting, claims processing, and policy servicing. This role offers excellent opportunities for growth and learning - you'll contribute to core product development, support customer implementations, and assist with proof-of-concept development under the guidance of senior team members. Working with our advanced tech stack including LLM fine-tuning, computer vision, and multi-agent systems, you'll solve complex problems that directly impact how insurance companies operate and serve their customers. As we scale from successful startup to enterprise-level positioning, you'll develop your expertise in production machine learning, learn best practices from experienced practitioners, and contribute meaningfully to AI solutions that maintain the highest standards of accuracy, reliability, and business impact. If you're passionate about applying data science to real-world insurance challenges and want to grow your career while seeing your models transform business processes, this role offers excellent opportunities for professional development. Model Development & Implementation Develop and implement machine learning models for fraud detection, risk assessment, and anomaly detection using established algorithms and frameworks under senior guidance Build computer vision and NLP models for intelligent document processing, including document classification, OCR optimization, and automated information extraction Implement and fine-tune Large Language Models (LLMs) for insurance-specific applications, applying techniques such as LoRA, QLoRA, and PEFT to create domain-specialized solutions Develop predictive models for underwriting, claims processing, and medical risk assessment that support enhanced decision-making and operational efficiency Perform feature engineering and exploratory data analysis to identify patterns and insights that inform model development and business understanding AI Agent Development & Support Contribute to the development of autonomous AI agents for handling insurance workflows with appropriate supervision and guidance Implement agent components and workflows that support multi-agent coordination and decision-making processes Build conversational AI capabilities that can interact with users and systems to gather information and provide recommendations Support the development of intelligent decision-making systems that adapt to varying business requirements and regulatory constraints Assist with the implementation of agent monitoring, evaluation, and improvement processes Data Processing & Analysis Build and maintain data pipelines for model training, validation, and inference using modern data engineering tools and practices Implement data quality checks and validation processes that ensure model inputs meet accuracy and consistency requirements Perform comprehensive data analysis to understand data characteristics, identify quality issues, and inform model development decisions Create data visualizations and analytical reports that communicate insights to technical and business stakeholders Work with diverse data sources including structured insurance data, unstructured documents, and external data 
feeds Model Deployment & Monitoring Support Support model deployment processes from development through production, working with MLOps teams to ensure smooth transitions Implement model monitoring and performance tracking capabilities that detect issues and support continuous improvement Contribute to A/B testing and model evaluation frameworks that enable data-driven decisions about model performance Develop model documentation and explainability features that support business understanding and regulatory compliance Participate in model maintenance and improvement activities based on production performance and business feedback Collaboration & Learning Work closely with senior data scientists, solution architects, and software engineers to deliver high-quality technical solutions Support customer-facing activities by adapting models for specific client needs and contributing to implementation success Contribute to proof-of-concept development and technical demonstrations that Showcase Our Capabilities To Prospective Customers Participate in code reviews, technical discussions, and knowledge sharing activities that support team learning and best practices Stay current with emerging trends and techniques in AI/ML, particularly in areas relevant to insurance and autonomous agents Qualifications & Experience Academic Qualifications Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, Engineering, or related quantitative field from a recognized institution Coursework or training in machine learning, statistics, or data analysis preferred Professional Experience 3+ years of hands-on experience in data science, machine learning, or related analytical roles with demonstrated ability to develop and deploy ML models Experience with machine learning project lifecycle from data exploration through model deployment Background in applying statistical methods and machine learning algorithms to solve business problems Exposure to production ML systems, model deployment, or MLOps practices Preferred Insurance industry exposure preferred, with understanding of claims processing, underwriting, medical risk assessment, or fraud detection workflows Technical Skills Strong proficiency in Python with experience using Scikit-Learn, Pandas, NumPy, and Jupyter for data science workflows Experience with machine learning frameworks such as TensorFlow, PyTorch, or similar libraries for model development Basic to intermediate knowledge of LLM fine-tuning, natural language processing, or computer vision techniques. Understanding of embeddings, TF-IDF, and linguistic modelling. 
Familiarity with OCR technologies, document processing, or computer vision libraries such as OpenCV Experience with GenAI frameworks including LangChain, Hugging Face Transformers, or vector databases Knowledge of data manipulation and analysis using SQL, big data tools, or distributed computing frameworks Understanding of MLOps concepts, version control systems, and basic cloud platform usage Communication & Collaboration Skills Strong analytical and problem-solving skills with ability to approach complex business problems systematically Good communication abilities for presenting technical findings to team members and stakeholders Collaborative mindset with willingness to learn from senior team members and contribute to team success Attention to detail and commitment to producing high-quality, reliable analytical work Enthusiasm for learning new technologies and staying current with developments in AI/ML and insurance technology How We Get Things Done At our core, we believe in building intelligent systems that transform how insurance works. We're a team of innovators who combine deep technical expertise with real-world business impact. Our guiding principles center around technical excellence, data-driven decision making, and the responsible development of AI systems that enhance human capabilities. We foster a culture of continuous learning, experimentation, and cross-functional collaboration. As we scale from startup to enterprise, we maintain our commitment to cutting-edge research while building solutions that can handle the most demanding insurance workflows globally. Championing Inclusion & Innovation We embrace the opportunity to build a team that reflects diverse perspectives and experiences. Being an equal opportunity employer means we consider qualified candidates based on merit, technical capabilities, and cultural fit, regardless of background, identity, or personal characteristics. We're committed to creating an inclusive environment where innovative thinking thrives and where every team member can contribute to our mission of transforming insurance through intelligent automation. Skills: ml,processing,python,sql,pandas,pytorch,insurance,numpy,ocr technologies,data,machine learning,data science,learning,genai frameworks,scikit-learn,tensorflow,computer vision,data analysis,natural language processing,mlops,jupyter

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Summary JOB DESCRIPTION The Sr Financial Analyst will exhibit genuine interest in solving work problems through proactively asking questions, clearly communicating and collaborating both internally and externally to grow the business. In addition, the Financial Analyst is responsible for understanding our competition and customers and showing initiative to learn and continuously improve company processes. The Sr Financial Analyst will be responsible for long term planning, financial analysis and business analysis. The Sr Financial Analyst will play an integral role in the success of the company and our clients. Duties & Responsibilities Listed in order of relevance: Financial Planning and Analysis: Develop financial statement forecasts, create and maintain financial dashboards, and perform financial analysis to support decision-making. Operational Excellence: Ensure internal controls are in place to protect the global asset base, meet enterprise planning and financial reporting requirements on a US GAAP basis, and lead fiduciary roles Business Forecasting: Develop and maintain automated forecasting processes, scorecard forecasting processes, and implement enhancements to deliver targeted precision Reporting and Analysis: Provide critical insights, serve as a financial advisor to finance leaders, and establish and maintain data integrity. Month-End Close Process: Manage the month-end close process, including reviewing ISC Key Metrics and summaries at various levels of operations. Inventory Management: Establish a consistent methodology for determining if costs contribute to the production of inventory and performing variance analysis vs AOP Standardized Reporting: Deploy standardized and digitized reporting, connect ISC EDW and Digital Finance for holistic reporting, and create standard FDS smart view pulls to validate data. Execute ISC Finance objectives, drive results, hold others accountable to their commitments, and maintain an operational excellence culture Investigate / resolve any issues or errors related to the finance side of the ERP system Month end financial responsibilities (JE’s, Accrual entries, etc) with USA entity Responsible for updating/monitoring standard costs each year for all entities and providing year-end Inventory reports Respond to change productively and handle other duties as required. Follow all company safety policies and procedures. Education & Experience Strong Excel, PowerPoint and Access skills, with particular focus on financial modeling and the use of advanced features of spreadsheets. Experience with financial modeling. High level of analytical ability and accuracy. Highly organized and ability to work on multiple projects at once. Ability to work independently and complete tasks with minimal supervision. Excellent interpersonal and organization skills required. A bachelor’s degree in accounting, finance or a related field plus 5 years or more of job-related experience; the equivalent combination of training and experience may be suitable; a CPA and/or MBA is preferred. Responsibilities KNOWLEDGE & SKILLS Ability to read, analyze, and interpret general business periodicals, professional journals, technical procedures, or governmental regulations. Ability to write reports, business correspondence, and procedure manuals. Ability to effectively present information and respond to questions from groups of managers, clients, customers, and the general public. 
Ability to work with mathematical concepts such as probability and statistical inference, and fundamentals of plane and solid geometry and trigonometry. Ability to define problems, collect data, establish facts, and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables. Proficiency in MS Office applications (MS Word, MS Excel, MS Access, MS PowerPoint, MS Outlook). Ability to read, speak, and write in English required. About Us Honeywell helps organizations solve the world's most complex challenges in automation, the future of aviation and energy transition. As a trusted partner, we provide actionable solutions and innovation through our Aerospace Technologies, Building Automation, Energy and Sustainability Solutions, and Industrial Automation business segments – powered by our Honeywell Forge software – that help make the world smarter, safer and more sustainable.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Mandatory Skills - Gen-AI, Data Science, Python, RAG and Cloud (AWS/Azure) Secondary - Machine Learning, Deep Learning, ChatGPT, LangChain, prompt engineering, vector stores, RAG, LLaMA, computer vision, deep learning, machine learning, OCR, Transformer, regression, forecasting, classification, hyperparameter tuning, MLOps, inference, model training, model deployment. Job Description More than 6 years of experience in the Data Engineering, Data Science and AI/ML domain Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience using business intelligence tools (e.g. Tableau, Power BI) and data frameworks (e.g. Hadoop) Experience with cloud-native technologies. Knowledge of SQL and Python; familiarity with Scala, Java or C++ is an asset Analytical mind, business acumen and strong math skills (e.g. statistics, algebra) Experience with common data science toolkits, such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, NumPy etc. Deep expertise in at least one of these is highly desirable. Experience with NLP, NLG and Large Language Models such as BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, DALL-E, etc. Great communication and presentation skills. Should have experience working in a fast-paced team culture. Experience with AI/ML and Big Data technologies such as AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, Hive, AWS EMR etc. Experience with NoSQL databases, such as MongoDB, Cassandra, HBase, and vector databases Good understanding of applied statistics skills, such as distributions, statistical testing, regression, etc. Should be a data-oriented person with an analytical mind and business acumen.
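Since the posting asks for RAG and vector-store experience, the sketch below illustrates only the retrieval step of a RAG pipeline: embed a query, rank documents by cosine similarity, and assemble a grounded prompt. The embed() function and sample documents are placeholders for whichever embedding model or API is actually used.

# Bare-bones RAG retrieval step (illustrative; embed() is a placeholder).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model or API call; returns a fixed-size vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

docs = [
    "Policy covers accidental water damage up to the insured limit.",
    "Claims must be filed within 30 days of the incident.",
    "Fire damage requires an inspection report before payout.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2):
    q = embed(query)
    scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

context = retrieve("How long do I have to submit a claim?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # this grounded prompt would then be sent to the LLM of choice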

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India Job Type Full Time About the Role We are seeking a highly skilled and visionary Senior Embedded Systems Architect to lead the design and implementation of next-generation AI-powered embedded platforms. This role demands deep technical proficiency across embedded systems, AI model deployment, hardware–software co-design, and media-centric inference pipelines. You will architect full-stack embedded AI solutions using custom AI accelerators such as Google Coral (Edge TPU), Hailo, BlackHole (Torrent), and Kendryte, delivering real-time performance in vision, audio, and multi-sensor edge deployments. The ideal candidate brings a combination of system-level thinking, hands-on prototyping, and experience in optimizing AI workloads for edge inference. This is a high-impact role where you will influence product architecture, ML tooling, hardware integration, and platform scalability for a range of IoT and intelligent device applications. Requirements Key Responsibilities System Architecture & Design Define and architect complete embedded systems for AI workloads, from sensor acquisition to real-time inference and actuation. Design multi-stage pipelines for vision/audio inference: e.g., ISP preprocessing → CNN inference → postprocessing. Evaluate and benchmark hardware platforms with AI accelerators (TPU/NPU/DSP) for latency, power, and throughput. Edge AI & Accelerator Integration Work with Coral, Hailo, Kendryte, Movidius, and Torrent accelerators using their native SDKs (EdgeTPU Compiler, HailoRT, etc.). Translate ML models (TensorFlow, PyTorch, ONNX) for inference on edge devices using cross-compilation, quantization, and toolchain optimization. Lead efforts in compiler flows such as TVM, XLA, Glow, and custom runtime engines. Media & Sensor Processing Pipelines Architect pipelines involving camera input, ISP tuning, video codecs, audio preprocessors, or sensor fusion stacks. Integrate media frameworks such as V4L2, GStreamer, and OpenCV into real-time embedded systems. Optimize for frame latency, buffering, memory reuse, and bandwidth constraints in edge deployments. Embedded Firmware & Platform Leadership Lead board bring-up, firmware development (RTOS/Linux), peripheral interface integration, and low-power system design. Work with engineers across embedded, AI/ML, and cloud to build robust, secure, and production-ready systems. Review schematics and assist with hardware–software trade-offs, especially around compute, thermal, and memory design. Required Qualifications Education: BE/B.Tech/M.Tech in Electronics, Electrical, Computer Engineering, Embedded Systems, or related fields. Experience: Minimum 5+ years of experience in embedded systems design. Minimum 3 years of hands-on experience with AI accelerators and ML model deployment at the edge. Technical Skills Required Embedded System Design Strong C/C++, embedded Linux, and RTOS-based development experience. Experience with SoCs and MCUs such as STM32, ESP32, NXP, RK3566/3588, TI Sitara, etc. Cross-architecture familiarity: ARM Cortex-A/M, RISC-V, DSP cores. ML & Accelerator Toolchains Proficiency with ML compilers and deployment toolchains: ONNX, TFLite, HailoRT, EdgeTPU compiler, TVM, XLA. Experience with quantization, model pruning, compiler graphs, and hardware-aware profiling. Media & Peripherals Integration experience with camera modules, audio codecs, IMUs, and other digital/analog sensors.
Experience with V4L2, GStreamer, OpenCV, MIPI CSI, and ISP tuning is highly desirable. System Optimization Deep understanding of compute budgeting, thermal constraints, memory management, DMA, and low-latency pipelines. Familiarity with debugging tools: JTAG, SWD, logic analyzers, oscilloscopes, perf counters, and profiling tools. Preferred (Bonus) Skills Experience with Secure Boot, TPM, Encrypted Model Execution, or Post-Quantum Cryptography (PQC). Familiarity with safety standards like IEC 61508, ISO 26262, UL 60730. Contributions to open-source ML frameworks or embedded model inference libraries. Why Join Us? At EURTH TECHTRONICS PVT LTD, you won't just be optimizing firmware — you will architect full-stack intelligent systems that push the boundary of what's possible in embedded AI. Work on production-grade, AI-powered devices for industrial, consumer, defense, and medical applications. Collaborate with a high-performance R&D team that builds edge-first, low-power, secure, and scalable systems. Drive core architecture and set the technology direction for a fast-growing, innovation-focused organization. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions.
We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
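The architect role above centers on translating trained models for edge accelerators through quantization and toolchain work. As a hedged illustration of one common flow (not this company's specific stack), the following uses the standard TensorFlow Lite converter for post-training INT8 quantization; the SavedModel path, input shape, and calibration data are placeholders.

# Post-training INT8 quantization with the TensorFlow Lite converter (illustrative sketch).
# Assumes a SavedModel at "saved_model/" and a small calibration set; both are placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a few calibration batches shaped like the model's input (224x224x3 assumed here).
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization so the model can run on integer-only NPUs/TPUs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)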

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 8 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 12 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and build solutions at enterprise scale Lead and grow a technically strong ML engineering function Collaborate on and solve high-complexity, high-impact problems Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations) Key Responsibilities: Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming) Serve as a hands-on lead-writing code, conducting reviews, and troubleshooting to extend and operate our data platforms Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback Integrate GenAI components-LLM inference endpoints, embedding stores, prompt services-into broader ML systems Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs What We’re Looking For: 8-12 years' professional software engineering experience with a strong MLOps focus Expert in Python and Apache for large-scale data processing Deep experience deploying and operating ML pipelines on AWS or GCP Hands-on proficiency with container/orchestration tooling Solid understanding of the full ML model lifecycle and CI/CD principles Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow) Strong OOP design patterns, Test-Driven Development, and enterprise system architecture Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes Excellent problem-solving, debugging, and performance-tuning abilities Ability to communicate technical change clearly to non-technical audiences Nice to have: Redis, Celery, SQS and Lambda based event driven pipelines Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale Experience with Apache Avro and Apache Familiarity with Java and/or .NET Core (C#) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group) Job ID: 317386 Posted On: 2025-06-30 Location: Gurgaon, Haryana, India
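One responsibility above is integrating LLM inference endpoints into broader ML pipelines. The sketch below shows only the generic shape of such a call with basic retry and backoff; the endpoint URL, request and response schema, and auth header are hypothetical placeholders, not S&P Global's actual service.

# Calling an internal LLM inference endpoint from a pipeline step (illustrative sketch).
# URL, payload schema, response field, and credential are all placeholders.
import time
import requests

INFERENCE_URL = "https://ml-gateway.internal.example.com/v1/generate"  # placeholder

def call_llm(prompt: str, retries: int = 3, timeout: float = 30.0) -> str:
    payload = {"prompt": prompt, "max_tokens": 256}  # request schema is an assumption
    for attempt in range(retries):
        try:
            resp = requests.post(
                INFERENCE_URL,
                json=payload,
                headers={"Authorization": "Bearer <token>"},  # placeholder credential
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()["text"]  # response field name is an assumption
        except requests.RequestException:
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff before retrying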

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India Job Type Full Time About the Role About the Role We are looking for a hands-on and technically proficient Embedded Software Team Lead to drive the development of intelligent edge systems that combine embedded firmware, machine learning inference, and hardware acceleration. This role is perfect for someone who thrives at the intersection of real-time firmware design, AI model deployment, and hardware-software co-optimization. You will lead a team delivering modular, scalable, and efficient firmware pipelines that run quantized ML models on accelerators like Hailo, Coral, Torrent (BlackHole), Kendryte, and other emerging chipsets. Your focus will include model runtime integration, low-latency sensor processing, OTA-ready firmware stacks, and CI/CD pipelines for embedded products at scale Requirements Key Responsibilities Technical Leadership & Planning Own the firmware lifecycle across multiple AI-based embedded product lines. Define system and software architecture in collaboration with hardware, ML, and cloud teams. Lead sprint planning, code reviews, performance debugging, and mentor junior engineers. ️ ML Model Deployment & Runtime Integration Collaborate with ML engineers to port, quantize, and deploy models using TFLite , ONNX , or HailoRT . Build runtime pipelines that connect model inference with real-time sensor data (vision, IMU, acoustic). Optimize memory and compute flows for edge model execution under power/bandwidth constraints. Firmware Development & Validation Build production-grade embedded stacks using RTOS (FreeRTOS/Zephyr) or embedded Linux . Implement secure bootloaders, OTA update mechanisms, and encrypted firmware interfaces. Interface with a variety of peripherals including cameras, IMUs, analog sensors, and radios (BLE/Wi-Fi/LoRa). ️ CI/CD, DevOps & Tooling for Embedded Set up and manage CI/CD pipelines for firmware builds, static analysis, and validation. Integrate Docker-based toolchains, hardware-in-loop (HIL) testing setups, and simulators/emulators. Ensure codebase quality, maintainability, and test coverage across the embedded stack. Required Qualifications ‍ Education: BE/B.Tech/M.Tech in Embedded Systems, Electronics, Computer Engineering, or related fields. Experience: Minimum 4+ years of embedded systems experience. Minimum 2 years in a technical lead or architect role. Hands-on experience in ML model runtime optimization and embedded system integration. Technical Skills Required Embedded Development & Tools Expert-level C/C++ , hands-on with RTOS and Yocto-based Linux . Proficient with toolchains like GCC/Clang, OpenOCD, JTAG/SWD, Logic Analyzers. Familiarity with OTA , bootloaders , and memory management (heap/stack analysis, linker scripts). ML Model Integration Proficiency in TFLite , ONNX Runtime , HailoRT , or EdgeTPU runtimes . Experience with model conversion, quantization (INT8, FP16), runtime optimization. Ability to read/modify model graphs and connect to inference APIs. Connectivity & Peripherals Working knowledge of BLE, Wi-Fi, LoRa, RS485 , USB, and CAN protocols. Integration of camera modules , MIPI CSI , IMUs , and custom analog sensors . ️ DevOps for Embedded Hands-on with GitLab/GitHub CI, Docker, and containerized embedded builds. Build system expertise: CMake , Make , Bazel , or Yocto preferred. Experience in automated firmware testing (HIL, unit, integration). Preferred (Bonus) Skills Familiarity with machine vision pipelines , ISP tuning , or video/audio codec integration . 
Prior work on battery-operated devices , energy-aware scheduling , or deep sleep optimization . Contributions to embedded ML open-source projects or model deployment tools. Why Join Us? At EURTH TECHTRONICS PVT LTD , we go beyond firmware—we’re designing and deploying embedded intelligence on every device, from industrial gateways to smart consumer wearables. Build and lead teams working on cutting-edge real-time firmware + ML integration . Work on full-stack embedded ML systems using the latest AI accelerators and embedded chipsets . Drive product-ready, scalable software platforms that power IoT, defense, medical , and consumer electronics . How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
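This lead role asks for runtime pipelines that connect model inference to real-time sensor data. The sketch below shows the generic shape of such a loop using the TensorFlow Lite Python interpreter; the model file, input shape, and read_frame() helper are placeholders for the real model and sensor driver, and production firmware would typically implement this in C/C++.

# Generic inference loop over sensor frames with the TFLite interpreter (illustrative sketch).
# "model_int8.tflite" and read_frame() stand in for the real model and sensor driver.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def read_frame() -> np.ndarray:
    """Stand-in for a camera/IMU capture call; returns one input-shaped frame."""
    return np.zeros(inp["shape"], dtype=inp["dtype"])

for _ in range(10):  # a real loop would be paced by the sensor's frame rate
    frame = read_frame()
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])
    # Hand the result to the rest of the firmware pipeline (actuation, telemetry, etc.)
    print(scores.argmax())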

Posted 1 month ago

Apply

2.0 years

2 - 8 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India Job Type Full Time About the Role About the Role We are seeking a passionate and skilled Embedded ML Engineer to work on cutting-edge ML inference pipelines for low-power, real-time embedded platforms. You will help design and deploy highly efficient ML models on custom hardware accelerators like Hailo, Coral (Edge TPU), Kendryte K210, and Torrent/BlackHole in real-world IoT systems. This role combines model optimization, embedded firmware development, and toolchain management. You will be responsible for translating large ML models into efficient quantized versions, benchmarking them on custom hardware, and integrating them with embedded firmware pipelines that interact with real-world sensors and peripherals. Requirements Key Responsibilities ML Model Optimization & Conversion Convert, quantize, and compile models built in TensorFlow, PyTorch , or ONNX to hardware-specific formats. Work with compilers and deployment frameworks like TFLite , HailoRT , EdgeTPU Compiler , TVM , or ONNX Runtime . Use techniques such as post-training quantization , pruning , distillation , and model slicing . ️ Embedded Integration & Inference Deployment Integrate ML runtimes in C/C++ or Python into firmware stacks built on RTOS or embedded Linux . Handle real-time sensor inputs (camera, accelerometer, microphone) and pass them through inference engines. Manage memory, DMA transfers, inference buffers, and timing loops for deterministic behavior. Benchmarking & Performance Tuning Profile and optimize models for latency, memory usage, compute load , and power draw . Work with runtime logs, inference profilers, and vendor SDKs to squeeze maximum throughput on edge hardware. Conduct accuracy vs performance trade-off studies for different model variants. Testing & Validation Design unit, integration, and hardware-in-loop (HIL) tests to validate model execution on actual devices. Collaborate with hardware and firmware teams to debug runtime crashes, inference failures, and edge cases. Build reproducible benchmarking scripts and test data pipelines. Required Qualifications ‍ Education: BE/B.Tech/M.Tech in Electronics, Embedded Systems, Computer Science, or related disciplines. Experience: 2–4 years in embedded ML, edge AI, or firmware development with ML inference integration. Technical Skills Required Embedded Firmware & Runtime Strong experience in C/C++ , basic Python scripting. Experience with RTOS (FreeRTOS, Zephyr) or embedded Linux. Understanding of memory-mapped I/O, ring buffers, circular queues, and real-time execution cycles. ML Model Toolchains Experience with TensorFlow Lite , ONNX Runtime , HailoRT , EdgeTPU , uTensor , or TinyML . Knowledge of quantization-aware training or post-training quantization techniques. Familiarity with model conversion pipelines and hardware-aware model profiling. Media & Sensor Stack Ability to work with input/output streams from cameras , IMUs , microphones , etc. Experience integrating inference with V4L2, GStreamer, or custom ISP preprocessors is a plus. Tooling & Debugging Git, Docker, cross-compilation toolchains (Yocto, CMake). Debugging with SWD/JTAG, GDB, or serial console-based logging. Profiling with memory maps, timing charts, and inference logs. Preferred (Bonus) Skills Previous work with low-power vision devices , audio keyword spotting , or sensor fusion ML . Familiarity with edge security (encrypted models, secure firmware pipelines). Hands-on with simulators/emulators for ML testing (Edge Impulse, Hailo’s HEF emulator, etc.). 
Participation in TinyML forums , open-source ML toolkits, or ML benchmarking communities. Why Join Us? At EURTH TECHTRONICS PVT LTD , we're not just building IoT firmware—we're deploying machine learning intelligence on ultra-constrained edge platforms , powering real-time decisions at the edge. Get exposure to full-stack embedded ML pipelines — from model quantization to runtime integration. Work with a world-class team focused on ML efficiency, power optimization, and embedded system scalability .️ Contribute to mission-critical products used in industrial automation, medical wearables, smart infrastructure , and more. How to Apply Send your updated resume + GitHub/portfolio links to: jobs@eurthtech.com About the Company About EURTH TECHTRONICS PVT LTD EURTH TECHTRONICS PVT LTD is a cutting-edge Electronics Product Design and Engineering firm specializing in embedded systems, IoT solutions, and high-performance hardware development. We provide end-to-end product development services—from PCB design, firmware development, and system architecture to manufacturing and scalable deployment. With deep expertise in embedded software, signal processing, AI-driven edge computing, RF communication, and ultra-low-power design, we build next-generation industrial automation, consumer electronics, and smart infrastructure solutions. Our Core Capabilities Embedded Systems & Firmware Engineering – Architecting robust, real-time embedded solutions with RTOS, Linux, and MCU/SoC-based firmware. IoT & Wireless Technologies – Developing LoRa, BLE, Wi-Fi, UWB, and 5G-based connected solutions for industrial and smart city applications. Hardware & PCB Design – High-performance PCB layout, signal integrity optimization, and design for manufacturing (DFM/DFA). Product Prototyping & Manufacturing – Accelerating concept-to-market with rapid prototyping, design validation, and scalable production. AI & Edge Computing – Implementing real-time AI/ML on embedded devices for predictive analytics, automation, and security. Security & Cryptography – Integrating post-quantum cryptography, secure boot, and encrypted firmware updates. Our Industry Impact ✅ IoT & Smart Devices – Powering the next wave of connected solutions for industrial automation, logistics, and smart infrastructure. ✅ Medical & Wearable Tech – Designing low-power biomedical devices with precision sensor fusion and embedded intelligence. ✅ Automotive & Industrial Automation – Developing AI-enhanced control systems, predictive maintenance tools, and real-time monitoring solutions. ✅ Scalable Enterprise & B2B Solutions – Delivering custom embedded hardware and software tailored to OEMs, manufacturers, and system integrators. Our Vision We are committed to advancing technology and innovation in embedded product design. With a focus on scalability, security, and efficiency, we empower businesses with intelligent, connected, and future-ready solutions. We currently cater to B2B markets, offering customized embedded development services, with a roadmap to expand into direct-to-consumer (B2C) solutions.
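Profiling models for latency is a core duty in the role above. As a rough, illustrative benchmark harness (the model file and input shape are placeholders), wall-clock latency percentiles can be measured around ONNX Runtime inference like this:

# Rough latency benchmark around ONNX Runtime inference (illustrative sketch).
# "model.onnx" and the 1x3x224x224 input shape are placeholders for the real artifact.
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm-up runs so one-time initialization does not skew the numbers
for _ in range(5):
    sess.run(None, {name: x})

latencies = []
for _ in range(100):
    t0 = time.perf_counter()
    sess.run(None, {name: x})
    latencies.append((time.perf_counter() - t0) * 1000)

print(f"p50={np.percentile(latencies, 50):.2f} ms  p95={np.percentile(latencies, 95):.2f} ms")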

Posted 1 month ago

Apply

2.0 - 6.0 years

1 - 3 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 09 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. 
Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317425 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
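The pipeline work described above leans on Celery/Redis task queues. As a minimal sketch of that pattern (broker and backend URLs, the task body, and the target endpoint are placeholders), an extraction worker task with retry and exponential backoff might look like this:

# Minimal Celery task for a data-extraction pipeline (illustrative sketch).
# Broker/backend URLs and the fetched endpoint are placeholders.
import requests
from celery import Celery

app = Celery(
    "extraction",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task(bind=True, max_retries=3)
def extract_document(self, url: str) -> dict:
    """Fetch a source document and return the fields downstream steps expect."""
    try:
        resp = requests.get(url, timeout=15)
        resp.raise_for_status()
    except requests.RequestException as exc:
        # Exponential backoff: 10s, 20s, 40s before giving up
        raise self.retry(exc=exc, countdown=10 * 2 ** self.request.retries)
    return {"url": url, "status": resp.status_code, "length": len(resp.content)}

# A producer would enqueue work with: extract_document.delay("https://example.com/filing")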

Posted 1 month ago

Apply

4.0 - 8.0 years

1 - 3 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 10 The Team : As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team, you will design, build, and optimize enterprise‐scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. You’ll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. What’s in it for you: Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform-build, test, deploy, and maintain production-ready pipelines Responsibilities: Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define-and continuously improve-platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with data scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical Requirements: 4-8 years' hands-on experience in data engineering, with proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestration: Docker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworks: Celery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practices: unit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It 
For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com . 
S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here . ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 317427 Posted On: 2025-07-01 Location: Gurgaon, Haryana, India
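Since the role above emphasizes Airflow orchestration of ETL and batch scoring, here is a bare-bones DAG skeleton showing how such stages are typically chained; the task bodies, DAG id, and schedule are placeholders rather than the team's actual pipeline.

# Skeleton Airflow DAG chaining extract -> transform -> batch score (illustrative sketch).
# Task bodies, DAG id, and schedule are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw documents from source systems")

def transform():
    print("clean, normalize, and persist features")

def score():
    print("run batch model inference and write predictions")

with DAG(
    dag_id="extraction_ml_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_score = PythonOperator(task_id="batch_score", python_callable=score)

    t_extract >> t_transform >> t_score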

Posted 1 month ago

Apply

10.0 years

1 - 6 Lacs

Thiruvananthapuram

On-site

We are seeking a visionary and highly skilled AI Architect to join our leadership team. This pivotal role will be responsible for defining and implementing the end-to-end architecture for deploying our machine learning models, including advanced Generative AI and LLM solutions, into production. You will lead and mentor a talented cross-functional team of Data Scientists, Backend Developers, and DevOps Engineers, fostering a culture of innovation, technical excellence, and operational efficiency. Key responsibilities: Architectural Leadership: Design, develop, and own the scalable, secure, and reliable end-to-end architecture for deploying and serving ML models, with a strong focus on real-time inference and high availability. Lead the strategy and implementation of the in-house API wrapper infrastructure for exposing ML models to internal and external customers. Define architectural patterns, best practices, and governance for MLOps, ensuring robust CI/CD pipelines, model versioning, monitoring, and automated retraining. Evaluate and select the optimal technology stack (cloud services, open-source frameworks, tools) for our ML serving infrastructure, balancing performance, cost, and maintainability. Team Leadership & Mentorship: Lead, mentor, and inspire a diverse team of Data Scientists, Backend Developers, and DevOps Engineers, guiding them through complex architectural decisions and technical challenges. Foster a collaborative environment that encourages knowledge sharing, continuous learning, and innovation across teams. Drive technical excellence, code quality, and adherence to engineering best practices within the teams. Generative AI & LLM Expertise: Architect and implement solutions for deploying Large Language Models (LLMs), including strategies for efficient inference, prompt engineering, and context management. Drive the adoption and integration of techniques like Retrieval Augmented Generation (RAG) to enhance LLM capabilities with proprietary and up-to-date information. Develop strategies for fine-tuning LLMs for specific downstream tasks and domain adaptation, ensuring efficient data pipelines and experimentation frameworks. Stay abreast of the latest advancements in AI , particularly in Generative AI, foundation models, and emerging MLOps tools, and evaluate their applicability to our business needs. Collaboration & Cross-Functional Impact: Collaborate closely with Data Scientists to understand model requirements, optimize models for production, and integrate them seamlessly into the serving infrastructure. Partner with Backend Developers to build robust, secure, and performant APIs that consume and serve ML predictions. Work hand-in-hand with DevOps Engineers to automate deployment, monitoring, scaling, and operational excellence of the AI infrastructure. Communicate complex technical concepts and architectural decisions effectively to both technical and non-technical stakeholders. Requirements (Qualifications/Experience/Competencies) Education: Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related quantitative field. Experience: 10+ years of progressive experience in software engineering, with at least 5+ years in an Architect or Lead role. Proven experience leading and mentoring cross-functional engineering teams (Data Scientists, Backend Developers, DevOps). Demonstrated experience in designing, building, and deploying scalable, production-grade ML model serving infrastructure from the ground up. 
Technical Skills: Deep expertise in MLOps principles and practices , including model versioning, serving, monitoring, and CI/CD for ML. Strong proficiency in Python and experience with relevant web frameworks (e.g., FastAPI, Flask ) for API development. Expertise in containerization technologies (Docker) and container orchestration (Kubernetes) for large-scale deployments. Hands-on experience with at least one major cloud platform (AWS, Google Cloud, Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, Azure ML). Demonstrable experience with Large Language Models (LLMs) , including deployment patterns, prompt engineering, and fine-tuning methodologies. Practical experience implementing and optimizing Retrieval Augmented Generation (RAG) systems. Familiarity with distributed systems, microservices architecture, and API design best practices. Experience with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack, Datadog). Knowledge of infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation). Leadership & Soft Skills: Exceptional leadership, mentorship, and team-building abilities. Strong analytical and problem-solving skills, with a track record of driving complex technical initiatives to completion. Excellent communication (verbal and written) and interpersonal skills, with the ability to articulate technical concepts to diverse audiences. Strategic thinker with the ability to align technical solutions with business objectives. Proactive, self-driven, and continuously learning mindset. Bonus Points: Experience with specific ML serving frameworks like BentoML, KServe/KFServing, TensorFlow Serving, TorchServe, or NVIDIA Triton Inference Server. Contributions to open-source MLOps or AI projects. Experience with data governance, data security, and compliance in an AI context.
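The architect role above owns the in-house API wrapper that exposes models to internal and external consumers. As a minimal sketch of that pattern with FastAPI (the endpoint path, request schema, and predict() stub are placeholders for the real model interface):

# Minimal FastAPI wrapper exposing a model as a REST endpoint (illustrative sketch).
# The request schema and predict() stub stand in for the real model interface.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def predict(text: str) -> tuple[str, float]:
    """Stand-in for real model inference (e.g., a loaded sklearn or torch model)."""
    return ("positive", 0.87) if "good" in text.lower() else ("negative", 0.55)

@app.post("/predict", response_model=PredictResponse)
def predict_endpoint(req: PredictRequest) -> PredictResponse:
    label, score = predict(req.text)
    return PredictResponse(label=label, score=score)

# Run locally with: uvicorn serving_sketch:app --reload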

Posted 1 month ago

Apply

8.0 - 12.0 years

6 - 9 Lacs

Gurgaon

On-site

About the Role: Grade Level (for internal use): 12

The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will spearhead the design and delivery of robust, scalable ML infrastructure and pipelines that power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI/ML engineering best practices, mentor fellow engineers and data scientists, and drive production-ready AI products from ideation through deployment. You’ll thrive in a (truly) global team that values thoughtful risk-taking and self-initiative.

What’s in it for you:
- Be part of a global company and build solutions at enterprise scale
- Lead and grow a technically strong ML engineering function
- Collaborate on and solve high-complexity, high-impact problems
- Shape the engineering roadmap for emerging AI/ML capabilities (including GenAI integrations)

Key Responsibilities:
- Architect, develop, and maintain production-ready data acquisition, transformation, and ML pipelines (batch & streaming)
- Serve as a hands-on lead: writing code, conducting reviews, and troubleshooting to extend and operate our data platforms
- Apply best practices in data modeling, ETL design, and pipeline orchestration using cloud-native solutions (a minimal orchestration sketch follows this listing)
- Establish CI/CD and MLOps workflows for model training, validation, deployment, monitoring, and rollback
- Integrate GenAI components (LLM inference endpoints, embedding stores, prompt services) into broader ML systems
- Mentor and guide engineers and data scientists; foster a culture of craftsmanship and continuous improvement
- Collaborate with cross-functional stakeholders (Data Science, Product, IT) to align on requirements, timelines, and SLAs

What We’re Looking For:
- 8-12 years' professional software engineering experience with a strong MLOps focus
- Expert in Python and Apache for large-scale data processing
- Deep experience deploying and operating ML pipelines on AWS or GCP
- Hands-on proficiency with container/orchestration tooling
- Solid understanding of the full ML model lifecycle and CI/CD principles
- Skilled in streaming and batch ETL design (e.g., Airflow, Dataflow)
- Strong OOP design patterns, Test-Driven Development, and enterprise system architecture
- Advanced SQL skills (big-data variants a plus) and comfort with Linux/bash toolsets
- Familiarity with version control (Git, GitHub, or Azure DevOps) and code review processes
- Excellent problem-solving, debugging, and performance-tuning abilities
- Ability to communicate technical change clearly to non-technical audiences

Nice to have:
- Redis, Celery, SQS, and Lambda-based event-driven pipelines
- Prior work integrating LLM services (OpenAI, Anthropic, etc.) at scale
- Experience with Apache Avro and Apache
- Familiarity with Java and/or .NET Core (C#)

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all, from finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
-----------------------------------------------------------

Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------

IFTECH103.2 - Middle Management Tier II (EEO Job Group)
Job ID: 317386
Posted On: 2025-06-30
Location: Gurgaon, Haryana, India
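For illustration only, and not part of the posting itself: a minimal sketch of the kind of batch ETL orchestration the role above mentions, assuming Apache Airflow 2.4 or later is installed. The DAG name and task bodies are hypothetical placeholders rather than an actual S&P Global pipeline.

```python
# Minimal sketch of a daily batch ETL DAG (assumes Apache Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw documents from a source system.
    print("extracting raw data")

def transform():
    # Placeholder: clean, deduplicate, and enrich records.
    print("transforming data")

def load():
    # Placeholder: write curated data to the warehouse or feature store.
    print("loading curated data")

with DAG(
    dag_id="daily_batch_etl",        # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Declare ordering: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```

A production pipeline of the kind described above would add retries, alerting, and data-quality checks, and would be wired into CI/CD and MLOps workflows for deployment, monitoring, and rollback.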

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies