8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Solution Architect
Location: Chennai / Bangalore / Kuala Lumpur
Experience: 8+ years
Employment Type: Full-time

Job Overview
Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX. With a focus on automating and enhancing media transactions, you’ll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising.

Why Join Us?
● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more.
● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions.
● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful.

What You’ll Do:
● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs.
● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs.
● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets.
● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping.
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance.

What You Bring:
● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics.
● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results.
● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments.
● Excellent problem-solving skills, creativity, and a customer-centric mindset.

Key Responsibilities
1. Solution Design:
○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack.
○ Translate business requirements into scalable and reliable technical solutions.
2. Agile POD-Based Execution:
○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions.
○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution.
○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals.
3. Collaboration and Stakeholder Management:
○ Work closely with product, engineering, and business teams to define technical requirements.
○ Lead technical discussions with internal and external stakeholders.
4. Technical Expertise:
○ Provide architectural guidance and best practices for system integrations, APIs, and microservices.
○ Ensure solutions meet non-functional requirements like scalability, reliability, and security.
5. Documentation:
○ Prepare and maintain architectural documentation, including solution blueprints and workflows.
○ Create technical roadmaps and detailed design documentation.
6. Mentorship:
○ Guide and mentor engineering teams during development and deployment phases.
○ Review code and provide technical insights to improve quality and performance.
7. Innovation and Optimization:
○ Identify areas for technical improvement and drive innovation in solutions.
○ Evaluate emerging technologies to recommend the best tools and frameworks.

Required Skills and Qualifications
● Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
● Proven experience as a Solution Architect or a similar role.
● Expertise in programming languages and frameworks: Java, Angular, Python, C++.
● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras.
● Experience in deploying AI models in production, including optimizing for performance and scalability.
● Understanding of deep learning, NLP, computer vision, or generative AI techniques.
● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization.
● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.).
● Expertise in distributed systems, microservices, and cloud-native architectures.
● Experience in API design, data pipelines, and integration of AI services within existing systems.
● Strong knowledge of databases: MongoDB, SQL, NoSQL.
● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines.
● Hands-on experience with CI/CD pipelines for AI development.
● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC.
● Proven track record of leading AI-driven projects from ideation to deployment.
● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions.
● Familiarity with Agile methodologies, especially POD-based execution models.
● Strong problem-solving skills and ability to design scalable solutions.
● Excellent communication skills to articulate technical solutions to stakeholders.

Preferred Qualifications
● Experience in e-commerce, AdTech, or OOH (Out-of-Home) advertising technology.
● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban.
● Certification in cloud technologies (e.g., AWS Solutions Architect).

Tech Stack
● Programming Languages: Java, Python, or C++
● Frontend Framework: Angular
● Database Technologies: MongoDB, SQL, NoSQL
● Cloud Platform: AWS
● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark).
● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI).
● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes.

Share your profile to kushpu@movingwalls.com
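The "programmatic deal automation" and "bid management" features named above can be illustrated with a minimal second-price auction, a mechanism commonly used in programmatic advertising. This is a sketch only: the bidder names, bid values, and floor price are invented for the example, not Moving Walls' actual implementation.

```python
# Minimal second-price auction sketch for programmatic ad inventory.
# All bidder names and prices below are illustrative.

def run_auction(bids, floor_price):
    """Return (winner, clearing_price), or (None, None) if no bid clears the floor.

    In a second-price auction the highest bidder wins but pays the greater
    of the second-highest bid and the floor price.
    """
    eligible = {b: amt for b, amt in bids.items() if amt >= floor_price}
    if not eligible:
        return None, None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    second = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, max(second, floor_price)

winner, price = run_auction({"dsp_a": 5.0, "dsp_b": 3.5, "dsp_c": 1.0}, floor_price=2.0)
print(winner, price)  # dsp_a 3.5
```

The floor price keeps a lone high bidder from clearing inventory below the rate card, which is where a dynamic rate card would plug in.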
Posted 13 hours ago
2.0 - 6.0 years
5 - 11 Lacs
India
On-site
We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments.

Key Responsibilities

AI Model Development and Optimization:
Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives. Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures.

Natural Language Processing (NLP):
Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks.

AI Model Deployment and Frameworks:
Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions.

Production Environment Management:
Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift.

Data Engineering and Pipelines:
Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows.

Performance Monitoring and Optimization:
Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates).

Solution Design and Architecture:
Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases.

Stakeholder Engagement:
Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs.

Experience: 2 - 6 years of experience
Salary: 5-11 LPA
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,100,000.00 per year
Schedule: Day shift
Work Location: In person
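The model-drift monitoring described above can be sketched with the Population Stability Index (PSI), a common drift metric that compares a feature's (or score's) distribution between a baseline window and a live window. The bucket counts and the 0.1/0.25 thresholds below are the conventional rule of thumb, used here for illustration; a production setup would export such a value to Prometheus rather than print it.

```python
import math

# Population Stability Index (PSI) sketch for model-drift monitoring.
# Rule of thumb (illustrative): < 0.1 stable, 0.1-0.25 moderate drift,
# > 0.25 significant drift.

def psi(baseline_counts, live_counts, eps=1e-6):
    """PSI over pre-bucketed histograms of the same feature or score."""
    b_total = sum(baseline_counts)
    l_total = sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        b_pct = max(b / b_total, eps)  # eps guards against empty buckets
        l_pct = max(l / l_total, eps)
        score += (l_pct - b_pct) * math.log(l_pct / b_pct)
    return score

# Same shape of distribution -> no drift; reversed distribution -> drift.
print(psi([100, 200, 300], [10, 20, 30]))    # ~0.0
print(psi([100, 200, 300], [300, 200, 100]))  # well above the 0.25 alert line
```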
Posted 13 hours ago
3.0 years
0 - 0 Lacs
Calicut
On-site
Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
3+ years of experience in AI/ML development and deployment.
Proficient in Python and familiar with libraries like TensorFlow, PyTorch, Scikit-learn, Pandas, and NumPy.
Strong understanding of machine learning algorithms (supervised, unsupervised, deep learning).
Experience with cloud platforms (AWS, Azure, GCP) and MLOps tools (MLflow, Airflow, Docker, Kubernetes).
Solid understanding of data structures, algorithms, and software engineering principles.
Experience with RESTful APIs and integrating AI models into production environments.

Job Type: Full-time
Pay: ₹35,000.00 - ₹60,000.00 per month
Benefits: Internet reimbursement, Paid sick time, Provident Fund
Schedule: Fixed shift
Work Location: In person
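The "supervised, unsupervised, deep learning" requirement above can be grounded with the simplest supervised baseline: a nearest-centroid classifier, written here as a pure-Python stand-in for scikit-learn's `NearestCentroid`. The toy feature vectors and labels are invented for the example.

```python
# Minimal supervised-learning sketch: nearest-centroid classification.
# Toy data; a real pipeline would use scikit-learn on real features.

def fit_centroids(X, y):
    """Average the feature vectors of each class ("training")."""
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, x):
    """Return the class whose centroid is closest in squared distance."""
    def sqdist(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: sqdist(centroids[label]))

X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["low", "low", "high", "high"]
centroids = fit_centroids(X, y)
print(predict(centroids, [0.95, 1.05]))  # high
```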
Posted 13 hours ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderabad
On-site
AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid–Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities
Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications
Bachelor’s or Master’s in Computer Science, AI, Data Science, or a related discipline
3–7 years of experience in machine learning or applied AI
Hands-on experience deploying ML models to production environments
Familiarity with LLM prompt engineering and fine-tuning
Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications
Contributions to open-source AI projects or academic publications
Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
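The RAG pipelines listed above follow one pattern: retrieve the most relevant documents for a query, then assemble a grounded prompt for the LLM. A real pipeline would use learned embeddings and a vector database (LangChain with Pinecone or Weaviate, as in the posting); this pure-Python stand-in uses bag-of-words cosine similarity, and the documents are invented for illustration.

```python
import math
from collections import Counter

# Minimal RAG sketch: retrieval + prompt assembly.
# Bag-of-words similarity stands in for learned embeddings.

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "MLflow tracks experiments and registers models",
    "Pinecone stores vectors for similarity search",
    "Airflow schedules batch data pipelines",
]
context = retrieve("which tool registers models", docs)[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: which tool registers models?"
print(context)
```

The assembled `prompt` is what would be sent to the LLM; grounding the answer in retrieved context is the whole point of the pattern.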
Posted 13 hours ago
5.0 years
0 Lacs
India
On-site
This posting is for one of our International Clients.

About the Role
We’re creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google’s Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We’re looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You’ll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you’ll partner with learning experience designers and content developers to:
Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind’s reinforcement learning libraries.
Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
Ensure all content reflects current, accurate usage of Google’s multimodal tools and services.
Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
Cloud Functions, Cloud Run (for inference endpoints)
BigQuery and Cloud Storage (for handling large image-text datasets)
AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
JAX and Haiku (for neural network modeling and research-grade experimentation)
DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
TensorFlow 2.x and PyTorch (for model interoperability)
Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
DVC or MLflow (for dataset and model versioning)
Apache Beam or Dataflow (for processing multimodal input streams)
TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
GitHub or Cloud Source Repositories
Google Docs, Sheets, Slides
Screen recording tools like Loom or OBS Studio

Required Skills and Experience:
Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini’s native multimodal capabilities.
Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
Bachelor’s or Master’s degree in Computer Science, Data Engineering, AI, or a related technical field.
Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
Strong programming experience in Python and experience deploying machine learning pipelines.
Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
Prior contributions to open-source AI projects or technical community engagement.
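The "agent state, memory, and persistence for multi-turn interactions" requirement above boils down to deciding what conversation history to carry into the next model call. The simplest form is a sliding buffer that keeps recent turns within a token budget; the budget and the word-count token proxy below are illustrative simplifications of what a Gemini-backed agent would do.

```python
# Minimal multi-turn conversation memory sketch: a sliding buffer
# trimmed to a token budget. Word counts stand in for real tokenization.

class ConversationMemory:
    def __init__(self, max_tokens=50):
        self.max_tokens = max_tokens
        self.turns = []  # list of (role, text)

    def add(self, role, text):
        self.turns.append((role, text))
        self._trim()

    def _trim(self):
        # Drop the oldest turns until the buffer fits the budget,
        # always keeping at least the most recent turn.
        while self._size() > self.max_tokens and len(self.turns) > 1:
            self.turns.pop(0)

    def _size(self):
        return sum(len(text.split()) for _, text in self.turns)

    def as_prompt(self):
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

memory = ConversationMemory(max_tokens=8)
memory.add("user", "book a flight to Chennai")       # 5 "tokens"
memory.add("assistant", "which date do you prefer")  # exceeds budget, oldest drops
print(memory.as_prompt())
```

Production agents layer summarization or retrieval on top of this so dropped turns are not simply lost; persistence (e.g., to Firestore, as the posting suggests) serializes `turns` between requests.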
Posted 13 hours ago
5.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company: Indian / Global Engineering & Manufacturing Organization

Key Skills: Machine Learning (ML), Artificial Intelligence (AI), TensorFlow, PyTorch, Python.

Roles and Responsibilities:
Design, build, and rigorously optimize the complete stack needed for large-scale model training, fine-tuning, and inference (including data loading, distributed training, and model deployment) to maximize Model FLOP Utilization (MFU) on compute clusters.
Collaborate closely with research scientists to translate state-of-the-art models and algorithms into production-grade, high-performance code and scalable infrastructure.
Implement, integrate, and test advancements from recent research publications and open-source contributions into enterprise-grade systems.
Profile training workflows to identify and resolve bottlenecks across all layers of the training stack, from input pipelines to inference, enhancing speed and resource efficiency.
Contribute to evaluations and selections of hardware, software, and cloud platforms defining the future of the AI infrastructure stack.
Use MLOps tools (e.g., MLflow, Weights & Biases) to establish best practices across the entire AI model lifecycle, including development, validation, deployment, and monitoring.
Maintain extensive documentation of infrastructure architecture, pipelines, and training processes to ensure reproducibility and smooth knowledge transfer.
Continuously research and implement improvements in large-scale training strategies and data engineering workflows to keep the organization at the cutting edge.
Demonstrate initiative and ownership in developing rapid prototypes and production-scale systems for AI applications in the energy sector.

Experience Requirement:
5-9 years of experience building and optimizing large-scale machine learning infrastructure, including distributed training and data pipelines.
Proven hands-on expertise with deep learning frameworks such as PyTorch, JAX, or PyTorch Lightning in multi-node GPU environments.
Experience in scaling models trained on large datasets across distributed computing systems.
Familiarity with writing and optimizing CUDA, Triton, or CUTLASS kernels for performance enhancement is preferred.
Hands-on experience with AI/ML lifecycle management using MLOps frameworks and performance profiling tools.
Demonstrated collaboration with AI researchers and data scientists to integrate models into production environments.
A track record of open-source contributions in AI infrastructure or data engineering is a significant plus.

Education: M.E., B.E., B.Tech, M.Tech, B.Tech + M.Tech (Dual), BCA, MCA.
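The Model FLOP Utilization (MFU) target named above is the fraction of the cluster's peak FLOP/s that a training run actually achieves. The sketch below uses the common 6·N·T approximation for transformer training FLOPs (N parameters, T tokens, forward plus backward pass); the model size, step time, and per-GPU peak are illustrative numbers, not any particular hardware's spec.

```python
# MFU sketch: achieved training FLOP/s divided by cluster peak FLOP/s.
# 6 * params * tokens approximates fwd+bwd FLOPs for a transformer step.

def mfu(n_params, tokens_per_step, step_time_s, n_gpus, peak_flops_per_gpu):
    train_flops_per_step = 6 * n_params * tokens_per_step
    achieved = train_flops_per_step / step_time_s
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak

# Illustrative: 7e9-param model, 1e6 tokens/step, 30 s/step,
# 8 GPUs at an assumed 3e14 FLOP/s peak each.
print(round(mfu(7e9, 1e6, 30.0, 8, 3e14), 3))  # 0.583
```

Profiling work in this role is largely about closing the gap between that ratio and 1.0, e.g. by removing data-loading stalls or communication overhead.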
Posted 14 hours ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of:
1. Payer Propensity Modeling: predicting payer behavior and reimbursement likelihood
2. Claim Denials Prediction: identifying high-risk claims before submission
3. Payment Amount Prediction: forecasting expected reimbursement amounts
4. Cash Flow Forecasting: predicting revenue timing and patterns
5. Patient-Related Models: enhancing patient financial experience and outcomes
6. Claim Processing Time Prediction: optimizing workflow and resource allocation

Additionally, we will work on emerging areas and integration opportunities, for example denial prediction + appeal success probability, or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care.

VHT Technical Environment
1. Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
2. Development Tools: Jupyter Notebooks, Git, Docker
3. Programming: Python, SQL, R (optional)
4. ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
5. Data Processing: Spark, Pandas, NumPy
6. Visualization: Matplotlib, Seaborn, Plotly, Tableau
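The claim-denials-prediction idea above reduces to scoring each claim before submission and flagging high-risk ones for review. The sketch below uses a logistic model for that scoring; the feature names, weights, bias, and threshold are all invented for illustration, not a fitted model.

```python
import math

# Claim-denial risk scoring sketch. In production these weights would
# come from a model trained (e.g., with scikit-learn on SageMaker) on
# historical claim outcomes; here they are hypothetical.

WEIGHTS = {"missing_auth": 2.0, "coding_mismatch": 1.5, "payer_strictness": 0.8}
BIAS = -3.0

def denial_probability(claim):
    """Logistic (sigmoid) transform of a weighted feature sum."""
    z = BIAS + sum(WEIGHTS[k] * claim.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def flag_for_review(claim, threshold=0.5):
    return denial_probability(claim) >= threshold

risky = {"missing_auth": 1.0, "coding_mismatch": 1.0, "payer_strictness": 1.0}
clean = {"missing_auth": 0.0, "coding_mismatch": 0.0, "payer_strictness": 0.2}
print(flag_for_review(risky), flag_for_review(clean))  # True False
```

The same shape (features in, calibrated probability out, threshold to act) carries over to the payer-propensity and prior-authorization models listed above.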
Posted 16 hours ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled Senior Technical Architect with expertise in Databricks, Apache Spark, and modern data engineering architectures. The ideal candidate will have a strong grasp of Generative AI and RAG pipelines and a keen interest (or working knowledge) in Agentic AI systems. This individual will lead the architecture, design, and implementation of scalable data platforms and AI-powered applications for our global clients. This high-impact role requires technical leadership, cross-functional collaboration, and a passion for solving complex business challenges with data and AI.

Responsibilities
Lead architecture, design, and deployment of scalable data solutions using Databricks and the medallion architecture.
Guide technical teams in building batch and streaming data pipelines using Spark, Delta Lake, and MLflow.
Collaborate with clients and internal stakeholders to understand business needs and translate them into robust data and AI architectures.
Design and prototype Generative AI applications using LLMs, RAG pipelines, and vector stores.
Provide thought leadership on the adoption of Agentic AI systems in enterprise environments.
Mentor data engineers and solution architects across multiple projects.
Ensure adherence to security, governance, performance, and reliability best practices.
Stay current with emerging trends in data engineering, MLOps, GenAI, and agent-based systems.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
10+ years of experience in data architecture, data engineering, or software architecture roles.
5+ years of hands-on experience with Databricks, including Spark SQL, Delta Lake, Unity Catalog, and MLflow.
Proven experience in designing and delivering production-grade data platforms and pipelines.
Exposure to LLM frameworks (OpenAI, Hugging Face, LangChain, etc.) and vector databases (FAISS, Weaviate, etc.).
Strong understanding of cloud platforms (Azure, AWS, or GCP), particularly in the context of Databricks deployment.
Knowledge of, or interest in, Agentic AI frameworks and multi-agent system design is highly desirable.

Technical Skills
Databricks (incl. Spark, Delta Lake, MLflow, Unity Catalog)
Python, SQL, PySpark
GenAI tools and libraries (LangChain, OpenAI, etc.)
CI/CD and DevOps for data
REST APIs, JSON, data serialization formats
Cloud services (Azure/AWS/GCP)

Soft Skills
Strong communication and stakeholder management skills
Ability to lead and mentor diverse technical teams
Strategic thinking with a bias for action
Comfortable with ambiguity and iterative development
Client-first mindset and consultative approach
Excellent problem-solving and analytical skills

Preferred Certifications
Databricks Certified Data Engineer / Architect
Cloud certifications (Azure/AWS/GCP)
Any certifications in AI/ML, NLP, or GenAI frameworks are a plus
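The medallion architecture named in the responsibilities layers data as bronze (raw), silver (cleaned), and gold (business-level aggregates). On Databricks these would be Delta tables transformed with Spark; the pure-Python stand-in below, with invented order records, just shows the layering.

```python
# Medallion architecture sketch: bronze -> silver -> gold.
# Invented records; real pipelines use Spark over Delta tables.

bronze = [  # raw ingested events, duplicates and bad rows included
    {"order_id": 1, "amount": "100.0"},
    {"order_id": 1, "amount": "100.0"},   # duplicate
    {"order_id": 2, "amount": None},      # bad row
    {"order_id": 3, "amount": "250.5"},
]

def to_silver(rows):
    """Deduplicate on order_id, cast amounts, drop invalid rows."""
    seen, out = set(), []
    for row in rows:
        if row["order_id"] in seen or row["amount"] is None:
            continue
        seen.add(row["order_id"])
        out.append({"order_id": row["order_id"], "amount": float(row["amount"])})
    return out

def to_gold(rows):
    """Business-level aggregate: order count, total, and average value."""
    total = sum(r["amount"] for r in rows)
    return {"orders": len(rows), "total": total, "avg": total / len(rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # {'orders': 2, 'total': 350.5, 'avg': 175.25}
```

Keeping each layer materialized is what lets downstream consumers (BI, ML features, RAG indexing) read from the quality tier they need.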
Posted 17 hours ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Technical Expertise (minimum 2 years of relevant experience):
● Solid understanding of Generative AI models and Natural Language Processing (NLP) techniques, including Retrieval-Augmented Generation (RAG) systems, text generation, and embedding models.
● Exposure to Agentic AI concepts, multi-agent systems, and agent development using open-source frameworks like LangGraph and LangChain.
● Hands-on experience with modality-specific encoder models (text, image, audio) for multi-modal AI applications.
● Proficient in model fine-tuning and prompt engineering using both open-source and proprietary LLMs.
● Experience with model quantization, optimization, and conversion techniques (FP32 to INT8, ONNX, TorchScript) for efficient deployment, including on edge devices.
● Deep understanding of inference pipelines, batch processing, and real-time AI deployment on both CPU and GPU.
● Strong MLOps knowledge with experience in version control, reproducible pipelines, continuous training, and model monitoring using tools like MLflow, DVC, and Kubeflow.
● Practical experience with scikit-learn, TensorFlow, and PyTorch for experimentation and production-ready AI solutions.
● Familiarity with data preprocessing, standardization, and knowledge graphs (nice to have).
● Strong analytical mindset with a passion for building robust, scalable AI solutions.
● Skilled in Python, writing clean, modular, and efficient code.
● Proficient in RESTful API development using Flask, FastAPI, etc., with integrated AI/ML inference logic.
● Experience with MySQL, MongoDB, and vector databases like FAISS, Pinecone, or Weaviate for semantic search.
● Exposure to Neo4j and graph databases for relationship-driven insights.
● Hands-on with Docker and containerization to build scalable, reproducible, and portable AI services.
● Up-to-date with the latest in GenAI, LLMs, Agentic AI, and deployment strategies.
● Strong communication and collaboration skills, able to contribute in cross-functional and fast-paced environments.

Bonus Skills
● Experience with cloud deployments on AWS, GCP, or Azure, including model deployment and model inferencing.
● Working knowledge of Computer Vision and real-time analytics using OpenCV, YOLO, and similar frameworks.
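The quantization requirement above (FP32 to INT8) comes down to mapping floats onto an integer grid via a scale and zero point. A stdlib-only toy sketch of affine quantization; real pipelines would use ONNX Runtime, TorchScript, or framework-native quantizers, and the function names here are illustrative:

```python
# Toy affine (asymmetric) quantization: FP32 -> INT8 and back.
# Illustrative only; not any specific toolkit's API.

def quantize_int8(weights):
    """Map a list of floats onto the signed 8-bit range [-128, 127]."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1.0          # guard against constant tensors
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Recover approximate floats from the INT8 representation."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
# Reconstruction error is bounded by roughly half the scale step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The 4x size reduction (32-bit floats to 8-bit integers) is what makes edge deployment practical; the cost is the small reconstruction error measured above.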
Posted 17 hours ago
10.0 years
0 Lacs
India
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries, including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Summary
We are looking for a highly experienced Senior Technical Architect – Gen AI/ML who will play a key role in shaping and driving the architecture and technology strategy for Generative AI and Machine Learning based applications. This is a hands-on leadership role requiring deep technical expertise in AI/ML, strong knowledge of cloud platforms (preferably Microsoft Azure), and the ability to design scalable, secure, and high-performance systems. The ideal candidate is a solution visionary and technical leader who can partner with cross-functional teams to define, design, and deliver cutting-edge AI-driven solutions.

Key Responsibilities
Solution Architecture & Design
Architect end-to-end AI/ML and Gen AI solutions tailored to business needs across multiple domains. Define system architecture, technology stack, and integration strategy for AI-enabled products and platforms. Establish architecture best practices, reusable components, and reference implementations.

Hands-on Implementation
Lead from the front with prototyping, POCs, and reference models using tools like Azure OpenAI, Hugging Face, LangChain, MLflow, PyTorch, TensorFlow, etc. Drive integration of AI/LLM solutions (RAG, multi-agent systems, embeddings, etc.) into enterprise applications. Evaluate and optimize model performance using advanced metrics and testing.
Ensure performance tuning, security, scalability, and reliability of AI solutions.

Cloud & MLOps
Leverage Azure cloud services such as Azure Machine Learning, Azure OpenAI, Cognitive Services, Azure Kubernetes Service, etc. Design and implement CI/CD pipelines, model versioning, deployment, monitoring, and governance frameworks using MLOps best practices.

Technical Skills
Strong expertise in Generative AI, LLMs, and ML/DL models.
GenAI & LLMs: RAG, LangChain, LangGraph, LlamaIndex, Semantic Kernel, CrewAI, Autogen, TaskWeaver
RAG evaluation experience with frameworks such as RAGAS and DeepEval
Hands-on with the Azure AI stack (Azure OpenAI, Azure ML, Cognitive Services, Synapse, AKS), with familiarity with AWS/GCP as a plus
Proficient in Python, C#, microservices, PyTorch, TensorFlow, LangChain, and vector DBs (Pinecone, LanceDB, FAISS, etc.)
Solid understanding of model fine-tuning, prompt engineering, RLHF, and embedding techniques.
Deep experience in designing and deploying ML/AI pipelines using Azure, MLOps, Docker, Kubernetes, and API gateways.
Good understanding of LLM-powered agents, toolchains (e.g., LangChain Agents, Semantic Kernel), and how to embed reasoning and memory into AI-driven solutions.
Experience designing and implementing MCP-based orchestration layers to modularize and scale interactions with LLMs and Gen AI components.

Experience
10+ years in software architecture and enterprise application development.
3+ years of experience leading Gen AI / ML solutions in production environments.
Experience with enterprise-grade solutioning, technical due diligence, and stakeholder management.
Proven record of working on Brownfield projects.
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains: what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
Posted 18 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
neoBIM is a well-funded start-up software company revolutionizing the way architects design buildings with our innovative BIM (Building Information Modelling) software. As we continue to grow, we are building a small and talented team of developers to drive our software forward.

Tasks
We are looking for a highly skilled Generative AI Developer to join our AI team. The ideal candidate should have strong expertise in deep learning, large language models (LLMs), multimodal AI, and generative models (GANs, VAEs, Diffusion Models, or similar techniques). This role offers the opportunity to work on cutting-edge AI solutions, from training models to deploying AI-driven applications that redefine automation and intelligence.
Develop, fine-tune, and optimize Generative AI models, including LLMs, GANs, VAEs, Diffusion Models, and Transformer-based architectures.
Work with large-scale datasets and design self-supervised or semi-supervised learning pipelines.
Implement multimodal AI systems that combine text, images, audio, and structured data.
Optimize AI model inference for real-time applications and large-scale deployment.
Build AI-driven applications for BIM (Building Information Modeling), content generation, and automation.
Collaborate with data scientists, software engineers, and domain experts to integrate AI into production.
Stay ahead of AI research trends and incorporate state-of-the-art methodologies.
Deploy models using cloud-based ML pipelines (AWS/GCP/Azure) and edge computing solutions.

Requirements
Must-Have Skills
Strong programming skills in Python (PyTorch, TensorFlow, JAX, or equivalent).
Experience in training and fine-tuning Large Language Models (LLMs) like GPT, BERT, LLaMA, or Mixtral.
Expertise in Generative AI techniques, including Diffusion Models (e.g., Stable Diffusion, DALL-E, Imagen), GANs, and VAEs.
Hands-on experience with transformer-based architectures (e.g., Vision Transformers, BERT, T5, GPT, etc.).
Experience with MLOps frameworks for scaling AI applications (Docker, Kubernetes, MLflow, etc.).
Proficiency in data preprocessing, feature engineering, and AI pipeline development.
Strong background in mathematics, statistics, and optimization related to deep learning.

Good-to-Have Skills
Experience in NeRFs (Neural Radiance Fields) for 3D generative AI.
Knowledge of AI for Architecture, Engineering, and Construction (AEC).
Understanding of distributed computing (Ray, Spark, or Tensor Processing Units).
Familiarity with AI model compression and inference optimization (ONNX, TensorRT, quantization techniques).
Experience in cloud-based AI development (AWS/GCP/Azure).

Benefits
Work on high-impact AI projects at the cutting edge of Generative AI.
Competitive salary with growth opportunities.
Access to high-end computing resources for AI training & development.
A collaborative, research-driven culture focused on innovation & real-world impact.
Flexible work environment with remote options.
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Title: Senior Machine Learning Engineer
Location: On Site / [Gurgaon, India]
Experience: 5+ Years
Type: Full-time / Contract

About the Role
We are looking for an experienced Machine Learning Engineer with a strong background in building, deploying, and scaling ML models in production environments. You will work closely with Data Scientists, Engineers, and Product teams to translate business challenges into data-driven solutions and build robust, scalable ML pipelines. This is a hands-on role requiring a blend of applied machine learning, data engineering, and software development skills.

Key Responsibilities
Design, build, and deploy machine learning models to solve real-world business problems
Work on the end-to-end ML lifecycle: data preprocessing, feature engineering, model selection, training, evaluation, deployment, and monitoring
Collaborate with cross-functional teams to identify opportunities for machine learning across products and workflows
Develop and optimize scalable data pipelines to support model development and inference
Implement model retraining, versioning, and performance tracking in production
Ensure models are interpretable, explainable, and aligned with fairness, ethics, and compliance standards
Continuously evaluate new ML techniques and tools to improve accuracy and efficiency
Document processes, experiments, and findings for reproducibility and team knowledge-sharing

Requirements
5+ years of hands-on experience in machine learning, applied data science, or related roles
Strong foundation in ML algorithms (regression, classification, clustering, NLP, time series, etc.)
Experience with production-level ML deployment using tools like MLflow, Kubeflow, Airflow, FastAPI, or similar
Proficiency in Python and libraries like scikit-learn, TensorFlow, PyTorch, XGBoost, pandas, NumPy
Experience with cloud platforms (AWS, GCP, or Azure) and containerized environments (Docker, Kubernetes)
Strong understanding of software engineering principles and experience with Git, CI/CD, and version control
Experience with large datasets, distributed systems (Spark/Databricks), and SQL/NoSQL databases
Excellent problem-solving, communication, and collaboration skills

Nice to Have
Experience with LLMs, Generative AI, or transformer-based models
Familiarity with MLOps best practices and infrastructure as code (e.g., Terraform)
Experience working in regulated industries (e.g., finance, healthcare)
Contributions to open-source projects or ML research papers

Why Join Us
Work on impactful problems with cutting-edge ML technologies
Collaborate with a diverse, expert team across engineering, data, and product
Flexible working hours and remote-first culture
Opportunities for continuous learning, mentorship, and growth
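The retraining/versioning/tracking responsibility above is essentially registry bookkeeping. A minimal in-memory sketch of what a tool like MLflow's Model Registry formalizes; the class and method names here are invented for illustration, not MLflow's actual API:

```python
# Minimal in-memory model registry sketch: version numbering plus
# stage promotion. Illustrative only; production systems persist this
# state and attach access control, lineage, and audit trails.

class ModelRegistry:
    def __init__(self):
        self._versions = {}   # model name -> list of entries, index = version - 1

    def register(self, name, artifact_uri, metrics):
        """Record a newly trained model artifact; it starts in staging."""
        versions = self._versions.setdefault(name, [])
        entry = {"version": len(versions) + 1, "uri": artifact_uri,
                 "metrics": metrics, "stage": "staging"}
        versions.append(entry)
        return entry["version"]

    def promote(self, name, version):
        """Move one version to production; archive any previous production."""
        for entry in self._versions[name]:
            if entry["stage"] == "production":
                entry["stage"] = "archived"
        self._versions[name][version - 1]["stage"] = "production"

    def production_model(self, name):
        return next(e for e in self._versions[name]
                    if e["stage"] == "production")

registry = ModelRegistry()
registry.register("churn", "s3://models/churn/v1", {"auc": 0.81})
v2 = registry.register("churn", "s3://models/churn/v2", {"auc": 0.86})
registry.promote("churn", v2)     # retrained model beats v1, goes live
live = registry.production_model("churn")
```

The design point is that inference services look up "the production version of model X" rather than a hard-coded artifact path, which is what makes retraining and rollback routine.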
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Organization Snapshot:
Birdeye is the leading all-in-one Experience Marketing platform, trusted by over 100,000 businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on; it is at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time.

Job Overview:
Birdeye is seeking a Senior Data Scientist – NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you're passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and autonomous agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering.

Key Responsibilities:
LLM, GenAI & Agentic AI Systems
Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models.
Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance.
Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks).
Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization.

Conversational AI & NLP Engineering
Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog, leveraging orchestration tools like LangChain, Rasa, and LLMFlow.
Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction.
Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and Hugging Face Transformers.
Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems.

Scalable AI/ML Engineering
Build and serve models using Python, FastAPI, gRPC, and REST APIs.
Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows.
Ensure production-grade reliability, latency optimization, observability, and failover mechanisms.

Cloud & MLOps Infrastructure
Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services.
Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX.

Cross-Functional Collaboration
Partner with Product, Engineering, and Design teams to define AI-first experiences.
Translate ambiguous business problems into structured ML/AI projects with measurable ROI.
Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews.

Technical Skillset Required
Programming: Expert in Python, with strong OOP and data structure fundamentals.
Frameworks: Proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, OpenAI/Anthropic APIs.
NLP/LLM: Strong grasp of Transformer architecture, attention mechanisms, self-supervised learning, and LLM evaluation techniques.
MLOps: Skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP.
Databases: Hands-on with SQL/NoSQL databases, vector DBs, and retrieval systems.
Tooling: Familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents.
Applied Research: Experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems.

Bonus Points
Contributions to open-source NLP or LLM projects.
Publications in AI/NLP/ML conferences or journals.
Experience in Online Reputation Management (ORM), martech, or CX platforms.
Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
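The RAG design work described in this role hinges on one retrieval step: ranking stored document embeddings against a query embedding. A stdlib-only sketch with tiny hand-made vectors; a real system would use learned embeddings and a vector DB such as FAISS, Pinecone, or Weaviate for approximate search at scale:

```python
# Toy retrieval step of a RAG pipeline: rank documents by cosine
# similarity to the query embedding, then hand the top-k hits to the
# LLM as context. Vectors below are invented for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, doc_vecs, k=2):
    """Return the ids of the k documents most similar to the query."""
    scored = sorted(doc_vecs.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = {
    "refund_policy": [0.9, 0.1, 0.0],
    "pricing_page":  [0.2, 0.8, 0.1],
    "api_reference": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]          # e.g. "how do I get a refund?"
hits = top_k(query, docs, k=2)      # retrieved context for the prompt
```

Domain-specific contextualization comes entirely from what is indexed: the model never sees the whole corpus, only the top-k chunks retrieved per query.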
Posted 1 day ago
3.0 years
0 Lacs
Kondapur, Telangana, India
On-site
What You'll Do
Design & build backend components of our MLOps platform in Python on AWS.
Collaborate with geographically distributed cross-functional teams.
Participate in on-call rotation with the rest of the team to handle production incidents.

What You Know
At least 3 years of professional backend development experience with Python.
Experience with web development frameworks such as Flask or FastAPI.
Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn, etc.
Experience with concurrent programming designs such as AsyncIO.
Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
Experience with unit and functional testing frameworks.
Experience with public cloud platforms like AWS.
Experience with CI/CD practices, tools, and frameworks.

Nice to Have Skills
Experience with Apache Kafka and developing Kafka client applications in Python.
Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
Experience with big data processing frameworks, preferably Apache Spark.
Experience with DevOps & IaC tools such as Terraform, Jenkins, etc.
Experience with various Python packaging options such as Wheel, PEX, or Conda.
Experience with metaprogramming techniques in Python.

Education
Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field.

Benefits
In addition to competitive salaries and benefits packages, Nisum India offers its employees some unique and fun extras:
Continuous Learning - Year-round training sessions are offered as part of skill enhancement certifications sponsored by the company on an as-needed basis. We support our team to excel in their field.
Parental Medical Insurance - Nisum believes our team is the heart of our business and we want to make sure to take care of the heart of theirs. We offer opt-in parental medical insurance in addition to our medical benefits.
Activities - From the Nisum Premier League's cricket tournaments to hosted hackathons, Nisum employees can participate in a variety of team-building activities, such as skits and dance performances, in addition to festival celebrations.
Free Meals - Free snacks and dinner are provided on a daily basis, in addition to subsidized lunch.
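The AsyncIO requirement in this backend role can be shown in a few lines: fan out several slow I/O-bound calls concurrently so total latency tracks the slowest call rather than the sum, which is the pattern ASGI servers like Uvicorn rely on to keep one worker serving many requests:

```python
# Minimal AsyncIO sketch: three simulated I/O calls of 0.2 s each run
# concurrently via asyncio.gather, finishing in ~0.2 s instead of ~0.6 s.
import asyncio
import time

async def fetch(name, delay):
    await asyncio.sleep(delay)       # stands in for a DB or HTTP call
    return f"{name}:done"

async def main():
    start = time.monotonic()
    results = await asyncio.gather(
        fetch("users", 0.2),
        fetch("orders", 0.2),
        fetch("stock", 0.2),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
```

Note that this only helps for I/O-bound waits; CPU-bound work still needs threads, processes, or task queues.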
Posted 1 day ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hiring for top Unicorns & Soonicorns of India!
We're looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You'll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You'll Do
Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction.
Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
Clean, process, and analyze large-scale datasets to derive insights and train models.
Design A/B tests and evaluate model performance using robust statistical methods.
Continuously iterate on models based on feedback, model drift, and changing business needs.
Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We're Looking For
2–4 years of experience as a Machine Learning Engineer or Data Scientist.
Strong grasp of supervised, unsupervised, and deep learning techniques.
Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
Experience with data wrangling tools like Pandas, NumPy, and SQL.
Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
Experience working in EdTech or with personalized learning systems.
Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
Contributions to open-source ML projects or publications in the space.
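The A/B-testing responsibility above typically reduces to a two-proportion z-test on conversion rates. A stdlib-only sketch; the conversion counts below are invented for illustration:

```python
# Two-proportion z-test for an A/B experiment: does variant B's
# conversion rate differ significantly from variant A's?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score under H0: both variants share one conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up experiment: 5.00% vs 6.75% conversion over 2,400 users each.
z = two_proportion_z(conv_a=120, n_a=2400, conv_b=162, n_b=2400)
significant = abs(z) > 1.96        # ~5% two-sided significance threshold
```

In practice one would also pre-register the sample size via a power calculation rather than peeking at the z-score as data arrives.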
Posted 2 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Must Have Skills:
Experience in the design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG (Retrieval-Augmented Generation) workflows.
Extensive knowledge of large language models, natural language processing techniques, and prompt engineering.
Experience with Oracle Fusion Cloud Applications (ERP and HCM) and their AI capabilities.
Proficiency in developing AI chatbots using ODA (Oracle Digital Assistant), Fusion APIs, OIC (Oracle Integration Cloud), and GenAI services.
Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios.
Familiarity with Oracle Cloud Infrastructure or similar cloud platforms.
Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
Analyzes problems, identifies solutions, and makes decisions.
Demonstrates a willingness to learn, adapt, and grow professionally.

Good to Have Skills:
Experience in LLM architectures, model evaluation, and fine-tuning techniques.
Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
Proficiency in programming languages (e.g., Java, Python) and databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, graph stores, vector stores, and pipelines.
Knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.
Understanding of the security and compliance requirements for ML/GenAI implementations.

Career Level - IC3

Responsibilities
Be a leader, a 'doer', with a vision and desire to contribute to a business with a technical project focus and a mentality that centers around a superior customer experience.
Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of the delivery of these solutions across EMEA.
Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success.
Take a proactive approach to the task; be a self-learner. Self-motivated, you have the natural drive to learn and pick up new challenges.
Results orientation: you won't be satisfied until the job is done with the right quality.
Ability to work in (virtual) teams. Getting a specific job done often requires working together with many colleagues spread out in different countries.
Comfortable in a collaborative, agile environment.
Ability to adapt to change with a positive mindset.
Good communication skills, both oral and written. You are able not only to understand complex technologies but also to explain them to others in clear and simple terms. You can articulate key messages very clearly. Able to present in front of customers.
https://www.oracle.com/in/cloud/cloud-lift/

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process.
If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 days ago
40.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Must Have Skills:
Experience in the design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG (Retrieval-Augmented Generation) workflows.
Extensive knowledge of large language models, natural language processing techniques, and prompt engineering.
Experience with Oracle Fusion Cloud Applications (ERP and HCM) and their AI capabilities.
Proficiency in developing AI chatbots using ODA (Oracle Digital Assistant), Fusion APIs, OIC (Oracle Integration Cloud), and GenAI services.
Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios.
Familiarity with Oracle Cloud Infrastructure or similar cloud platforms.
Excellent communication and collaboration skills with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
Analyzes problems, identifies solutions, and makes decisions.
Demonstrates a willingness to learn, adapt, and grow professionally.

Good to Have Skills:
Experience in LLM architectures, model evaluation, and fine-tuning techniques.
Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
Proficiency in programming languages (e.g., Java, Python) and databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, graph stores, vector stores, and pipelines.
Knowledge of GPU cluster architecture and the ability to leverage parallel processing for accelerated model training and inference.
Understanding of the security and compliance requirements for ML/GenAI implementations.

Career Level - IC3

Responsibilities
Be a leader, a 'doer', with a vision and desire to contribute to a business with a technical project focus and a mentality that centers around a superior customer experience.
Bring entrepreneurial and innovative flair, with the tenacity to develop and prove delivery ideas independently, then share these as examples to improve the efficiency of the delivery of these solutions across EMEA.
Work effectively across internal, external, and culturally diverse lines of business to define and deliver customer success.
Take a proactive approach to the task; be a self-learner. Self-motivated, you have the natural drive to learn and pick up new challenges.
Results orientation: you won't be satisfied until the job is done with the right quality.
Ability to work in (virtual) teams. Getting a specific job done often requires working together with many colleagues spread out in different countries.
Comfortable in a collaborative, agile environment.
Ability to adapt to change with a positive mindset.
Good communication skills, both oral and written. You are able not only to understand complex technologies but also to explain them to others in clear and simple terms. You can articulate key messages very clearly. Able to present in front of customers.
https://www.oracle.com/in/cloud/cloud-lift/

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process.
If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Summary
Designs, develops, tests, debugs, and implements more complex operating systems components, software tools, and utilities with full competency. Coordinates with users to determine requirements. Reviews systems under development and related documentation. Makes more complex modifications to existing software to fit specialized needs and configurations, and maintains program libraries and technical documentation. May coordinate activities of the project team and assist in monitoring project schedules and costs.

Essential Duties And Responsibilities
Lead and manage configuration, maintenance, and support of a portfolio of AI models and related products.
Manage model delivery to the production deployment team and coordinate model production deployments.
Analyze complex data requirements, understand exploratory data analysis, and design solutions that meet business needs.
Work on analyzing data profiles, transformation, quality, and security with the dev team to build and enhance data pipelines while maintaining proper quality and control around the data sets.
Work closely with cross-functional teams, including business analysts, data engineers, and domain experts.
Understand business requirements and translate them into technical solutions.
Understand and review the business use cases for data pipelines for the Data Lake, including ingestion, transformation, and storing in the Lakehouse.
Present architecture and solutions to executive level.

Minimum Qualifications
Bachelor's or master's degree in computer science, engineering, or a related technical field
Minimum of 5 years' experience in building data pipelines for both structured and unstructured data.
At least 2 years' experience in Azure data pipeline development.
Preferably 3 or more years' experience with Hadoop, Azure Databricks, Stream Analytics, Event Hub, Kafka, and Flink.
Strong proficiency in Python and SQL
Experience with big data technologies (Spark, Hadoop, Kafka)
Familiarity with ML frameworks (TensorFlow, PyTorch, scikit-learn)
Knowledge of model serving technologies (TensorFlow Serving, MLflow, Kubeflow) will be a plus
Experience with one of the cloud platforms (Azure preferred) and their data services; an understanding of ML services is preferred
Understanding of containerization and orchestration (Docker, Kubernetes)
Experience with data versioning and ML experiment tracking will be a great addition
Knowledge of distributed computing principles
Familiarity with DevOps practices and CI/CD pipelines

Preferred Qualifications
Bachelor's degree in Computer Science or equivalent work experience.
Experience with Agile/Scrum methodology.
Experience with the tax and accounting domain a plus.
Azure Data Scientist certification a plus.
Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 2 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: AI Engineer Experience: 6-12 Years Location: Bangalore, Gurugram (Work From Office – 5 Days) Employment Type: Full-Time Interested Candidates Apply Here: https://forms.gle/h3pXgxc57kUB3UJX6

Job Description: We are looking for a seasoned AI Engineer with experience in designing and deploying AI/ML solutions, with a strong focus on Generative AI (GenAI) and Large Language Models (LLMs). The ideal candidate should have deep expertise in Python, machine learning, and artificial intelligence, along with a strong understanding of real-world AI productization. As part of our advanced tech team in Bangalore, you will play a critical role in building and integrating AI-powered applications that solve real business problems.

Key Responsibilities: Design, develop, and deploy AI/ML solutions with a focus on Generative AI and LLMs (e.g., GPT, BERT, Claude, etc.). Fine-tune and customize foundation models for specific domain or business requirements. Develop intelligent systems using Python and modern ML frameworks. Collaborate with cross-functional teams including product, data engineering, and design to integrate AI into core products. Evaluate, test, and implement third-party APIs, libraries, and models relevant to AI/ML/LLM tasks. Optimize model performance, latency, and scalability for production use. Stay up to date with the latest advancements in AI, machine learning, and GenAI technologies.

Required Skills: 9+ years of professional experience in AI/ML roles. Strong expertise in Generative AI, LLMs, and natural language processing (NLP). Proficient in Python and libraries like TensorFlow, PyTorch, and Hugging Face Transformers. Solid background in machine learning algorithms, deep learning, and model evaluation techniques. Experience deploying and maintaining ML models in production. Strong problem-solving skills and ability to handle ambiguity in requirements.

Good to Have: Experience with prompt engineering and few-shot/fine-tuning techniques. Knowledge of MLOps practices and tools (e.g., MLflow, Kubeflow, Airflow). Familiarity with vector databases and retrieval-augmented generation (RAG) pipelines. Cloud experience (AWS, GCP, or Azure).
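The retrieval step of the RAG pipelines mentioned above can be illustrated without any external services. This sketch scores documents against a query with bag-of-words cosine similarity; a production system would use an embedding model and a vector database instead, and every name and document here is invented for the example.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy term-frequency vector; stands in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query; the 'R' in RAG."""
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

docs = [
    "LLMs generate text from prompts",
    "Vector databases store embeddings for retrieval",
    "Kubernetes orchestrates containers",
]
context = retrieve("how are embeddings stored for retrieval", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

The retrieved context is then prepended to the user question, which is the augmentation step before the LLM call.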
Posted 2 days ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Title: Computer Vision Engineer Location: Coimbatore (work-from-office role) Experience Required: 2+ years Employment Type: Full-Time, Permanent Company: Katomaran Technologies

About Us: Katomaran Technologies is a cutting-edge technology company building real-world AI applications spanning computer vision, large language models (LLMs), and AI agentic systems. We are looking for a highly motivated Senior AI Developer who thrives at the intersection of technology, leadership, and innovation. You will play a core role in architecting AI products, leading engineering teams, and collaborating directly with the founding team and customers to turn vision into scalable, production-ready solutions.

Key Responsibilities: Architect and develop scalable AI solutions using computer vision, LLMs, and agent-based AI architectures. Collaborate with the founding team to define product roadmaps and AI strategy. Lead and mentor a team of AI and software engineers, ensuring high code quality and on-time project delivery. Develop robust, efficient pipelines for model training, validation, deployment, and real-time inference. Work closely with customers and internal stakeholders to translate requirements into AI-powered applications. Stay up to date with state-of-the-art research in vision models (YOLO, SAM, CLIP, etc.), transformers, and agentic systems (AutoGPT-style orchestration). Optimize AI models for deployment on cloud and edge environments.

Required Skills and Qualifications: Bachelor's or Master's in Computer Science, AI, Machine Learning, or related fields. 2+ years of hands-on experience building AI applications in computer vision and/or NLP. Strong knowledge of deep learning frameworks (PyTorch, TensorFlow, OpenCV, Hugging Face, etc.). Proven experience with LLM fine-tuning, prompt engineering, and embedding-based retrieval (RAG). Solid understanding of agentic systems such as LangGraph, CrewAI, AutoGen, or custom orchestrators.
Ability to design and manage production-grade AI systems (Docker, REST APIs, GPU optimization, etc.). Strong communication and leadership skills, with experience managing small to mid-size teams. Startup mindset: self-driven, ownership-oriented, and comfortable with ambiguity.

Nice to Have: Experience with video analytics platforms or edge deployment (Jetson, Coral, etc.). Programming experience in C++ is an added advantage. Knowledge of MLOps practices and tools (MLflow, Weights & Biases, ClearML, etc.). Exposure to Reinforcement Learning or multi-agent collaboration models. Customer-facing experience or involvement in AI product strategy.

What We Offer: Medical insurance, paid sick time, paid time off, PF.

To Apply: Send your resume, GitHub/portfolio, and a brief note about your most exciting AI project to hr@katomaran.com.
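A small, self-contained example of the detection post-processing behind vision models like the YOLO family mentioned above: non-maximum suppression (NMS), which keeps the strongest box among heavily overlapping detections. The box coordinates and scores below are invented; this is a sketch of the standard algorithm, not code from any particular framework.

```python
# Boxes are (x1, y1, x2, y2, score) tuples in pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep the highest-scoring box, drop overlapping ones, repeat."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [
    (10, 10, 50, 50, 0.9),     # strongest detection
    (12, 12, 52, 52, 0.6),     # heavy overlap with the first -> suppressed
    (100, 100, 140, 140, 0.8), # separate object -> kept
]
print(nms(detections))
```

Production pipelines usually run a vectorized NMS on the GPU, but the logic is exactly this greedy loop.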
Posted 2 days ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
On-site
Designation: Senior Analyst – Data Science Level: L2 Experience: 4 to 6 years Location: Chennai Job Description: We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments. Responsibilities: Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment Develop CI/CD pipelines for machine learning models ensuring reliable and automated deployment processes Monitor model performance in production and implement automated retraining and rollback mechanisms Collaborate with data scientists to productionize models and optimize them for performance and scalability Implement model versioning, experiment tracking, and metadata management systems Build monitoring and alerting systems for model drift, data quality, and system performance Manage containerization and orchestration of ML workloads using Docker and Kubernetes Optimize model serving infrastructure for low-latency predictions and high throughput Ensure compliance with financial regulations and implement proper model governance frameworks Skills: 4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services Strong expertise in deploying and scaling machine learning models in production environments Extensive experience with Metaflow for ML pipeline orchestration and workflow management Advanced proficiency with Git and version control systems, including branching 
strategies and collaborative workflows. Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes). Strong programming skills in Python with experience in ML libraries (pandas, NumPy, scikit-learn). Experience with CI/CD tools and practices for ML workflows. Knowledge of distributed computing and cloud-based ML infrastructure. Understanding of model monitoring, A/B testing, and feature store management.

Additional Skillsets: Experience with Hex or similar data analytics platforms. Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems. Experience with real-time model serving and streaming data processing. Familiarity with MLflow, Kubeflow, or other ML lifecycle management tools. Understanding of financial regulations and model governance requirements.

Job Snapshot: Updated Date 13-06-2025, Job ID J_3745, Location Chennai, Tamil Nadu, India, Experience 4-6 Years, Employee Type Permanent.
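One common metric behind the "monitoring and alerting systems for model drift" duty above is the Population Stability Index (PSI), which compares a production feature or score distribution against the training-time baseline. The bucket proportions below are invented for illustration; real pipelines would bucket live values and feed the result into an alerting system.

```python
import math

def psi(expected, actual):
    """PSI over pre-bucketed proportions; > 0.25 is a common alert threshold."""
    eps = 1e-6  # avoid log(0) on empty buckets
    score = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        score += (a - e) * math.log(a / e)
    return score

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time score distribution
stable   = [0.24, 0.26, 0.25, 0.25]  # production sample, no drift
shifted  = [0.05, 0.15, 0.30, 0.50]  # production sample, clear drift

print(round(psi(baseline, stable), 4))   # near zero
print(round(psi(baseline, shifted), 4))  # well above the 0.25 threshold
```

The 0.1 (watch) and 0.25 (alert) cut-offs are conventions from credit-risk practice, which fits the fintech context of this role.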
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

RESPONSIBILITIES: Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector databases, embedding and reranking models, governance and observability systems, and guardrails). Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform). Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute. Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas). Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings). Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production. Implement security best practices including secrets management, model access control, data encryption, and audit logging for AI pipelines. Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves: 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments. Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management. Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.). Proficiency in scripting languages like Python and Bash (Go or similar is a nice plus). Experience with monitoring, logging, and alerting systems for AI/ML workloads. Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes: Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines. Familiarity with prompt engineering, model fine-tuning, and inference serving. Experience with secure AI deployment and compliance frameworks. Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities: Ability to work with a high level of initiative, accuracy, and attention to detail. Ability to prioritize multiple assignments effectively. Ability to meet established deadlines. Ability to interact successfully, efficiently, and professionally with staff and customers. Excellent organization skills. Critical thinking ability, ranging from moderately to highly complex problems. Flexibility in meeting the business needs of the customer and the company. Ability to work creatively and independently with latitude and minimal supervision. Ability to use experience and judgment in accomplishing assigned goals. Experience navigating organizational structure.
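The "model versioning ... and scalable rollback strategies" bullet above can be sketched with a tiny in-memory registry. This is purely illustrative: the ModelRegistry class and its methods are invented for the example, standing in for a real store such as the MLflow Model Registry.

```python
class ModelRegistry:
    """Toy model registry: append-only versions plus a 'live' pointer."""

    def __init__(self):
        self.versions = []      # append-only history of (version, artifact)
        self.live_index = None  # index of the version currently serving

    def register(self, artifact):
        self.versions.append((len(self.versions) + 1, artifact))
        return len(self.versions)

    def promote(self, version):
        """Point serving traffic at the given version."""
        self.live_index = version - 1

    def rollback(self):
        """Step serving back one version, e.g. after a drift alert."""
        if self.live_index and self.live_index > 0:
            self.live_index -= 1

    @property
    def live(self):
        return self.versions[self.live_index] if self.live_index is not None else None

reg = ModelRegistry()
reg.register("model-v1.bin")
reg.register("model-v2.bin")
reg.promote(2)   # v2 goes live
reg.rollback()   # drift detected -> fall back to v1
print(reg.live)  # (1, 'model-v1.bin')
```

Keeping history append-only is what makes rollback cheap: rolling back is a pointer move, not a redeploy of artifacts.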
Posted 2 days ago
4.0 years
0 Lacs
West Bengal, India
On-site
Hiring for top Unicorns & Soonicorns of India! We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems. What You’ll Do Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction. Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements. Clean, process, and analyze large-scale datasets to derive insights and train models. Design A/B tests and evaluate model performance using robust statistical methods. Continuously iterate on models based on feedback, model drift, and changing business needs. Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring. What We’re Looking For 2–4 years of experience as a Machine Learning Engineer or Data Scientist. Strong grasp of supervised, unsupervised, and deep learning techniques. Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with data wrangling tools like Pandas, NumPy, and SQL. Familiarity with model deployment tools like Flask, FastAPI, or MLflow. Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus. Ability to translate business problems into machine learning problems and communicate solutions clearly. Bonus If You Have Experience working in EdTech or with personalized learning systems. Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product. Contributions to open-source ML projects or publications in the space. 
Skills: machine learning, Flask, Python, scikit-learn, TensorFlow, PyTorch, Azure, SQL, pandas, models, Docker, MLflow, data analysis, AWS, Kubernetes, ML, artificial intelligence, FastAPI, GCP, NumPy
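The "design A/B tests and evaluate model performance using robust statistical methods" duty above can be illustrated with a two-proportion z-test, a standard way to compare conversion rates between a control and a variant. The counts below are invented for the example.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control: 120 conversions out of 1000; variant: 160 out of 1000.
z = two_proportion_z(conv_a=120, n_a=1000, conv_b=160, n_b=1000)
print(round(z, 3))  # |z| > 1.96 -> significant at the 5% level
```

In practice one would also fix the sample size in advance via a power calculation rather than peeking at the z score mid-experiment.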
Posted 2 days ago