
1795 MLflow Jobs - Page 7

Set up a Job Alert
JobPe aggregates listings so they are easy to find, but you apply directly on the original job portal.

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description: We are seeking an experienced MDM Engineer with 8–12 years of experience to lead development and operations of our Master Data Management (MDM) platforms, with hands-on data engineering experience. This role involves handling the backend data engineering solution within the MDM team. This is a technical role that requires hands-on work. To succeed in this role, the candidate must have strong data engineering experience with technologies such as SQL, Python, PySpark, Databricks, AWS, and API integrations.

Roles & Responsibilities:
Develop distributed data pipelines using PySpark on Databricks for ingesting, transforming, and publishing master data
Write optimized SQL for large-scale data processing, including complex joins, window functions, and CTEs for MDM logic
Implement match/merge algorithms and survivorship rules using Informatica MDM or Reltio APIs
Build and maintain Delta Lake tables with schema evolution and versioning for master data domains
Use AWS services like S3, Glue, Lambda, and Step Functions for orchestrating MDM workflows
Automate data quality checks using IDQ or custom PySpark validators with rule-based profiling
Integrate external enrichment sources (e.g., D&B, LexisNexis) via REST APIs and batch pipelines
Design and deploy CI/CD pipelines using GitHub Actions or Jenkins for Databricks notebooks and jobs
Monitor pipeline health using the Databricks Jobs API, CloudWatch, and custom logging frameworks
Implement fine-grained access control using Unity Catalog and attribute-based policies for MDM datasets
Use MLflow for tracking model-based entity resolution experiments if ML-based matching is applied
Collaborate with data stewards to expose curated MDM views via REST endpoints or Delta Sharing

Basic Qualifications and Experience: 8 to 13 years of experience in Business, Engineering, IT or a related field

Functional Skills:
Must-Have Skills:
Advanced proficiency in PySpark for distributed data processing and transformation
Strong SQL skills for complex data modeling, cleansing, and aggregation logic
Hands-on experience with Databricks, including Delta Lake, notebooks, and job orchestration
Deep understanding of MDM concepts including match/merge, survivorship, and golden record creation
Experience with MDM platforms like Informatica MDM or Reltio, including REST API integration
Proficiency in AWS services such as S3, Glue, Lambda, Step Functions, and IAM
Familiarity with data quality frameworks and tools like Informatica IDQ or custom rule engines
Experience building CI/CD pipelines for data workflows using GitHub Actions, Jenkins, or similar
Knowledge of schema evolution, versioning, and metadata management in data lakes
Ability to implement lineage and observability using Unity Catalog or third-party tools
Comfort with Unix shell scripting or Python for orchestration and automation
Hands-on experience with RESTful APIs for ingesting external data sources and enrichment feeds

Good-to-Have Skills:
Experience with Tableau or Power BI for reporting MDM insights.
Exposure to Agile practices and tools (JIRA, Confluence).
Prior experience in Pharma/Life Sciences.
Understanding of compliance and regulatory considerations in master data.

Professional Certifications:
Any MDM certification (e.g., Informatica, Reltio)
Any data analysis certification (SQL, Python, PySpark, Databricks)
Any cloud certification (AWS or Azure)

Soft Skills:
Strong analytical abilities to assess and improve master data processes and solutions.
Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders.
Effective problem-solving skills to address data-related issues and implement scalable solutions.
Ability to work effectively with global, virtual teams.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

GCF Level 05A
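For illustration of the kind of pipeline this posting describes, here is a minimal sketch, assuming PySpark on Databricks with Delta Lake, a hypothetical raw path, table name, and a single last-updated survivorship rule (none of these are Amgen's actual schema): deduplicate incoming records with a window function and merge the survivors into a Delta master table with schema evolution enabled.

```python
from pyspark.sql import SparkSession, functions as F, Window
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Hypothetical source path -- placeholder, not a real Amgen location.
raw = spark.read.format("json").load("/mnt/raw/customer_feed/")

# Simple survivorship rule: keep the most recently updated record per source_id.
w = Window.partitionBy("source_id").orderBy(F.col("last_updated").desc())
golden = (
    raw.withColumn("rank", F.row_number().over(w))
       .filter("rank = 1")
       .drop("rank")
)

# Merge survivors into an existing Delta master table, letting new columns evolve the schema.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")
master = DeltaTable.forName(spark, "mdm.customer_master")   # assumed to exist already
(
    master.alias("m")
    .merge(golden.alias("g"), "m.source_id = g.source_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```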

Posted 6 days ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI Engineer – Voice, NLP, and GenAI Systems
Location: Sector 63, Gurgaon – 100% In-Office
Working Days: Monday to Friday, with 2nd and 4th Saturdays off
Working Hours: 10:30 AM to 8:00 PM
Experience: 2–6 years in AI/ML, NLP, or applied machine learning engineering
Apply at: careers@darwix.ai
Subject Line: Application – AI Engineer – [Your Name]

About Darwix AI
Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite—Transform+, Sherpa.ai, and Store Intel—powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement.

Role Overview
As an AI Engineer, you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams.

Key Responsibilities
AI & NLP System Development
Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization.
Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data.
Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations.

LLM & Prompt Engineering
Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases.
Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules.
Develop internal tools for model evaluation and prompt performance tracking.

Productionization & Integration
Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI).
Optimize inference time and resource utilization for real-time and batch processing needs.
Implement monitoring and logging for production ML systems to track drift and failure cases.

Data & Evaluation
Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance.
Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks.
Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency.

Collaboration & Research
Work closely with product managers to translate business problems into model design requirements.
Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI.
Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders.

Required Skills & Qualifications
2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications.
Strong programming skills in Python , including libraries like PyTorch, TensorFlow, Hugging Face Transformers. Experience with speech processing , audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines , embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – AI Engineer – [Your Name] (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India’s fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams—Darwix AI wants to hear from you.
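As a small, hedged sketch of the "deploy models as APIs" requirement above: a FastAPI service wrapping a Hugging Face sentiment pipeline. The service name, route, and the public DistilBERT model are illustrative stand-ins, not Darwix AI's actual stack or scoring model.

```python
# Minimal model-serving sketch: run with `uvicorn app:app --port 8000`.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="conversation-scoring-demo")

# Load once at startup; a small public sentiment model stands in for a fine-tuned scorer.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

class Utterance(BaseModel):
    text: str

@app.post("/score")
def score(utterance: Utterance):
    # Returns e.g. {"label": "POSITIVE", "score": 0.98}
    result = classifier(utterance.text)[0]
    return {"label": result["label"], "score": float(result["score"])}
```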

Posted 6 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Role: Azure Data Engineer
Location: Bangalore | Hyderabad | Noida | Gurgaon | Pune
Experience: 5+ Years
Notice Period: Immediate Joiners Only

Job Description
Required Skills & Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
5-8 years of experience in data engineering with a focus on Azure cloud technologies.
Strong proficiency in Azure Databricks, Apache Spark, and PySpark.
Hands-on experience with Azure Synapse Analytics, including dedicated SQL pools and serverless SQL.
Proficiency in SQL, Python, and ETL/ELT processes.
Experience with Azure Data Factory, Azure Data Lake, and Azure Blob Storage.
Familiarity with data governance, security, and compliance in cloud environments.
Excellent problem-solving and communication skills.

Preferred Qualifications:
Azure certifications such as Azure Data Engineer Associate (DP-203).
Experience with Delta Lake, MLflow, and Power BI integration.
Knowledge of DevOps practices and tools (e.g., Git, Azure DevOps).
Experience in Agile/Scrum environments.

Posted 6 days ago

Apply

40.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description Must Have Skills: Extensive knowledge of large language models, natural language processing techniques and prompt engineering. Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios. Experience in design, build, and deployment of innovative applications utilizing Gen AI technologies such as RAG (Retrieval-Augmented Generation) based chatbots or AI Agents. Proficiency in programming using Python or Java. Familiarity with Oracle Cloud Infrastructure or similar cloud platforms. Effective communication and presentation skills. Analyzes problems, identifies solutions, and makes decisions. Demonstrates a willingness to learn, adapt, and grow professionally. Good to Have Skills: Experience in LLM architectures, model evaluation, and fine-tuning techniques. Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, TensorFlow, PyTorch, LLM Cache, LLMOps (MLFlow), LMQL, Guidance, etc. Proficiency in databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, Graph Stores, Vector Stores and pipelines. Understanding of the security and compliance requirements for ML/GenAI implementations. Career Level - IC2/IC3 Responsibilities As a member of Oracle Cloud LIFT, you’ll help guide our customers from concept to successful cloud deployment. You’ll: Shape architecture and solution design with best practices and experience. Own the delivery of agreed workload implementations. Validate and test deployed solutions. Conduct security assurance reviews. You’ll work in a fast-paced, international environment, engaging with customers across industries and regions. You’ll collaborate with peers, sales, architects, and consulting teams to make cloud transformation real. https://www.oracle.com/in/cloud/cloud-lift/ Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
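To make the "RAG-based chatbots" item above concrete, here is a bare-bones retrieval-augmented generation loop sketched under stated assumptions: embeddings come from the sentence-transformers library, the three documents are invented, and generate_answer() is a hypothetical placeholder for whatever LLM endpoint is actually used; this is not Oracle's implementation.

```python
# Minimal RAG sketch: embed documents, retrieve the most similar ones, build a grounded prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above $500.",
    "Support is available 24x7 via the customer portal.",
]
doc_vectors = model.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q                      # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def generate_answer(question: str, context: list[str]) -> str:
    # Placeholder: a real system would send this prompt to the chosen LLM service.
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQ: {question}"
    return prompt  # returned as-is so the sketch runs end to end without an LLM

question = "How fast are invoices processed?"
print(generate_answer(question, retrieve(question)))
```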

Posted 6 days ago

Apply

5.0 years

0 Lacs

Greater Bengaluru Area

On-site

What if the work you did every day could impact the lives of people you know? Or all of humanity? At Illumina, we are expanding access to genomic technology to realize health equity for billions of people around the world. Our efforts enable life-changing discoveries that are transforming human health through the early detection and diagnosis of diseases and new treatment options for patients. Working at Illumina means being part of something bigger than yourself. Every person, in every role, has the opportunity to make a difference. Surrounded by extraordinary people, inspiring leaders, and world changing projects, you will do more and become more than you ever thought possible. Position Summary We are seeking a highly skilled Senior Data Engineer Developer with 5+ years of experience to join our talented team in Bangalore. In this role, you will be responsible for designing, implementing, and optimizing data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Additionally, you will bring strong domain expertise in operations organizations, with a focus on supply chain and manufacturing functions. If you're a seasoned data engineer with a proven track record of delivering impactful data solutions in operations contexts, we want to hear from you. Responsibilities Lead the design, development, and optimization of data pipelines, ETL processes, and data integration solutions using Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Apply strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing, to understand data requirements and deliver tailored solutions. Utilize big data processing frameworks such as Apache Spark to process and analyze large volumes of operational data efficiently. Implement data transformations, aggregations, and business logic to support analytics, reporting, and operational decision-making. Leverage cloud-based data platforms such as Snowflake to store and manage structured and semi-structured operational data at scale. Utilize dbt (Data Build Tool) for data modeling, transformation, and documentation to ensure data consistency, quality, and integrity. Monitor and optimize data pipelines and ETL processes for performance, scalability, and reliability in operations contexts. Conduct data profiling, cleansing, and validation to ensure data quality and integrity across different operational data sets. Collaborate closely with cross-functional teams, including operations stakeholders, data scientists, and business analysts, to understand operational challenges and deliver actionable insights. Stay updated on emerging technologies and best practices in data engineering and operations management, contributing to continuous improvement and innovation within the organization. All listed requirements are deemed as essential functions to this position; however, business conditions may require reasonable accommodations for additional task and responsibilities. Preferred Experience/Education/Skills Bachelor's degree in Computer Science, Engineering, Operations Management, or related field. 5+ years of experience in data engineering, with proficiency in Python, Spark, SQL, Snowflake, dbt, and other relevant technologies. Strong domain expertise in operations organizations, particularly in functions like supply chain and manufacturing. 
Strong domain expertise in life sciences manufacturing equipment, with a deep understanding of industry-specific challenges, processes, and technologies. Experience with big data processing frameworks such as Apache Spark and cloud-based data platforms such as Snowflake. Hands-on experience with data modeling, ETL development, and data integration in operations contexts. Familiarity with dbt (Data Build Tool) for managing data transformation and modeling workflows. Familiarity with reporting and visualization tools like Tableau, Power BI, etc. Good understanding of advanced data engineering and data science practices and technologies such as PySpark, SageMaker, Cloudera, MLflow, etc. Experience with SAP, SAP HANA, and Teamcenter applications is a plus. Excellent problem-solving skills, analytical thinking, and attention to detail. Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and operations stakeholders. Eagerness to learn and adapt to new technologies and tools in a fast-paced environment.

We are a company deeply rooted in belonging, promoting an inclusive environment where employees feel valued and empowered to contribute to our mission. Built on a strong foundation, Illumina has always prioritized openness, collaboration, and seeking alternative perspectives to propel innovation in genomics. We are proud to confirm a zero-net gap in pay, regardless of gender, ethnicity, or race. We also have several Employee Resource Groups (ERG) that deliver career development experiences, increase cultural awareness, and offer opportunities to engage in social responsibility. We are proud to be an equal opportunity employer committed to providing employment opportunity regardless of sex, race, creed, color, gender, religion, marital status, domestic partner status, age, national origin or ancestry, physical or mental disability, medical condition, sexual orientation, pregnancy, military or veteran status, citizenship status, and genetic information. Illumina conducts background checks on applicants for whom a conditional offer of employment has been made. Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable local, state, and federal laws. Background check results may potentially result in the withdrawal of a conditional offer of employment. The background check process and any decisions made as a result shall be made in accordance with all applicable local, state, and federal laws. Illumina prohibits the use of generative artificial intelligence (AI) in the application and interview process. If you require accommodation to complete the application or interview process, please contact accommodations@illumina.com. To learn more, visit: https://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf. The position will be posted until a final candidate is selected or the requisition has a sufficient number of qualified applicants. This role is not eligible for visa sponsorship.

Posted 6 days ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Must Have Skills: Extensive knowledge of large language models, natural language processing techniques and prompt engineering. Experience in testing and validation processes to ensure the models' accuracy and efficiency in real-world scenarios. Experience in design, build, and deployment of innovative applications utilizing Gen AI technologies such as RAG (Retrieval-Augmented Generation) based chatbots or AI Agents. Proficiency in programming using Python or Java. Familiarity with Oracle Cloud Infrastructure or similar cloud platforms. Effective communication and presentation skills. Analyzes problems, identifies solutions, and makes decisions. Demonstrates a willingness to learn, adapt, and grow professionally. Good to Have Skills: Experience in LLM architectures, model evaluation, and fine-tuning techniques. Hands-on experience with emerging LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, TensorFlow, PyTorch, LLM Cache, LLMOps (MLFlow), LMQL, Guidance, etc. Proficiency in databases (e.g., Oracle, MySQL), developing and executing AI over any of the cloud data platforms, associated data stores, Graph Stores, Vector Stores and pipelines. Understanding of the security and compliance requirements for ML/GenAI implementations. Career Level - IC2/IC3 Responsibilities As a member of Oracle Cloud LIFT, you’ll help guide our customers from concept to successful cloud deployment. You’ll: Shape architecture and solution design with best practices and experience. Own the delivery of agreed workload implementations. Validate and test deployed solutions. Conduct security assurance reviews. You’ll work in a fast-paced, international environment, engaging with customers across industries and regions. You’ll collaborate with peers, sales, architects, and consulting teams to make cloud transformation real. https://www.oracle.com/in/cloud/cloud-lift/ Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 6 days ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

🧠 Job Title: Senior Machine Learning Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 4–8 years
Education: B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields

🚀 About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI, and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+, analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast. As a Senior Machine Learning Engineer, you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights.

🎯 Role Overview
This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value.

🧪 Key Responsibilities
🔬 1. Model Design, Training, and Optimization
Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources.
Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems.
Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models.

📊 2. Data Engineering & Feature Extraction
Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks.
Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks.
Automate data collection from APIs, CRMs, sales transcripts, and call logs.

⚙️ 3. Productionizing ML Pipelines
Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks).
Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines.
Ensure production readiness: logging, monitoring, rollback, and fail-safes.

📈 4. Experimentation & Evaluation
Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops.
Continuously optimize model performance (latency, accuracy, precision-recall trade-offs).
Implement drift detection and re-training pipelines for models in production.

🔁 5. Collaboration with Product & Engineering
Translate business problems into ML problems and align modeling goals with user outcomes.
Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features.
Contribute to the product roadmap with ML-driven ideas and prototypes.

🛠️ 6. Innovation & Technical Leadership
Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques.
Drive innovation in voice-to-insight systems (ASR + Diarization + NLP).
Mentor junior engineers and contribute to best practices in ML development and deployment.

🧰 Tech Stack
🔧 Languages & Frameworks
Python (core), SQL, Bash
PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM

🧠 ML & AI Ecosystem
Transformers, RNNs, CNNs, CRFs
BERT, RoBERTa, GPT-style models
OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude
FAISS, Pinecone, Qdrant, LlamaIndex

☁️ Deployment & Infrastructure
Docker, Kubernetes, GitHub Actions, Jenkins
AWS (EC2, Lambda, S3, SageMaker), GCP, Azure
Redis, PostgreSQL, MongoDB

📊 Monitoring & Experimentation
MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana

👨‍💼 Qualifications
🎓 Education
Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines.
Certifications in advanced ML, data science, or AI are a plus.

🧑‍💻 Experience
4–8 years of hands-on experience in applied machine learning.
Demonstrated success in deploying models to production at scale.
Deep familiarity with transformer-based architectures and model evaluation.

✅ You’ll Excel In This Role If You…
Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration.
Obsess over clean, maintainable, reusable code and pipelines.
Think from first principles and challenge model assumptions when they don’t work.
Are deeply curious and have built multiple projects just because you wanted to know how something works.
Are comfortable working with ambiguity, fast timelines, and real-time data challenges.
Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos.

💼 What You’ll Get at Darwix AI
Work with some of the brightest minds in AI, product, and design.
Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases.
Direct mentorship from senior architects and AI scientists.
Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory.
Opportunity to shape the future of a global-first AI startup built from India.
Hands-on experience with the most advanced tech stack in applied ML and production AI.
Front-row seat to a generational company that is redefining enterprise AI.

📩 How to Apply
Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?”
Email: people@darwix.ai
Subject: Senior ML Engineer – Application

🔐 Final Notes
We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you’re looking for a role where you own real impact, push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you.
Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
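Since MLflow appears in the monitoring and experimentation stack above, here is a minimal sketch of experiment tracking, assuming a local MLflow installation; the experiment name, run name, and random-forest baseline are invented for illustration and are not Darwix AI's actual setup.

```python
# Minimal MLflow experiment-tracking sketch (illustrative names and data).
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("conversation-scoring-demo")
with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                              # hyperparameters
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)                     # evaluation metric
    mlflow.sklearn.log_model(model, "model")               # versioned model artifact
```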

Posted 6 days ago

Apply

1.0 years

0 Lacs

Greater Nashik Area

On-site

Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. 
Key tasks & accountabilities
Large Language Models (LLM): Experience with LangChain, LangGraph. Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler.
Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video). Designing and optimizing chunking strategies and clustering for large data processing.
Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines. Low-latency inference and deployment architectures.
NL2SQL: Natural language-driven SQL generation for databases. Experience with natural language interfaces to databases and query optimization.
API Development: Building scalable APIs with FastAPI for AI model serving.
Containerization & Orchestration: Proficient with Docker for containerized AI services. Experience with orchestration tools for deploying and managing services.
Data Processing & Pipelines: Experience with chunking strategies for efficient document processing. Building data pipelines to handle large-scale data for AI model training and inference.
AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch. Proficiency in LangChain, LangGraph, and other LLM-related technologies.
Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting. Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy. Strong understanding of context window management and optimizing prompts for performance and efficiency.

Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following): Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Previous Work Experience Required
Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.

Technical Skills Required
Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc.
Proficiency in implementing and optimizing machine learning models for natural language processing.
Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
Strong programming skills in languages such as Python and proficiency in relevant frameworks.
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
And above all of this, an undying love for beer! We dream big to create a future with more cheer.
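To make the "chunking strategies" items above concrete, here is a small, hedged sketch of fixed-size chunking with overlap, a common starting point for RAG document processing; the chunk size, overlap, and sample text are arbitrary illustrations, not AB InBev's actual pipeline.

```python
# Simple fixed-size chunking with overlap. Production systems often chunk by tokens,
# sentences, or document layout instead; the sizes here are illustrative only.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap          # step back so adjacent chunks share context
    return chunks

if __name__ == "__main__":
    sample = "Anheuser-Busch InBev brews beer. " * 100   # stand-in document
    pieces = chunk_text(sample, chunk_size=200, overlap=40)
    print(len(pieces), "chunks; first chunk:", pieces[0][:60], "...")
```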

Posted 6 days ago

Apply

5.0 years

30 - 32 Lacs

Greater Hyderabad Area

On-site

Experience: 5+ years Salary: INR 3000000-3200000 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Office (Hyderabad) Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by InfraCloud Technologies Pvt Ltd) (*Note: This is a requirement for one of Uplers' clients - IF) What do you need for this opportunity? Must-have skills required: Banking, Fintech, Product Engineering background, Python, FastAPI, Django, Machine learning (ML) IF is Looking for: Product Engineer Location: Narsingi, Hyderabad 5 days of work from the Office Client is a payment gateway processing company Interview Process: Screening round with InfraCloud, followed by a second round with our Director of Engineering. We share the profile with the client, and they take one or two interviews About The Project We are building a high-performance machine learning engineering platform that powers scalable, data-driven solutions for enterprise environments. Your expertise in Python, performance optimization, and ML tooling will play a key role in shaping intelligent systems for data science and analytics use cases. Experience with MLOps, SaaS products, or big data environments will be a strong plus. Role And Responsibilities Design, build, and optimize components of the ML engineering pipeline for scalability and performance. Work closely with data scientists and platform engineers to enable seamless deployment and monitoring of ML models. Implement robust workflows using modern ML tooling such as Feast, Kubeflow, and MLflow. Collaborate with cross-functional teams to design and scale end-to-end ML services across a cloud-native infrastructure. Leverage frameworks like NumPy, Pandas, and distributed compute environments to manage large-scale data transformations. Continuously improve model deployment pipelines for reliability, monitoring, and automation. Requirements 5+ years of hands-on experience in Python programming with a strong focus on performance tuning and optimization. Solid knowledge of ML engineering principles and deployment best practices. Experience with Feast, Kubeflow, MLflow, or similar tools. Deep understanding of NumPy, Pandas, and data processing workflows. Exposure to big data environments and a good grasp of data science model workflows. Strong analytical and problem-solving skills with attention to detail. Comfortable working in fast-paced, agile environments with frequent cross-functional collaboration. Excellent communication and collaboration skills. Nice to Have Experience deploying ML workloads in public cloud environments (AWS, GCP, or Azure). Familiarity with containerization technologies like Docker and orchestration using Kubernetes. Exposure to CI/CD pipelines, serverless frameworks, and modern cloud-native stacks. Understanding of data protection, governance, or security aspects in ML pipelines. Experience Required: 5+ years How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement.
(Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 6 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

The Opportunity
Are you an innovative technical expert, looking for an opportunity to use your technology skills and strengths to impact a product-centric data and analytics organization within an enterprise? Are you looking for an opportunity where you will be able to help various products and services teams build their solutions in a modular fashion, which enables an end-to-end customer journey in an integrated fashion? Are you looking to go beyond just platforms and technologies and build true value-add, self-serve data and analytics products? If so, come join us and help us build one of the industry's most complete and advanced data and analytics ecosystems.

As an MLOps Analyst, you will be pivotal in shaping the future of our AI/ML technology landscape. We are looking for a skilled professional to join our team, responsible for bridging the gap between machine learning model development and operational deployment. In this role, you will focus on the implementation, monitoring, and optimization of end-to-end machine learning workflows, ensuring that AI/ML solutions are effectively integrated into our business processes and deliver measurable value. You will work closely with technical product managers, architects, and other stakeholders to design and implement enterprise-level AI/ML solutions at scale. You should be proactive in learning and experimenting with emerging AI/ML technologies, demonstrating a commitment to continuous improvement and innovation.

What You'll Do
Design and implement automated workflows for model training, validation, and deployment, utilizing CI/CD practices tailored for AI/ML.
Collaborate with data scientists and engineers to deploy machine learning models into production environments, ensuring scalability and reliability.
Establish monitoring systems to track model performance and drift, and implement strategies for model retraining and updates as necessary.
Maintain comprehensive documentation of MLOps processes, deployment procedures, and operational guidelines.
Assist in establishing efficient and secure enterprise standards for the pre-training, fine-tuning, and deployment of custom generative AI models.
Evaluate new capabilities within AI/ML products, develop customized functionalities using programming languages such as R and Python, utilize AWS services, and implement CI/CD model deployment strategies and solutions.
Partner with product managers and architects to craft technical solutions that meet business requirements and platform integrations, and create technical documentation, training materials, and knowledge-sharing resources related to AI/ML products.
Perform comprehensive testing of AI/ML solutions and quality assurance to ensure reliability, robustness, and accuracy.
Offer technical expertise and assistance to internal stakeholders and implementation teams.
Engage effectively with a global, multicultural team both in person and virtually, across various products, to identify and leverage synergies.

Required Skills
Practical experience with MLOps tools and frameworks (e.g., MLflow, Kubernetes), with a track record of implementing effective MLOps strategies.
Understanding of CI/CD pipelines and DevOps practices as they relate to AI/ML.
In-depth knowledge of AI/ML platforms like Databricks, Dataiku, and Posit.
Solid understanding of fundamental machine learning principles.
Extensive expertise in the AWS cloud ecosystem, including familiarity with storage, database, streaming, and messaging services, as well as AI/ML and cognitive services.
Proficiency in the R and Python programming languages, with a strong foundation in data engineering and data science principles.
Bachelor's degree in Computer Science, Information Technology, or a related field, along with 5+ years of experience in data science, with a strong focus on MLOps and successful deployment of machine learning models in production environments.

Skills Needed
Hands-on experience with version control systems like Git and containerization tools such as Docker.
Experience in Agile environments and familiarity with the Scrum framework.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Design, Data Engineering, Data Modeling, Data Science, Data Visualization, Machine Learning, Software Development, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R345614
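As one illustration of the "track model performance and drift" responsibility above, here is a hedged sketch of a Population Stability Index (PSI) check between training-time and live feature distributions; the thresholds are conventional rules of thumb and the data is simulated, so this is not a Merck standard or pipeline.

```python
# Population Stability Index (PSI): a simple, common drift signal for a numeric feature.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts = np.unique(cuts)                                   # guard against duplicate edges
    actual_clipped = np.clip(actual, cuts[0], cuts[-1])      # keep live values inside the bins
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual_clipped, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                       # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)     # distribution seen at training time
live_scores = rng.normal(0.3, 1.1, 10_000)      # slightly shifted production traffic

value = psi(train_scores, live_scores)
status = "stable" if value < 0.1 else "moderate drift" if value < 0.25 else "significant drift"
print(f"PSI = {value:.3f} -> {status}")
```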

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and self-driven Site Reliability Engineer to join our dynamic team. This role is ideal for someone with a strong foundation in Kubernetes, DevOps, and observability who can also support machine learning infrastructure, GPU optimization, and Big Data ecosystems. You will play a pivotal role in ensuring the reliability, scalability, and performance of our production systems, while also enabling innovation across ML and data teams.

Key Responsibilities
Automation & Reliability:
Design, build, and maintain Kubernetes clusters across hybrid or cloud environments (e.g., EKS, GKE, AKS).
Implement and optimize CI/CD pipelines using tools like Jenkins, ArgoCD, and GitHub Actions.
Develop and maintain Infrastructure as Code (IaC) using Ansible, Terraform, or similar tools.

Monitoring & Observability:
Deploy and maintain monitoring, logging, and tracing tools (e.g., Thanos, Prometheus, Grafana, Loki, Jaeger).
Establish proactive alerting and observability practices to identify and address issues before they impact users.

ML Ops & GPU Optimization:
Support and scale ML workflows using tools like Kubeflow, MLflow, and TensorFlow Serving.
Work with data scientists to ensure efficient use of GPU resources, optimizing training and inference workloads.

Troubleshooting & Incident Management:
Lead root cause analysis for infrastructure and application-level incidents.
Participate in the on-call rotation and improve incident response processes.

Scripting & Automation:
Automate operational tasks and service deployment using Python, Shell, Groovy, or Ansible.
Write reusable scripts and tools to improve team productivity and reduce manual work.

Continuous Learning & Collaboration:
Stay up-to-date with emerging technologies in SRE, ML Ops, and observability.
Collaborate with cross-functional teams including engineering, data science, and security to ensure system integrity and reliability.

Required Qualifications:
3+ years of experience as an SRE, DevOps Engineer, or equivalent role.
Strong experience with the Kubernetes ecosystem and container orchestration.
Proficiency in DevOps tooling including Jenkins, ArgoCD, and GitOps workflows.
Deep understanding of observability tools, with hands-on experience using the Thanos and Prometheus stack.
Experience with ML platforms (MLflow, Kubeflow) and supporting GPU workloads.
Strong scripting skills in Python, Shell, Ansible, or Groovy.

Preferred Qualifications:
CKS (Certified Kubernetes Security Specialist) certification.
Exposure to Big Data platforms (e.g., Spark, Kafka, Hadoop).
Experience with cloud-native environments (AWS, GCP, or Azure).
Background in infrastructure security and compliance.

(ref:hirist.tech)
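As a small, hedged illustration of the observability work described above, the sketch below exposes request metrics from a Python service using the prometheus_client library so that Prometheus (or Thanos) can scrape them; the metric names and simulated workload are invented for the example.

```python
# Minimal Prometheus instrumentation sketch: a counter and a latency histogram on :8000/metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("demo_request_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request() -> None:
    time.sleep(random.uniform(0.01, 0.1))          # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)                         # serves /metrics for scraping
    while True:
        handle_request()
```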

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Job We're looking for a highly skilled and self-driven Site Reliability Engineer (SRE-2) to join our team in Hyderabad. This is a full-time, work-from-office role (5 days a week) perfect for someone with 8-12 years of experience who thrives on challenges and is passionate about building robust, scalable, and highly available systems. You'll play a crucial role in ensuring the reliability, performance, and efficiency of our critical infrastructure and applications, with a particular focus on Kubernetes, DevOps, and observability. If you have hands-on experience with ML applications, GPU optimization, and Big Data systems, you'll be an ideal fit. Key Responsibilities As a Site Reliability Engineer (SRE-2), you will : Design, deploy, and manage highly available and scalable Kubernetes clusters and robust DevOps pipelines. Troubleshoot and resolve complex infrastructure and application issues across various environments. Implement, maintain, and enhance comprehensive observability solutions, with a strong emphasis on Thanos and related monitoring and alerting tools. Provide expert support for machine learning (ML) workflows, leveraging tools like MLflow and Kubeflow. Optimize applications to maximize performance in GPU-accelerated environments. Contribute individually to projects and proactively learn and adopt new technologies to stay ahead of industry trends. Automate repetitive tasks and streamline operational processes using a diverse set of scripting and automation tools including Python, Ansible, Groovy, and Shell scripting. Qualifications To be successful in this role, you should have : Strong, hands-on experience with Kubernetes and a deep understanding of core DevOps principles and tools. Proven expertise in observability and monitoring solutions, with a strong preference for experience with Thanos. Demonstrable experience working with ML platforms and optimizing applications for GPU-based environments. CKS (Certified Kubernetes Security Specialist) certification is preferred. Experience with Big Data systems is a significant plus. Proficiency in multiple scripting and automation languages : Python, Ansible, Groovy, and Shell scripting. Hands-on experience with CI/CD tools such as Jenkins, Ansible, and ArgoCD. (ref:hirist.tech)

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Site Reliability Engineer
Experience: 2 - 5 Years
Location: Hyderabad
Work Mode: Work From Office (5 Days a Week)

Overview
We are seeking a proactive and technically skilled Site Reliability Engineer with a strong background in Kubernetes and DevOps practices. This role requires a self-starter who is enthusiastic about automation, observability, and enhancing infrastructure reliability.

Key Responsibilities
Manage, monitor, and troubleshoot Kubernetes environments in production.
Design, implement, and maintain CI/CD pipelines using tools like Jenkins, ArgoCD, and Ansible.
Implement and maintain observability solutions (metrics, logs, traces).
Automate infrastructure and operational tasks using scripting languages such as Python, Shell, Groovy, or Ansible.
Support and optimize ML workflows, including platforms like MLflow and Kubeflow.
Collaborate with cross-functional teams to ensure infrastructure scalability, availability, and performance.

Qualifications
Strong hands-on experience with Kubernetes and container orchestration.
Solid understanding of DevOps tools and practices.
Experience with observability platforms.
Familiarity with MLflow and Kubeflow is a strong plus.
CKS (Certified Kubernetes Security Specialist) certification is preferred.
Exposure to Big Data environments is an added advantage.
Proficient in scripting with Python, Shell, Groovy, or Ansible.
Hands-on experience with tools like Jenkins, Ansible, and ArgoCD.

(ref:hirist.tech)

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You are an experienced Senior DevOps/MLOps Engineer who will be responsible for leading and managing a high-performing engineering team. Your main focus will be on overseeing the deployment and scaling of machine learning models and backend services using modern DevOps and MLOps practices. Proficiency in FastAPI, Docker, Kubernetes, and CI/CD is essential.

Your key responsibilities will include guiding and managing a team of DevOps/MLOps engineers; optimizing, containerizing, and deploying FastAPI applications at scale; managing infrastructure using tools like Terraform or Helm; handling multi-environment Kubernetes clusters (GKE, EKS, AKS, or on-prem); managing the ML model lifecycle, including versioning, deployment, monitoring, and rollback; designing and maintaining robust CI/CD pipelines for model and application deployment; setting up observability tools such as Prometheus, Grafana, and ELK; and ensuring secure infrastructure and data pipelines.

Your required skills include a deep understanding of building, scaling, and securing APIs with FastAPI; expert-level experience with Docker and Kubernetes for containerization and orchestration; familiarity with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, ArgoCD, or similar; experience with cloud platforms such as AWS, GCP, or Azure; strong scripting and automation skills with Python; and knowledge of ML workflow tools like MLflow, DVC, Kubeflow, or Seldon.

Preferred qualifications for this role include experience in managing hybrid cloud/on-premise deployments, strong communication and mentoring skills, and an understanding of data pipelines, feature stores, and model drift monitoring.

This is a full-time, permanent position with an in-person work location.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a part of ZS, you will have the opportunity to work in a place driven by passion that aims to change lives. ZS is a management consulting and technology firm that is dedicated to enhancing life and its quality. The core strength of ZS lies in its people, who work collectively to develop transformative solutions for patients, caregivers, and consumers worldwide. By adopting a client-first approach, ZS employees bring impactful results to every engagement by partnering closely with clients to design custom solutions and technological products that drive value and yield positive outcomes in key areas of their business. Your role at ZS will require you to bring inquisitiveness for learning, innovative ideas, courage, and dedication to make a life-changing impact. At ZS, the individuals are highly valued, recognizing both the visible and invisible facets of their identities, personal experiences, and belief systems. These elements shape the uniqueness of each individual and contribute to the diverse tapestry within ZS. ZS acknowledges and celebrates personal interests, identities, and the thirst for knowledge as integral components of success within the organization. Learn more about the diversity, equity, and inclusion initiatives at ZS, along with the networks that support ZS employees in fostering community spaces, accessing necessary resources for growth, and amplifying the messages they are passionate about. As an Architecture & Engineering Specialist specializing in ML Engineering at ZS's India Capability & Expertise Center (CEC), you will be part of a team that constitutes over 60% of ZS employees across three offices in New Delhi, Pune, and Bengaluru. The CEC plays a pivotal role in collaborating with colleagues from North America, Europe, and East Asia to deliver practical solutions to clients that drive the company's operations. Upholding standards of analytical, operational, and technological excellence, the CEC leverages collective knowledge to enable ZS teams to achieve superior outcomes for clients. Joining ZS's Scaled AI practice within the Architecture & Engineering Expertise Center will immerse you in a dynamic ecosystem focused on generating continuous business value for clients through innovative machine learning, deep learning, and engineering capabilities. In this role, you will collaborate with data scientists to craft cutting-edge AI models, develop and utilize advanced ML platforms, establish and implement sophisticated ML pipelines, and oversee the entire ML lifecycle. 
**Responsibilities:**
- Design and implement technical features using best practices for the relevant technology stack
- Collaborate with client-facing teams to grasp the solution context and contribute to technical requirement gathering and analysis
- Work alongside technical architects to validate design and implementation strategies
- Write production-ready code that is easily testable, comprehensible to other developers, and addresses edge cases and errors
- Ensure top-notch quality deliverables by adhering to architecture/design guidelines and coding best practices, and by engaging in periodic design/code reviews
- Develop unit tests and higher-level tests to handle expected edge cases, errors, and optimal scenarios
- Utilize bug tracking, code review, version control, and other tools for organizing and delivering work
- Participate in scrum calls and agile ceremonies, and effectively communicate progress, issues, and dependencies
- Contribute consistently by researching and evaluating the latest technologies, conducting proofs-of-concept, and creating prototype solutions
- Aid the project architect in designing modules/components of the overall project/product architecture
- Break down large features into estimable tasks, lead estimation, and defend estimates with clients
- Independently implement complex features with minimal guidance, such as service- or application-wide changes
- Systematically troubleshoot code issues/bugs using stack traces, logs, monitoring tools, and other resources
- Conduct code/script reviews of senior engineers within the team
- Mentor and cultivate technical talent within the team

**Requirements:**
- Minimum 5+ years of hands-on experience in deploying and productionizing ML models at scale
- Proficiency in scaling GenAI or similar applications to accommodate high user traffic and large datasets while reducing response time
- Strong expertise in developing RAG-based pipelines using frameworks like LangChain and LlamaIndex
- Experience in crafting GenAI applications such as answering engines, extraction components, and content authoring
- Expertise in designing, configuring, and utilizing ML engineering platforms like SageMaker, MLflow, Kubeflow, or other relevant platforms
- Familiarity with big data technologies including Hive, Spark, and Hadoop, and queuing systems like Apache Kafka, RabbitMQ, or AWS Kinesis
- Ability to quickly adapt to new technologies, innovate in solution creation, and independently conduct POCs on emerging technologies
- Proficiency in at least one programming language such as PySpark, Python, Java, or Scala, and solid foundations in data structures
- Hands-on experience in building metadata-driven, reusable design patterns for data pipelines, orchestration, and ingestion (batch and real-time)
- Experience in designing and implementing solutions on distributed computing and cloud services platforms (e.g., AWS, Azure, GCP)
- Hands-on experience in constructing CI/CD pipelines and awareness of application monitoring practices

**Additional Skills:**
- AWS/Azure Solutions Architect certification with a comprehensive understanding of the broader AWS/Azure stack
- Knowledge of DevOps CI/CD and data security, and experience in designing on cloud platforms
- Willingness to travel to global offices as required to collaborate with clients or internal project teams

**Perks & Benefits:** ZS provides a holistic total rewards package encompassing health and well-being, financial planning, annual leave, personal growth, and professional development.
The organization offers robust skills development programs, various career progression options, internal mobility paths, and a collaborative culture that empowers individuals to thrive both independently and as part of a global team. ZS is committed to fostering a flexible and connected work environment that enables employees to combine work from home with on-site presence at clients/ZS offices for the majority of the week. This approach allows for the seamless integration of the ZS culture and innovative practices through planned and spontaneous face-to-face interactions.

**Travel:** Travel is an essential aspect of working at ZS, especially for client-facing roles. Business needs dictate the priority for travel, and while some projects may be local, all client-facing employees should be prepared to travel as required. Travel opportunities provide avenues to strengthen client relationships, gain diverse experiences, and enhance professional growth through exposure to different environments and cultures.

**Application Process:** Candidates must either possess or be able to obtain work authorization for their intended country of employment. To be considered, applicants must submit an online application, including a complete set of transcripts (official or unofficial).

*Note: NO AGENCY CALLS, PLEASE.*

For more information, visit [ZS Website](www.zs.com).
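For readers new to the experiment-tracking platforms named in the requirements above (MLflow, SageMaker, Kubeflow), here is a minimal MLflow sketch of logging parameters, metrics, and a model during training; the dataset, experiment name, and values are illustrative.

```python
# Illustrative MLflow experiment tracking: log params, metrics, and a model artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=42
)

mlflow.set_experiment("demo-experiment")
with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 500}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)                                         # hyperparameters
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))   # evaluation metric
    mlflow.sklearn.log_model(model, "model")                          # model artifact for later serving
```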

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

OneOrigin is an innovative company based in Scottsdale, AZ and Bangalore, India, dedicated to utilizing the power of AI to revolutionize the operations of higher education institutions. Its cutting-edge software solutions aim to streamline administrative processes, enhance student engagement, and facilitate data-driven decision-making for colleges and universities.

As an AI Developer at OneOrigin in Bangalore, you will be responsible for creating and implementing advanced AI-driven solutions to empower intelligent products and optimize business operations across various domains. Working closely with data scientists, software engineers, product teams, and stakeholders from different departments, you will design scalable AI systems using cutting-edge machine learning techniques to address complex challenges.

Your responsibilities will include designing, developing, and deploying sophisticated AI/ML models for real-world applications, leveraging AI frameworks like TensorFlow, PyTorch, Keras, and Hugging Face Transformers. You will also explore Generative AI applications, manipulate large datasets using tools such as Pandas, NumPy, SQL, and Apache Spark, and deploy scalable AI services using Docker, FastAPI/Flask, and Kubernetes. Collaborating with cross-functional teams to integrate AI solutions into products, driving experimentation, implementing automation opportunities, and staying updated on AI advancements will be key aspects of your role.

You are required to have a Bachelor's degree in Computer Science, Engineering, or a related field, along with 5-8 years of hands-on experience in AI/ML development. Proficiency in Python, experience with deep learning, NLP, and computer vision projects, and experience with cloud platforms (AWS, GCP, Azure) are essential. Strong problem-solving skills, communication abilities, and expertise in API development will be crucial for successfully tackling complex challenges and collaborating effectively across diverse teams and departments. Join OneOrigin in its mission to make AI accessible and impactful for higher education institutions!
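As a small illustration of the Hugging Face Transformers tooling mentioned above, here is a hedged sketch of a text-classification pipeline; the default model and the sample sentences are illustrative, not specified by the posting.

```python
# Illustrative use of a Hugging Face Transformers pipeline for text classification.
from transformers import pipeline

# Downloads a default sentiment model on first use; swap in a task-specific
# checkpoint for production workloads.
classifier = pipeline("sentiment-analysis")

results = classifier([
    "Students love the new enrollment portal.",
    "The transcript upload keeps failing.",
])
for result in results:
    print(result["label"], round(result["score"], 3))
```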

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Haryana

On-site

As an AI Developer specializing in Large Language Models (LLMs) at RMgX, a Gurgaon-based digital product innovation and consulting firm, your role involves designing, developing, and deploying AI solutions using LLMs such as GPT, LLaMA, Claude, or Mistral. You will fine-tune and customize pre-trained LLMs for business-specific use cases; build and maintain NLP pipelines for classification, summarization, semantic search, and similar tasks; and create and manage vector database pipelines using tools like Milvus and Pinecone. Collaborating with cross-functional teams to integrate LLM-based features into applications and analyzing model performance to enhance outcomes will also be key responsibilities.

To excel in this role, you should have at least 12 years of experience in AI/ML development with a specific focus on NLP and LLM-based solutions. Proficiency in Python and AI/ML libraries such as Hugging Face Transformers, LangChain, PyTorch, and TensorFlow is essential. Practical experience working with closed-source models via APIs (e.g., OpenAI, Gemini), an understanding of prompt engineering, embeddings, and vector databases like FAISS, Milvus, or Pinecone, and experience deploying models using REST APIs, Docker, and cloud platforms (AWS/GCP/Azure) are also required. Familiarity with MLOps and version control tools (Git, MLflow, etc.), as well as knowledge of LLMOps platforms like LangSmith and Weights & Biases, will be advantageous. A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field is preferred. Strong problem-solving skills, attention to detail, and the ability to work in an agile environment are crucial for success in this role.

In return for your expertise and dedication, RMgX offers various perks and benefits, including flexible working hours, weekends off, health insurance, personal accident insurance, a BYOD (Bring Your Own Device) benefit, and a laptop buyback scheme. Join us at RMgX to contribute to the creation of elegant, data-driven digital solutions for complex business problems and make a meaningful impact through your AI development skills.
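To ground the embeddings and vector-database skills listed above, here is a minimal semantic-search sketch using sentence-transformers and FAISS; the encoder checkpoint, documents, and query are illustrative assumptions.

```python
# Illustrative embedding + vector search: encode documents, index with FAISS,
# and retrieve the nearest neighbours for a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Reset a user's password from the admin console.",
    "Export monthly sales figures to CSV.",
    "Configure single sign-on for the mobile app.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # small general-purpose encoder
doc_vecs = model.encode(docs, normalize_embeddings=True)   # unit vectors -> cosine via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])               # exact inner-product index
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(["How do I turn on SSO?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)
print([docs[i] for i in ids[0]], scores[0])
```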

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You are an experienced AI Application Architect responsible for leading the design and integration of AI/ML capabilities into enterprise applications. Your main objective is to architect intelligent, scalable, and reliable AI solutions that align with both business goals and technical strategy. You will work closely with data scientists, ML engineers, and application developers to ensure seamless end-to-end solutions. You will also select and implement appropriate AI frameworks, APIs, LLMs, and infrastructure tools, and drive architecture decisions related to GenAI, NLP, CV, predictive analytics, and agentic AI systems. You will establish MLOps pipelines for training, testing, and deploying models at scale while ensuring compliance with AI ethics, privacy laws, and data governance policies. Furthermore, you will evaluate emerging technologies, tools, and platforms for enterprise use and act as a technical advisor to leadership on AI opportunities and risks.

In terms of required skills, you should have a strong background in AI/ML architecture and solution design, along with hands-on experience in ML frameworks such as TensorFlow, PyTorch, and scikit-learn. Proficiency with LLMs and generative AI tools like OpenAI, Azure OpenAI, LangChain, and Hugging Face is necessary. A solid programming background in Python (FastAPI, Flask) and familiarity with Java/Node.js are essential. Experience with cloud platforms (AWS/GCP/Azure) and ML toolkits like SageMaker, Azure ML, and Vertex AI is also crucial, as is a good understanding of microservices, REST APIs, GraphQL, and event-driven architecture. Knowledge of vector databases such as Pinecone, FAISS, Chroma, or Weaviate, and proficiency with CI/CD, Docker, Kubernetes, MLflow, Airflow, or similar tools, are expected.

Preferred qualifications include experience with multi-agent systems, LangChain, Autogen, or agentic AI frameworks; familiarity with data governance, model drift detection, and performance monitoring; and prior experience in industries like BFSI, Retail, Healthcare, or Manufacturing. A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field is required. Certifications in AI/ML, cloud (AWS/GCP/Azure), or MLOps are considered a plus.
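Since model drift detection is called out among the preferred qualifications, here is a hedged sketch of one common approach: comparing a feature's training distribution against recent production data with a two-sample Kolmogorov-Smirnov test. The synthetic data and the alerting threshold are illustrative assumptions.

```python
# Illustrative data-drift check: compare a feature's reference (training)
# distribution against recent production values with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # feature values at training time
production = rng.normal(loc=0.3, scale=1.1, size=5_000)  # recent live values (shifted)

statistic, p_value = ks_2samp(reference, production)
DRIFT_P_THRESHOLD = 0.01  # illustrative cut-off; tune per feature and alerting policy

if p_value < DRIFT_P_THRESHOLD:
    print(f"Possible drift detected (KS={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```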

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

Job Summary: We are seeking a talented and driven Machine Learning Engineer to design, build, and deploy ML models that solve complex business problems and enhance decision-making capabilities. You will work closely with data scientists, engineers, and product teams to develop scalable machine learning pipelines, deploy models into production, and continuously improve their performance.

Key Responsibilities:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
- Collaborate with data scientists to prepare and preprocess large-scale datasets for training and evaluation.
- Implement and optimize machine learning pipelines and workflows using tools like MLflow, Airflow, or Kubeflow.
- Integrate models into production environments and ensure model performance, monitoring, and retraining.
- Conduct A/B testing and performance evaluations to validate model accuracy and business impact.
- Stay up-to-date with the latest advancements in ML/AI research and tools.
- Write clean, efficient, and well-documented code for reproducibility and scalability.

Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
- Strong knowledge of machine learning algorithms, data structures, and statistical methods.
- Proficiency in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost).
- Experience with data manipulation libraries (e.g., pandas, NumPy) and visualization tools (e.g., Matplotlib, Seaborn).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and model deployment tools.
- Experience with version control systems (Git) and software engineering best practices.

Preferred Qualifications:
- Experience in deep learning, natural language processing (NLP), or computer vision.
- Knowledge of big data technologies like Spark, Hadoop, or Hive.
- Exposure to containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.
- Familiarity with MLOps practices and tools.
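As a compact illustration of the model-building workflow described above, here is a scikit-learn sketch that trains and evaluates a classifier inside a preprocessing pipeline; the dataset and hyperparameters are illustrative.

```python
# Illustrative end-to-end scikit-learn workflow: split, preprocess, train, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),                          # preprocessing step
    ("clf", GradientBoostingClassifier(random_state=42)), # model step
])
pipe.fit(X_train, y_train)
print(classification_report(y_test, pipe.predict(X_test)))
```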

Posted 1 week ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

About the Role: Acrocede Technologies Pvt Ltd is looking for a highly experienced AI/ML Architect to be deployed to our client Techversant, a global leader in digital transformation and technology services. This is a strategic, client-facing leadership role where you’ll lead the design and delivery of end-to-end AI solutions across cutting-edge domains.

Work Mode: Remote or onsite, depending on candidate location

What You'll Do
- Work directly with clients to understand business problems and propose AI-driven solutions
- Architect and lead end-to-end AI/ML systems, from data ingestion to model deployment
- Guide AI/ML model selection, training, testing, and integration
- Build and mentor a high-performing AI team
- Drive delivery excellence using Agile methodologies and MLOps best practices

Key Skills Required
- Strong expertise in AI/ML/DL, including models like CNNs, RNNs, Transformers, and LLMs
- Programming: Python, PyTorch, TensorFlow, scikit-learn, Hugging Face
- AI architecture design: microservices, APIs, cloud-native stacks
- Experience with AWS, Azure, or GCP ML services
- Familiarity with vector databases, MLOps tools (MLflow, Kubeflow, etc.), and data engineering (Spark, Kafka, Databricks)
- Excellent stakeholder communication and leadership skills
- Cloud certifications (AWS/GCP/Azure AI) preferred

Good to Have
- Experience with Generative AI / LLMs / RAG / LangChain / Stable Diffusion
- Prompt engineering, fine-tuning, and RLHF knowledge
- Previous contributions to research, publications, open source, or AI certifications
- Understanding of explainable AI, ethical AI, and compliance

Posted 1 week ago

Apply

3.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Work from Office

About the Role: We are looking for a highly skilled AI/ML Developer to join the core product team of QAPilot.io. The ideal candidate should come from a product-based or AI-first company, with a strong academic background from institutes like IITs, NITs, IIITs, or other Tier-1 engineering colleges. You will work on real-world AI problems related to test automation, software quality, and predictive engineering.

Key Responsibilities:
- Design, build, and deploy machine learning models for intelligent QA automation
- Work on algorithms for test case optimization, bug prediction, pattern recognition, and data-driven QA insights
- Apply techniques from supervised/unsupervised learning, NLP, and deep learning
- Integrate ML models into the product using scalable and production-ready code
- Continuously improve model performance through experimentation and feedback loops
- Collaborate with full-stack developers, product managers, and QA experts
- Explore LLMs, transformers, and generative AI for advanced test data generation and analysis

Required Skills & Qualifications:
- B.Tech / M.Tech / MS in Computer Science, Data Science, or related fields from IIT/NIT/IIIT or other top-tier institutes
- 3+ years of experience as an AI/ML Developer, preferably in product or AI-centric companies
- Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch)
- Experience in NLP, LLMs, or generative AI (preferred)
- Hands-on with the ML lifecycle: data wrangling, model training, evaluation, and deployment
- Familiarity with MLOps tools like MLflow, Docker, Airflow, or cloud platforms (AWS/GCP)
- Prior exposure to software testing, DevOps, or developer tooling is a plus
- Strong analytical skills, attention to detail, and curiosity to solve open-ended problems
- Portfolio, GitHub, or project links demonstrating AI/ML expertise are desirable

Why Join QAPilot.io:
- Work on an innovative AI product transforming the software QA ecosystem
- Join a high-impact, product-oriented engineering culture
- Solve challenging AI problems with real user value
- Collaborate with top talent from the tech and AI ecosystem
- Competitive salary, learning-focused environment, and growth opportunities

To Apply: Please send your updated resume and any supporting links (GitHub, projects, publications).
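As a toy illustration of the bug-prediction and pattern-recognition work described above, here is a TF-IDF plus logistic-regression sketch for classifying issue reports; the sample texts and labels are made up for the example.

```python
# Illustrative bug-report classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "App crashes when uploading a large attachment",
    "Login fails with a 500 error after password reset",
    "Please add dark mode to the settings page",
    "Would be nice to export reports as PDF",
]
labels = ["bug", "bug", "feature", "feature"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

print(clf.predict(["Checkout page throws an exception on submit"]))  # expected: ['bug']
```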

Posted 1 week ago

Apply

3.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role Summary
1. Demonstrate solid proficiency in Python development, writing clean, maintainable code.
2. Collaborate in the design and implementation of AI-driven applications leveraging large language models (LLMs).
3. Develop and maintain Django-based RESTful APIs to support backend services.
4. Integrate with LLM provider APIs (e.g., GPT, Claude, Cohere) and agent frameworks (LangChain, AgentStudio).
5. Build and optimize data pipelines for model training and inference using Pandas, NumPy, and Scikit-learn.
6. Ensure robust unit and integration testing via pytest to maintain high code quality.
7. Participate in agile ceremonies, contributing estimations, design discussions, and retrospectives.
8. Troubleshoot, debug, and optimize performance in multi-threaded and distributed environments.
9. Document code, APIs, and data workflows in accordance with software development best practices.
10. Continuously learn and apply new AI/ML tools, frameworks, and cloud services.

Key Responsibilities
1. Write, review, and optimize Python code for backend services and data science workflows.
2. Design and implement Django REST APIs, ensuring scalability and security.
3. Integrate LLMs into applications: handle prompt construction, API calls, and result processing.
4. Leverage agent frameworks (LangChain, AgentStudio) to orchestrate complex LLM workflows.
5. Develop and maintain pytest suites covering unit, integration, and end-to-end tests.
6. Build ETL pipelines to preprocess data for model training and feature engineering.
7. Work with relational databases (PostgreSQL) and vector stores (FAISS, Weaviate, Milvus).
8. Containerize applications using Docker and deploy on Kubernetes or serverless platforms (AWS, GCP, Azure).
9. Monitor and troubleshoot application performance, logging, and error handling.
10. Collaborate with data scientists to deploy and serve ML models via FastAPI or vLLM.
11. Maintain CI/CD pipelines for automated testing and deployment.
12. Engage in technical learning sessions and share best practices across the team.

Desired Skills & Qualifications
- 1–3 years of hands-on experience in Python application development.
- Proven pytest expertise, with a focus on test-driven development.
- Practical knowledge of Django (or FastAPI) for building RESTful services.
- Experience with LLM APIs (OpenAI, Anthropic, Cohere) and prompt engineering.
- Familiarity with at least one agent framework (LangChain, AgentStudio).
- Working experience with data science libraries: NumPy, Pandas, Scikit-learn.
- Exposure to ML model serving tools (MLflow, FastAPI, vLLM).
- Experience with container orchestration (Docker, Kubernetes, Docker Swarm).
- Basic understanding of cloud platforms (AWS, Azure, or GCP).
- Knowledge of SQL and database design; familiarity with vector databases.
- Eagerness to learn emerging AI/ML technologies and frameworks.
- Excellent problem-solving, debugging, and communication skills.

Education & Attitude
- Bachelor’s or Master’s in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Growth-mindset learner: proactive in upskilling and sharing knowledge.
- Strong collaboration ethos and adaptability in a fast-paced AI environment.
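To make the pytest and prompt-construction expectations above concrete, here is a small test-driven sketch of a prompt-building helper; the function name, template, and file name are hypothetical.

```python
# Illustrative prompt-construction helper and its pytest unit tests
# (save as test_prompts.py and run with `pytest test_prompts.py`).
import pytest

def build_summary_prompt(document: str, max_words: int = 100) -> str:
    """Assemble an LLM prompt asking for a bounded-length summary."""
    if not document.strip():
        raise ValueError("document must not be empty")
    return (
        f"Summarize the following text in at most {max_words} words.\n\n"
        f"Text:\n{document.strip()}"
    )

def test_prompt_contains_document_and_limit():
    prompt = build_summary_prompt("Quarterly revenue grew 12%.", max_words=50)
    assert "Quarterly revenue grew 12%." in prompt
    assert "at most 50 words" in prompt

def test_empty_document_rejected():
    with pytest.raises(ValueError):
        build_summary_prompt("   ")
```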

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Lead Software Engineer - Simulations (with ML Background) - Global Data Analytics, Technology (Maersk)

This position will be based in India – Bangalore/Pune.

A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too.

The Brief
We are seeking a Lead Software Engineer with deep experience in simulation systems and a solid foundation in machine learning to lead the architecture and development of our next-generation simulation and digital twin platforms. This role blends technical leadership with hands-on engineering, guiding the design of simulation systems that model real-world environments and power intelligent, ML-driven decision-making. You’ll be responsible for leading simulation efforts that generate actionable insights, synthetic data, and predictive capabilities, while also influencing how ML models are built, trained, and deployed within these environments.

What I'll be doing – your accountabilities/responsibilities
- Architect and lead simulation platforms that model complex systems across time, agents, and interactions.
- Bridge simulation systems with ML workflows: data generation, model training/testing, reinforcement learning, etc.
- Define system architecture, technology choices, and software patterns for scalable, high-performance simulation and ML integration.
- Collaborate with ML engineers, data scientists, and domain experts to enable closed-loop simulations for forecasting, optimization, and control.
- Lead cross-functional engineering efforts across simulation, data, and ML infrastructure teams.
- Mentor engineers across disciplines, setting technical standards and fostering a high-performance culture.
- Own end-to-end delivery of simulation-enabled intelligent systems, from ideation to production.
- Develop and optimize algorithms for discrete-event simulation and agent-based modeling.
- Support scenario testing, what-if analysis, and optimization workflows using simulation outputs.
- Ensure models are modular, extensible, and easily integrated with external services/platforms (e.g., dashboards, analytics, AI agents).

Foundational / Must-Have Skills
- Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.
- 12+ years of software engineering experience, with 3+ years leading teams or major initiatives.
- Demonstrated experience with simulation platforms (e.g., SimPy, Mesa, CARLA, Unity, AnyLogic, or custom-built engines).
- Strong programming skills in Python (required); additional languages like C++, Rust, or Julia are a plus.
- Deep understanding of ML principles: model pipelines, data engineering, evaluation, and deployment.
- Experience working with synthetic data, simulators for ML training/testing, or RL environments (e.g., OpenAI Gym, Isaac Sim, Unity ML-Agents).
- Solid grasp of software architecture, distributed systems, and CI/CD pipelines.

Preferred Skills
- Experience with reinforcement learning, decision-making under uncertainty, or control systems.
- Familiarity with ML frameworks (e.g., PyTorch, TensorFlow, scikit-learn) and MLOps tools (e.g., MLflow, Weights & Biases).
- Background in simulation-heavy domains such as robotics, logistics, or autonomous systems.
- Familiarity with cloud infrastructure (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
- Strong communication and cross-functional leadership skills, able to align engineers, ML teams, and product stakeholders.

As a performance-oriented company, we strive to always recruit the best person for the job – regardless of gender, age, nationality, sexual orientation or religious beliefs. We are proud of our diversity and see it as a genuine source of strength for building high-performing teams. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
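For a flavour of the discrete-event simulation work described above, here is a minimal SimPy sketch of ships queuing for a single berth; the arrival rates, service times, and capacity are illustrative, not Maersk figures.

```python
# Illustrative discrete-event simulation with SimPy: ships queue for a single berth.
import random

import simpy

def ship(env: simpy.Environment, name: str, berth: simpy.Resource):
    arrival = env.now
    with berth.request() as slot:
        yield slot                                    # wait for the berth to be free
        wait = env.now - arrival
        yield env.timeout(random.uniform(4, 8))       # unloading time in hours
        print(f"{name}: waited {wait:.1f}h, departed at t={env.now:.1f}h")

def arrivals(env: simpy.Environment, berth: simpy.Resource):
    for i in range(5):
        env.process(ship(env, f"ship-{i}", berth))
        yield env.timeout(random.expovariate(1 / 3))  # a new ship roughly every 3 hours

random.seed(7)
env = simpy.Environment()
env.process(arrivals(env, simpy.Resource(env, capacity=1)))
env.run()
```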

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Bengaluru
Job Type: full-time

Requirements
We are looking for an experienced Data Scientist with a strong background in the CPG (Consumer Packaged Goods) or Retail domain, focusing on category and product analytics, forecasting, and machine learning workflows. The ideal candidate will possess advanced analytical skills, business acumen, and hands-on expertise in modern data science tools and platforms such as Python, SQL, Databricks, PySpark, and CI/CD ML pipelines.

As a Data Scientist, you will be responsible for generating actionable insights across product assortment, category performance, sales trends, and customer behaviors. Your work will directly influence decision-making for new product launches, inventory optimization, campaign effectiveness, and category planning, enabling our teams to enhance operational efficiency and drive business growth.

Key Responsibilities
- Category & Product Analytics: Conduct deep-dive analysis into product assortment, SKU performance, pricing effectiveness, and category trends. Evaluate new product launches and provide recommendations for optimization based on early performance indicators.
- Sales Data Analysis & Forecasting: Analyze historical and real-time sales data to identify key growth drivers, seasonality, and demand patterns. Build statistical and ML-based models to forecast demand and category-level performance at multiple aggregation levels.
- Customer Analytics (Nice to Have): Analyze loyalty program data and campaign performance metrics to assess customer retention and the ROI of promotions.
- ML Model Development & Deployment: Design, build, and deploy machine learning models using Python and PySpark to address business problems in forecasting, product clustering, and sales optimization. Maintain and scale CI/CD pipelines for ML workflows using tools like MLflow, Azure ML, or similar.
- Data Engineering and Tooling: Develop and optimize data pipelines on Databricks and ensure reliable data ingestion and transformation for analytics use cases. Use SQL and PySpark to manipulate and analyze large datasets with performance and scalability in mind.
- Visualization & Stakeholder Communication: Build impactful dashboards using Power BI (preferred) to enable self-service analytics for cross-functional teams. Translate data insights into clear, compelling business narratives for leadership and non-technical stakeholders.
- Collaboration & Strategic Insights: Work closely with category managers, marketing, and supply chain teams to align data science initiatives with key business objectives. Proactively identify opportunities for innovation and efficiency across product and sales functions.

Required Skills & Qualifications
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related quantitative field.
- 5+ years of experience in applied data science, preferably in CPG/Retail/FMCG domains.
- Proficiency in Python, SQL, Databricks, and MLflow; experience with PySpark and Azure ML is a strong plus.
- Deep experience with time-series forecasting, product affinity modeling, and campaign analytics.
- Familiarity with Power BI for dashboarding and visualization.
- Strong storytelling skills, with the ability to explain complex data-driven insights to senior stakeholders.
- Solid understanding of challenges and opportunities within the retail and FMCG space.
- Ability to work independently as well as in cross-functional teams in a fast-paced environment.
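To illustrate the PySpark-on-Databricks analysis described above, here is a hedged sketch of aggregating sales by category and month; the schema, sample rows, and application name are hypothetical.

```python
# Illustrative PySpark aggregation: monthly units and revenue by product category.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("category-analytics").getOrCreate()

# Hypothetical transactional data with columns: category, sku, sale_date, units, revenue.
sales = spark.createDataFrame(
    [
        ("beverages", "SKU-1", "2024-01-15", 120, 360.0),
        ("beverages", "SKU-2", "2024-01-20", 80, 200.0),
        ("snacks", "SKU-3", "2024-02-02", 45, 135.0),
    ],
    ["category", "sku", "sale_date", "units", "revenue"],
)

monthly = (
    sales.withColumn("month", F.date_format(F.to_date("sale_date"), "yyyy-MM"))
         .groupBy("category", "month")
         .agg(F.sum("units").alias("total_units"), F.sum("revenue").alias("total_revenue"))
         .orderBy("category", "month")
)
monthly.show()
```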

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Greetings!

One of our clients, a top MNC, is looking for Gen AI and Machine Learning Engineers.

Important Notes:
- Please share only those profiles who can join immediately or within 7 days.
- Base Locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).

Role: Associate and Sr. Associate L1/L2 (Multiple Positions)

Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Experience with Agentic AI/frameworks.
- Strong programming skills in languages such as Python and SQL/NoSQL.
- Ability to build an analytical approach based on business requirements, then develop, train, and deploy machine learning models and AI algorithms.
- Exposure to Gen AI models such as OpenAI, Google Gemini, Runway ML, etc.
- Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as TensorFlow, PyTorch, scikit-learn, OpenCV, and/or Keras.
- Knowledge of math, probability, and statistics.
- Familiarity with a variety of machine learning, NLP, and deep learning algorithms.
- Exposure to developing APIs using Flask/Django.
- Good experience with cloud infrastructure such as AWS, Azure, or GCP.
- Exposure to Gen AI, vector DBs/embeddings, and LLMs (Large Language Models).

Good to Have:
- Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
- Experience with Docker, Kubernetes, etc.
- Exposure to HTML, CSS, JavaScript/jQuery, Node.js, Angular/React.
- Experience with Flask/Django is a bonus.

Responsibilities:
- Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, projects, and AI/ML, NLP/NLU, and deep learning solutions.
- Develop, implement, and deploy AI/ML solutions.
- Preprocess and analyze large datasets to identify patterns, trends, and insights.
- Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
- Deploy applications and AI/ML models into cloud environments such as AWS/Azure/GCP.
- Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
- Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.

Interested candidates who match the JD and can join ASAP should apply with the following details:
- Total experience:
- Relevant experience in AI/ML:
- Applying for Gurgaon or Bengaluru:
- Open for hybrid:
- Current CTC:
- Expected CTC:
- Can join ASAP:

We will call you once we receive your updated profile along with the above details.

Thanks,
Venkat Solti
solti.v@anlage.co.in
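Since the posting lists Flask/Django API exposure alongside model deployment, here is a minimal Flask sketch exposing a model behind a JSON endpoint; the route, payload shape, and placeholder scoring function are illustrative.

```python
# Illustrative Flask inference endpoint; run with `python app.py` and POST JSON
# like {"features": [1.0, 2.0, 3.0]} to /predict.
from flask import Flask, jsonify, request

app = Flask(__name__)

def score(features):
    # Placeholder for a real model loaded at startup (e.g. joblib.load("model.pkl")).
    return sum(features) / max(len(features), 1)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload.get("features", [])
    return jsonify({"score": score(features)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```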

Posted 1 week ago

Apply