Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview We are looking for an experienced Solution Architect AI/ML & Data Engineering to lead the design and delivery of advanced data and AI/ML solutions for our clients. Responsibilities The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI Responsibilities : Collaborate with clients to understand business requirements and design robust data solutions. Lead the development of end-to-end data pipelines including ingestion, storage, processing, and visualization. Architect scalable, secure, and compliant data systems following industry best practices. Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions. Participate in pre-sales efforts: solution design, proposal creation, and client presentations. Act as a technical liaison between clients and internal teams throughout the project lifecycle. Stay current with emerging technologies in AI/ML, data platforms, and cloud services. Foster long-term client relationships and identify opportunities for business expansion. Understand and architect across the full AI lifecyclefrom ingestion to inference and operations. Provide hands-on guidance for containerization and deployment using Kubernetes. Ensure proper implementation of data governance, modeling, and warehousing : Bachelors or masters degree in computer science, Data Science, or related field. 10+ years of experience as a Data Solution Architect or similar role. Deep technical expertise in data architecture, engineering, and AI/ML systems. Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric. Proven pre-sales experience: technical presentations, solutioning, and RFP support. Proficiency in cloud platforms (Azure preferred; also, AWS or GCP) and cloud-native data tools. Exposure to Generative AI frameworks and LLMs like OpenAI and Hugging Face. Experience in deploying and managing applications on Kubernetes (AKS, EKS, GKE). Familiarity with data governance, data modeling, and large-scale data warehousing. Excellent problem-solving, communication, and client-facing & Technology Architecture & Engineering: Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie. ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue. Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica. Streaming: Apache Kafka, Azure Event Hubs, AWS Platforms: Azure (preferred), AWS, GCP. Data Lakes: ADLS, AWS S3, Google Cloud Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE. AI/ML & GenAI Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray. Inference: TensorFlow Serving, KServe, Seldon. Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc. DevOps & Deployment Kubernetes: AKS, EKS, GKE, Open Source K8s, Helm. CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps. (ref:hirist.tech)
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You are an experienced MERN (MongoDB, Express.js, React, Electron.js, and Node.js) Stack Developer with a solid background in web development and expertise in AWS, Docker, and OpenAI. Your primary responsibility will involve designing and implementing innovative web applications while incorporating AI-powered features into the products. Your key responsibilities include developing and maintaining high-quality web applications using the MERN stack, collaborating with designers and fellow developers to create user-friendly interfaces, designing efficient database schemas in MongoDB, writing server-side logic in Node.js, Electron.js/Express.js, crafting responsive front-end components with React, and seamlessly integrating third-party APIs and libraries into applications. You will also focus on ensuring user data security and privacy, utilizing code versioning tools, integrating cloud message APIs and push notifications, collaborating with cross-functional teams for interface design, deploying applications on AWS cloud infrastructure for scalability and reliability, containerizing applications with Docker, writing clean and well-documented code, troubleshooting technical issues, staying updated with emerging technologies, and participating in code reviews to provide feedback to team members. To qualify for this role, you should hold a Bachelor's degree in computer science or a related field (or possess equivalent work experience), demonstrate proven experience as a MERN Stack Developer with a robust portfolio of developed web applications, exhibit expertise in MongoDB, Express.js, React, Electron.js, and Node.js, showcase proficiency in AWS services such as EC2, S3, Lambda, and CloudFormation, have experience in containerization and orchestration using Docker and Kubernetes, be familiar with natural language processing and AI technologies (especially OpenAI), possess a solid understanding of RESTful API design, demonstrate strong problem-solving and debugging skills, exhibit excellent teamwork and communication abilities, and showcase self-motivation and the ability to work independently.,
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
You are looking for a DevOps Technical Lead who will play a crucial role in leading the development of an Infrastructure Agent powered by Generative AI (GenAI) technology. In this role, you will be responsible for designing and implementing an intelligent Infra Agent that can handle provisioning, configuration, observability, and self-healing autonomously. Your key responsibilities will include leading the architecture and design of the Infra Agent, integrating various automation frameworks to enhance DevOps workflows, automating infrastructure provisioning and incident remediation, developing reusable components and frameworks using Infrastructure as Code (IaC) tools, and collaborating with AI/ML engineers and SREs to create intelligent infrastructure decision-making logic. You will also be expected to implement secure and scalable infrastructure on cloud platforms such as AWS, Azure, and GCP, continuously improve agent performance through feedback loops, telemetry, and model fine-tuning, drive DevSecOps best practices, compliance, and observability, as well as mentor DevOps engineers and work closely with cross-functional teams. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 8 years of experience in DevOps, SRE, or Infrastructure Engineering. You must have proven experience in leading infrastructure automation projects, expertise with cloud platforms like AWS, Azure, GCP, and deep knowledge of tools such as Terraform, Kubernetes, Helm, Docker, Jenkins, and GitOps. Hands-on experience with LLMs/GenAI APIs, familiarity with automation frameworks, and proficiency in programming/scripting languages like Python, Go, or Bash are also required. Preferred qualifications for this role include experience in building or fine-tuning LLM-based agents, contributions to open-source GenAI or DevOps projects, understanding of MLOps pipelines and AI infrastructure, and certifications in DevOps, cloud, or AI technologies.,
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You will be joining our team as a Senior AI Automation & Systems Lead, where you will have the opportunity to partner directly with the founder to build next-gen operating systems. Your role will involve developing innovative operating systems to enhance performance and efficiency across various departments. Your responsibilities will include architecting automation workflows using tools like n8n, Zapier, Make, and custom scripts. You will design and maintain scalable Airtable infrastructure for data ops and automation backends, build and deploy AI agents using platforms such as OpenAI and LangChain, orchestrate and govern agents via a Multi-Agent Control Panel (MCP), and integrate cross-platform APIs like Google Workspace, Slack, CRMs, ERPs, Notion, Ads, and more. As a Senior AI Automation & Systems Lead, you will identify, evaluate, and implement high-leverage automation across business and media operations, define and track automation KPIs, build internal dashboards, and lead optimization cycles. Additionally, you will have the opportunity to mentor junior developers and document systems for easy onboarding and handover. To qualify for this role, you should have a minimum of 4-8 years of hands-on experience in automation, backend systems, or AI operations. You must have a proven track record in deploying autonomous agents in real-world applications, deep knowledge of automation tools like n8n and Airtable, REST APIs, and automation logic. Understanding of API architecture, familiarity with MCPs, agent orchestration, and lifecycle governance is essential. Experience with platforms such as LangChain, Claude, Pinecone, Firestore, OpenRouter, or RAG systems will be advantageous. Strong communication skills are necessary to effectively translate technical concepts into business outcomes. The required skills for this role include a strong background in automation, backend systems, and AI operations. Preferred skills include familiarity with various automation tools and platforms, as well as experience in mentoring and documentation. We are an equal opportunity employer committed to diversity and inclusivity in our hiring practices. Joining our team will give you the opportunity to work directly with the Founder to reimagine an industry, build systems that impact factories, media studios, and global marketing pipelines, and work with a dynamic team in a fast-paced environment. If you are excited about creating systems that think faster than people and building tech that drives creativity and execution at scale, we would like to hear from you. To apply, please send us your updated professional profile, work samples showcasing automation flows, agent deployments, dashboards, etc., and a note on how you would go about automating 50% of an ad agency and packaging unit if you had to do it tomorrow. Location: Gurugram,
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Instead Instead is a tax platform designed to help taxpayers and tax professionals collaborate to save money on taxes. As the first company in decades to receive IRS approvals to E-file 1040, 1041, 1120, 1120S, and 1065 — we're re-inventing a complex category. Founded in 2023, Instead combines LLMs with tax law to make tax management a continuous, proactive process rather than a dreaded annual deadline. Instead's investors include Sarah Guo from Conviction (conviction.com), IRIS (irisglobal.com) the largest tax software provider in the UK and many of our partners and customers who believe in our mission and vision. The Instead team comprises talented leaders from leading tax, financial services and fintech companies — Gusto, Intuit, Zenefits, Thomson Reuters, Wolters Kluwer — as well as top tax & accounting firms such as PwC, BDO, RSM, and KPMG. Instead was a 2024 Innovation Award Finalist in CPA Practice Advisor. Instead's CEO, Andrew Argue, is a CPA and has been named Top 100 Most Influential People in the Accounting Profession twice - Ones To Watch and CPA Practice Advisors 20 under 40. About the role In this role, you will get the chance to build innovative AI-powered tax software alongside a dynamic team. You'll work on both our customer-facing product Instead (instead.com) and our backend internal tooling that empowers our teams to build cutting-edge features. This role is at the forefront of leveraging AI to drive innovation across our platform, working on exciting high-value features for our customers. If you're passionate about working on real production use cases utilizing LLMs and want to contribute to groundbreaking AI applications in a fast-paced environment, you'll thrive as you help us drive innovation in tax technology. Here is the tech you will get the change to utilize: Front End: Vue, Nuxt, TypeScript, Tailwind CSS Back End: Go, Docker on AWS ECS/Fargate, PostgreSQL on AWS RDS AI: Cursor with Sonnet 3.5, Langchain, Helicone, Gemini, OpenAI's API utilizing 4o (still exploring best use cases for o1 pro) and any other piece of AI tech that we can get our hands on What you'll do Ship full-stack product features end to end for our live customer platform Make key product improvements based on customer feedback and usage analytics Understand and solve critical product bugs across the full stack Build and integrate components for infrastructure, supporting production-level inference and advanced prompt engineering Create tools and internal platforms to enhance the productivity and capabilities of Instead's teams Additional projects as needed by the internal engineering team and US-based product teams What you'll need Proficiency in full-stack development, with a strong understanding of web frameworks, backend systems, and cloud infrastructure Experience building backend systems and infrastructure that can support live products 3+ years of software development experience High attention to detail Fast learner who enjoys working in a fast-paced, innovative environment Nice to have A track record of working on full-stack projects, end-to-end Experience with AI/ML frameworks and prompt engineering Experience programming using AI copilots such as Cursor, GitHub Copilot, ChatGPT, Claude, Windsurf, etc. Experience with the technologies in our current stack Experience building customer-facing products at scale Why join us Ability to work with cutting edge AI in all stages of the software development lifecycle Work on a cutting-edge tax tech platform that's transforming the industry Be part of a collaborative, mission-driven team Competitive compensation and benefits Growth opportunities in product development, compliance, and technology Opportunity to work with cutting-edge AI technology in production environments Equal Opportunity Employer - M/F/D/V As a global business, we rely on diversity of culture and thought to deliver on our goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. We trust our team with sensitive information, so all candidates who receive and accept employment offers must complete a background check before joining us.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As an AI Senior Developer in the Sales Cloud team at SAP, your primary responsibility will be to develop product features within the Sales portfolio, integrating AI-driven solutions, and helping the team deliver services on the cloud-native stack on a continuous basis. You will work with a highly motivated and talented set of colleagues, translating business requirements into scalable and performant technical solutions. Your role will involve designing, coding, testing, and assuring the quality of complex AI-powered product features within the Customer Experience & Sales portfolio. To excel in this role, you should bring at least 4-9 years of software engineering experience, with significant exposure to building products on the cloud-native stack. Strong knowledge of Computer Science fundamentals is essential, along with proficiency in Python/Java; Spring Boot experience is preferred. Familiarity with JavaScript, experience with Angular or ABAP, and hands-on experience with Git and CI/CD pipelines are desired qualifications. You should have a proven track record of writing production-grade code for enterprise-scale systems and experience with SQL databases; knowledge of NoSQL and event streaming (e.g., Kafka) is a bonus. Additionally, experience working with LLMs and generative AI frameworks (e.g., OpenAI, Hugging Face, etc.) is valuable. Demonstrated ability to build or integrate AI-driven features into enterprise applications is key, and familiarity with SAP software (e.g., SAP S/4HANA, SAP Business Technology Platform) is an asset. Knowledge of Agentic AI and frameworks is also beneficial. Strong collaboration and communication skills are crucial for success in this role, along with experience using tools like JIRA for tracking tasks and bugs. Joining the SAP CX team means positively impacting how businesses are run globally. SAP CX is a leader in enterprise cloud applications for cloud CRM and DXP solutions, empowering organizations to understand their customers better and engage with relevance and personalization. At SAP, we believe in a culture of inclusion, focus on health and well-being, and flexible working models to ensure that everyone, regardless of background, feels included and can perform at their best. We are committed to creating a better and more equitable world by unleashing all talent and investing in our employees" development. If you have what it takes to be part of a dynamic team that drives the company's product portfolio forward, we want to hear from you. SAP is proud to be an equal opportunity workplace and an affirmative action employer. We provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and require accommodation or special assistance, please contact the Recruiting Operations Team at Careers@sap.com.,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
ValGenesis is a leading digital validation platform provider for life sciences companies, with its suite of products being utilized by 30 of the top 50 global pharmaceutical and biotech companies. These products play a crucial role in achieving digital transformation, ensuring total compliance, and enabling manufacturing excellence and intelligence throughout the product lifecycle. As a part of the team at ValGenesis, you will collaborate with Business Analysts/Product Owners, Developers, SDETs, and others in the development of enterprise software applications. Your responsibilities will include understanding business requirements, participating in the complete development life cycle, designing and implementing test strategies, test plans, test cases, test automation, and execution. You will focus on validating the functionality, performance, and reliability of AI/ML models, evaluating various aspects such as accuracy, bias, robustness, and safety. Additionally, you will track model performance, identify issues, and provide regular reports on quality and reliability. Crafting and refining the prompts or instructions to guide the development of Generative AI models will also be a key aspect of your role. To excel in this position, you should have 2 to 6 years of experience in enterprise software product development lifecycle/phases, along with a solid blend of software testing expertise and a deep understanding of AI/ML concepts, tools, and frameworks. Proficiency in Generative AI model algorithms, training, deployment processes, evaluating model performance metrics, and assessing models for bias, fairness, and ethical concerns are essential requirements. Experience in Python programming and tools like OpenAI, TensorFlow, PyTorch, DialogFlow, and integrating testing processes into CI/CD pipelines will be advantageous. ValGenesis has been at the forefront of disrupting the life sciences industry since 2005, introducing the world's first digital validation lifecycle management system. The company continues to innovate and expand its offerings, driving an end-to-end digital transformation platform. Joining the ValGenesis team means being part of a customer-focused organization that values open communication, mutual support, and a commitment to excellence. The company fosters a culture of innovation, encourages personal growth, and maintains a relentless pursuit of market leadership. At ValGenesis, the Chennai, Hyderabad, and Bangalore offices operate onsite five days a week, emphasizing the importance of in-person interaction and collaboration for fostering creativity, building a sense of community, and ensuring future success as a company.,
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview We are looking for a high-impact Product Manager who thrives at the intersection of technology and pharma/life sciences . This role demands a sharp strategic thinker with hands-on technical depth , product ownership mindset , and a solid grasp of pharma domain knowledge — from primary market research (PMR) insights , competitive intelligence (CI) , to brand strategy . If you can translate brand/medical/commercial objectives into robust, scalable product solutions using AWS-native architectures , ML/GenAI models , and modern DevOps practices , you belong here. Key Responsibilities Product Leadership Own the end-to-end product lifecycle from discovery to launch across pharma/life sciences use cases. Translate unmet market and brand needs into differentiated, scalable, and user-centric product solutions. Prioritize features across platform modules by aligning commercial, medical, and data science needs. Partner with commercial, brand, and medical teams to translate PMR and CI into actionable product features. Technical & Platform Strategy Drive architectural discussions and product decisions around AWS cloud infrastructure , including Glue , Athena , Data Lake , S3 , Lambda , and Step Functions . Collaborate with engineering to ensure CI/CD pipelines , Docker , Kubernetes , and ML Ops practices are integrated for faster product iterations. Enable delivery of GenAI capabilities in the platform — from document intelligence, medical NLP, summarization to insight generation. Data & AI Productization Lead data strategy for ingesting, cleaning, and transforming EMR, Claims, HCP/HCO, and RWD data using PySpark , SQL , and data pipelines . Build roadmap around ML/GenAI-driven use cases: e.g., treatment pathway prediction, KOL segmentation, site recommendation, competitive tracking. Collaborate with data scientists to deploy models in production using APIs and cloud-native services. Market & Domain Expertise Leverage deep knowledge of pharma workflows (Medical Affairs, Market Access, Clinical Dev, Commercial Ops). Map out patient journeys, treatment landscapes, and brand objectives into platform features. Convert PMR data and CI signals into competitive positioning and product differentiation. Required Qualifications 6–8 years of experience in product management or technical product ownership. Strong experience in pharma or life sciences industry — ideally in commercial, medical, or clinical tech products. Proven hands-on experience with AWS cloud architecture , especially Glue, Athena, Data Lake, Step Functions. Proficient in Python , SQL , PySpark , and working knowledge of ML modeling & GenAI frameworks (LangChain, OpenAI, HuggingFace, etc.) . Strong grasp of DevOps pipelines (CI/CD, GitHub Actions/GitLab, Terraform, Docker, K8s) . Strong understanding of data engineering concepts — ingestion, normalization, feature engineering, and ML pipeline orchestration. Familiarity with primary market research methodologies , CI tools , and brand strategy in pharma. Preferred Skills Prior experience building SaaS or platform products in regulated industries. Knowledge of data privacy, HIPAA, and compliance frameworks.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join our digital revolution in NatWest Digital X In everything we do, we work to one aim. To make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Solution Architect This is an opportunity for an experienced Solution Architect to help us define the high-level technical architecture and design for a key data analytics and insights platform that powers the personalised customer engagement initiatives of the business You’ll define and communicate a shared technical and architectural vision of end-to-end designs that may span multiple platforms and domains Take on this exciting new challenge and hone your technical capabilities while advancing your career and building your network across the bank We're offering this role at vice president level What you'll do We’ll look to you to influence and promote the collaboration across platform and domain teams on the solution delivery. Partnering with platform and domain teams, you’ll elaborate the solution and its interfaces, validating technology assumptions, evaluating implementation alternatives, and creating the continuous delivery pipeline. You’ll also provide analysis of options and deliver end-to-end solution designs using the relevant building blocks, as well as producing designs for features that allow frequent incremental delivery of customer value. On top of this, you’ll be: Owning the technical design and architecture development that aligns with bank-wide enterprise architecture principles, security standards, and regulatory requirements Participating in activities to shape requirements, validating designs and prototypes to deliver change that aligns with the target architecture Promoting adaptive design practices to drive collaboration of feature teams around a common technical vision using continuous feedback Making recommendations of potential impacts to existing and prospective customers of the latest technology and customer trends Engaging with the wider architecture community within the bank to ensure alignment with enterprise standards Presenting solutions to governance boards and design review forums to secure approvals Maintaining up-to-date architectural documentation to support audits and risk assessment The skills you'll need As a Solution Architect, you’ll bring expert knowledge of application architecture, and in business data or infrastructure architecture with working knowledge of industry architecture frameworks such as TOGAF or ArchiMate. You’ll also need an understanding of Agile and contemporary methodologies with experience of working in Agile teams. A certification in cloud solutions like AWS Solution Architect is desirable while an awareness of agentic AI based application architectures using LLMs like OpenAI and agentic frameworks like LangGraph, CrewAI will be advantageous. Furthermore, you’ll need: Strong experience in solution design, enterprise architecture patterns, and cloud-native applications including the ability to produce multiple views to highlight different architectural concerns A familiarity with understanding big data processing in the banking industry Hands-on experience in AWS services, including but not limited to S3, Lambda, EMR, DynamoDB and API Gateway An understanding of big data processing using frameworks or platforms like Spark, EMR, Kafka, Apache Flink or similar Knowledge of real-time data processing, event-driven architectures, and microservices Conceptual understanding of data modelling and analytics, machine learning or deep-learning models The ability to communicate complex technical concepts clearly to peers and leadership level colleagues
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
Role: Inbound Sales Manager- B2B SaaS Location: Remote Experience: 3+ years About Us: Fireflies.ai is the leading AI teammate for meetings, trusted by over 20 million users across 500,000+ organizations, ranging from fast-growing startups to Fortune 500 enterprises. We’re revolutionizing team collaboration by automating knowledge capture and repetitive tasks, enhancing productivity across industries like sales, project management, marketing, operations, and product development. With a valuation exceeding $1 billion, Fireflies is recognized as a category-defining platform and was named the 6th most popular AI platform by Ramp, joining the ranks of OpenAI and Midjourney. We’re building a world-class, global-first team and we believe in fostering diversity and innovation. Join us as we shape the future of work! Role Overview: As Inbound Sales Manager at Fireflies.ai, you will be responsible for managing the full sales cycle—from engaging inbound leads to closing deals. You’ll also play a crucial role in onboarding new clients, ensuring they have a seamless experience with our platform. This role requires a proactive, target-driven sales professional with excellent communication skills, strong ownership, and the ability to thrive in a remote-first environment. Key Responsibilities: Prospect Conversion: Engage with inbound leads to understand their needs, deliver tailored demos, and convert prospects into paying customers. Build strong relationships and position Fireflies’ value proposition effectively. Full Sales Cycle Management: Own the entire sales process, from lead qualification and product demos to proposal creation and deal closure. Target Achievement: Meet or exceed sales targets, focusing on demo-to-win rates and deal closure timelines. Customer Onboarding: Lead new customer onboarding, ensuring a smooth transition and proper setup to maximize product adoption. Be the first point of contact for new customers, addressing initial queries and troubleshooting issues. Cross-Functional Collaboration: Work closely with engineering, customer success, and product teams to resolve onboarding challenges and implement customer feedback. Qualifications: 4-8 years of experience in B2B sales, preferably with enterprise clients. Proven track record of exceeding sales targets in a fast-paced, target-driven environment. Exceptional verbal and written communication skills, with the ability to build trust and rapport with diverse clients. Strong organizational skills, with the ability to manage multiple sales opportunities and onboarding tasks simultaneously. Proficiency with tools like HubSpot, Salesforce, Slack, Stripe, and Google Suite. A self-starter who thrives in a fully remote environment, with high ownership and accountability. Flexible to work in PST/EST time zones. Core Values: Strong communicator who values overcommunication and candid feedback. Data-driven, customer-focused, and committed to continuous improvement. Embrace fast, incremental engineering cycles with a focus on design excellence and minimizing complexity. Take initiative, hold yourself accountable, and strive for 10% improvement each week. Perks & Benefits: Competitive compensation Remote-first, with flexibility to work from anywhere Opportunities for lateral growth and career advancement Paid time off and flexible leave policy A "no boss" culture that empowers ownership and autonomy Flexible working hours to suit your lifestyle LGBTQ+ friendly workplace International offsite opportunities to connect and recharge Tech reimbursements to support your work
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Principal Software Engineer – AI Location : Gurgaon (In-Office) Working Days : Monday to Saturday (2nd and 4th Saturdays are working) Working Hours : 10:30 AM – 8:00 PM Experience : 6–10 years of hands-on development in AI/ML systems, with deep experience in shipping production-grade AI products Apply at : careers@darwix.ai Subject Line : Application – Principal Software Engineer – AI – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how large sales and CX teams operate across India, MENA, and Southeast Asia. We build deeply integrated conversational intelligence and agent assist tools that enable: Multilingual speech-to-text pipelines Real-time agent coaching AI-powered sales scoring Predictive analytics and nudges CRM and telephony integrations Our clients include leading enterprises like IndiaMart, Bank Dofar, Wakefit, GIVA, and Sobha , and our product is deeply embedded in the daily workflows of field agents, telecallers, and enterprise sales teams. We are backed by top VCs and built by alumni from IIT, IIM, and BITS with deep expertise in real-time AI, enterprise SaaS, and automation. Role Overview We are hiring a Principal Software Engineer – AI to lead the development of advanced AI features in our conversational intelligence suite. This is a high-ownership role that combines software engineering, system design, and AI/ML application delivery. You will work across our GenAI stack—including Whisper, LangChain, LLMs, audio streaming, transcript processing, NLP pipelines, and scoring models—to build robust, scalable, and low-latency AI modules that power real-time user experiences. This is not a research role. You will be building, deploying, and optimizing production-grade AI features used daily by thousands of sales agents and managers across industries. Key Responsibilities 1. AI System Architecture & Development Design, build, and optimize core AI modules such as: Multilingual speech-to-text (Whisper, Deepgram, Google STT) Prompt-based LLM workflows (OpenAI, open-source LLMs) Transcript post-processing: punctuation, speaker diarization, timestamping Real-time trigger logic for call nudges and scoring Build resilient pipelines using Python, FastAPI, Redis, Kafka , and vector databases 2. Production-Grade Deployment Implement GPU/CPU-optimized inference services for latency-sensitive workflows Use caching, batching, asynchronous processing, and message queues to scale real-time use cases Monitor system health, fallback workflows, and logging for ML APIs in live environments 3. ML Workflow Engineering Work with Head of AI to fine-tune, benchmark, and deploy custom models for: Call scoring (tone, compliance, product pitch) Intent recognition and sentiment classification Text summarization and cue generation Build modular services to plug models into end-to-end workflows 4. Integrations with Product Modules Collaborate with frontend, dashboard, and platform teams to serve AI output to users Ensure transcript mapping, trigger visualization, and scoring feedback appear in real-time in the UI Build APIs and event triggers to interface AI systems with CRMs, telephony, WhatsApp, and analytics modules 5. Performance Tuning & Optimization Profile latency and throughput of AI modules under production loads Implement GPU-aware batching, model distillation, or quantization where required Define and track key performance metrics (latency, accuracy, dropout rates) 6. Tech Leadership Mentor junior engineers and review AI system architecture, code, and deployment pipelines Set engineering standards and documentation practices for AI workflows Contribute to planning, retrospectives, and roadmap prioritization What We’re Looking For Technical Skills 6–10 years of backend or AI-focused engineering experience in fast-paced product environments Strong Python fundamentals with experience in FastAPI, Flask , or similar frameworks Proficiency in PyTorch , Transformers , and OpenAI API/LangChain Deep understanding of speech/text pipelines, NLP, and real-time inference Experience deploying LLMs and AI models in production at scale Comfort with PostgreSQL, MongoDB, Redis, Kafka, S3 , and Docker/Kubernetes System Design Experience Ability to design and deploy distributed AI microservices Proven track record of latency optimization, throughput scaling, and high-availability setups Familiarity with GPU orchestration, containerization, CI/CD (GitHub Actions/Jenkins), and monitoring tools Bonus Skills Experience working with multilingual STT models and Indic languages Knowledge of Hugging Face, Weaviate, Pinecone, or vector search infrastructure Prior work on conversational AI, recommendation engines, or real-time coaching systems Exposure to sales/CX intelligence platforms or enterprise B2B SaaS Who You Are A pragmatic builder—you don’t chase perfection but deliver what scales A systems thinker—you see across data flows, bottlenecks, and trade-offs A hands-on leader—you mentor while still writing meaningful code A performance optimizer—you love shaving off latency and memory bottlenecks A product-focused technologist—you think about UX, edge cases, and real-world impact What You’ll Impact Every nudge shown to a sales agent during a live customer call Every transcript that powers a manager’s coaching decision Every scorecard that enables better hiring and training at scale Every dashboard that shows what drives revenue growth for CXOs This role puts you at the intersection of AI, revenue, and impact —what you build is used daily by teams closing millions in sales across India and the Middle East. How to Apply Send your resume to careers@darwix.ai Subject Line: Application – Principal Software Engineer – AI – [Your Name] (Optional): Include a brief note describing one AI system you've built for production—what problem it solved, what stack it used, and what challenges you overcame. If you're ready to lead the AI backbone of enterprise sales , build world-class systems, and drive real-time intelligence at scale— Darwix AI is where you belong.
Posted 1 week ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI Engineer – Voice, NLP, and GenAI Systems Location : Sector 63, Gurgaon – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM to 8:00 PM Experience : 2–6 years in AI/ML, NLP, or applied machine learning engineering Apply at : careers@darwix.ai Subject Line : Application – AI Engineer – [Your Name] About Darwix AI Darwix AI is India’s fastest-growing GenAI SaaS platform transforming how enterprise sales, field, and support teams engage with customers. Our suite— Transform+ , Sherpa.ai , and Store Intel —powers real-time multilingual voice analytics, AI nudges, coaching systems, and computer vision analytics for major enterprises across India, MENA, and Southeast Asia. We work with some of the largest names such as Aditya Birla Capital, Sobha, GIVA, and Bank Dofar. Our systems process thousands of daily conversations, live call transcripts, and omnichannel data to deliver actionable revenue insights and in-the-moment enablement. Role Overview As an AI Engineer , you will play a key role in designing, developing, and scaling AI and NLP systems that power our core products. You will work at the intersection of voice AI, natural language processing (NLP), large language models (LLMs), and speech-to-text pipelines. You will collaborate with product, backend, and frontend teams to integrate ML models into production workflows, optimize inference pipelines, and improve the accuracy and performance of real-time analytics used by enterprise sales and field teams. Key ResponsibilitiesAI & NLP System Development Design, train, fine-tune, and deploy NLP models for conversation analysis, scoring, sentiment detection, and call summarization. Work on integrating and customizing speech-to-text (STT) pipelines (e.g., WhisperX, Deepgram) for multilingual audio data. Develop and maintain classification, extraction, and sequence-to-sequence models to handle real-world sales and service conversations. LLM & Prompt Engineering Experiment with and integrate large language models (OpenAI, Cohere, open-source LLMs) for live coaching and knowledge retrieval use cases. Optimize prompts and design retrieval-augmented generation (RAG) workflows to support real-time use in product modules. Develop internal tools for model evaluation and prompt performance tracking. Productionization & Integration Build robust model APIs and microservices in collaboration with backend engineers (primarily Python, FastAPI). Optimize inference time and resource utilization for real-time and batch processing needs. Implement monitoring and logging for production ML systems to track drift and failure cases. Data & Evaluation Work on audio-text alignment datasets, conversation logs, and labeled scoring data to improve model performance. Build evaluation pipelines and create automated testing scripts for accuracy and consistency checks. Define and track key performance metrics such as WER (word error rate), intent accuracy, and scoring consistency. Collaboration & Research Work closely with product managers to translate business problems into model design requirements. Explore and propose new approaches leveraging the latest research in voice, NLP, and generative AI. Document research experiments, architecture decisions, and feature impact clearly for internal stakeholders. Required Skills & Qualifications 2–6 years of experience in AI/ML engineering, preferably with real-world NLP or voice AI applications. Strong programming skills in Python , including libraries like PyTorch, TensorFlow, Hugging Face Transformers. Experience with speech processing , audio feature extraction, or STT pipelines. Solid understanding of NLP tasks: tokenization, embedding, NER, summarization, intent detection, sentiment analysis. Familiarity with deploying models as APIs and integrating them with production backend systems. Good understanding of data pipelines, preprocessing techniques, and scalable model architectures. Preferred Qualifications Prior experience with multilingual NLP systems or models tuned for Indian languages. Exposure to RAG pipelines , embeddings search (e.g., FAISS, Pinecone), and vector databases. Experience working with voice analytics, diarization, or conversational scoring frameworks. Understanding of DevOps basics for ML (MLflow, Docker, GitHub Actions for model deployment). Experience in SaaS product environments serving enterprise clients. Success in This Role Means Accurate, robust, and scalable AI models powering production workflows with minimal manual intervention. Inference pipelines optimized for enterprise-scale deployments with high availability. New features and improvements delivered quickly to drive direct business impact. AI-driven insights and automations that enhance user experience and boost revenue outcomes for clients. You Will Excel in This Role If You Love building AI systems that create measurable value in the real world, not just in research labs. Enjoy solving messy, real-world data problems and working on multilingual and noisy data. Are passionate about voice and NLP, and constantly follow advancements in GenAI. Thrive in a fast-paced, high-ownership environment where ideas quickly become live features. How to Apply Email your updated CV to careers@darwix.ai Subject Line: Application – AI Engineer – [Your Name] (Optional): Share links to your GitHub, open-source contributions, or a short note about a model or system you designed and deployed in production. This is an opportunity to build foundational AI systems at one of India’s fastest-scaling GenAI startups and to impact how large enterprises engage millions of customers every day. If you are ready to transform how AI meets revenue teams—Darwix AI wants to hear from you.
Posted 1 week ago
0 years
12 - 20 Lacs
Ahmedabad, Gujarat, India
On-site
We are looking for a seasoned fullstack developer who combines hands-on technical skills with strong experience in AI/ML and a deep understanding of market & website analytics. This role will play a strategic part in building intelligent platforms and driving data-informed decisions. Key Responsibilities Lead fullstack development across products (preferably using Node.js, React/Angular, MongoDB/PostgreSQL) Build and integrate AI-powered features (NLP, predictive analytics, automation workflows, recommendation systems) Implement and analyze data from tools like Google Analytics, Mixpanel, Amplitude, Hotjar, etc. Develop dashboards and data visualizations to support marketing and product teams Collaborate with designers, data engineers, and product managers to align tech with business strategy Own product performance and conversion metrics with regular audits and improvements Evaluate user behavior, funnels, and attribution to improve product and marketing decisions Required Skills Strong fullstack capabilities (JavaScript, Node.js, React/Angular, APIs, databases) Solid understanding and practical experience in AI/ML frameworks (e.g., Python, TensorFlow, Hugging Face, OpenAI API, Langchain) Proficiency with website analytics, user behavior tracking, and A/B testing Experience in building or integrating marketing analytics systems Ability to generate insights from data and present them meaningfully to stakeholders Knowledge of SEO, CRO, and performance optimization best practices Skills:- NodeJS (Node.js), MongoDB, Python, API and Search Engine Optimization (SEO)
Posted 1 week ago
12.0 years
0 Lacs
Gurugram, Haryana, India
Remote
🧠 Job Title: Engineering Manager Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 7–12 Years Compensation: Competitive salary + ESOPs + Performance-based bonuses 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing AI-first startups, building next-gen conversational intelligence and real-time agent assist tools for sales teams globally. We’re transforming how enterprise sales happens across industries like BFSI, real estate, retail, and telecom with a GenAI-powered platform that combines multilingual transcription, NLP, real-time nudges, knowledge base integration, and performance analytics—all in one. Our clients include some of the biggest names in India, MENA, and SEA. We’re backed by marquee venture capitalists, 30+ angel investors, and operators from top AI, SaaS, and B2B companies. Our founding team comes from IITs, IIMs, BITS Pilani, and global enterprise AI firms. Now, we’re looking for a high-caliber Engineering Manager to help lead the next phase of our engineering evolution. If you’ve ever wanted to build and scale real-world AI systems for global use cases—this is your shot. 🎯 Role Overview As Engineering Manager at Darwix AI, you will be responsible for leading and managing a high-performing team of backend, frontend, and DevOps engineers. You will directly oversee the design, development, testing, and deployment of new features and system enhancements across Darwix’s AI-powered product suite. This is a hands-on technical leadership role , requiring the ability to code when needed, conduct architecture reviews, resolve blockers, and manage the overall engineering execution. You’ll work closely with product managers, data scientists, QA teams, and the founders to deliver on roadmap priorities with speed and precision. You’ll also be responsible for building team culture, mentoring developers, improving engineering processes, and helping the organization scale its tech platform and engineering capacity. 🔧 Key Responsibilities1. Team Leadership & Delivery Lead a team of 6–12 software engineers (across Python, PHP, frontend, and DevOps). Own sprint planning, execution, review, and release cycles. Ensure timely and high-quality delivery of key product features and platform improvements. Solve execution bottlenecks and ensure clarity across JIRA boards, product documentation, and sprint reviews. 2. Architecture & Technical Oversight Review and refine high-level and low-level designs proposed by the team. Provide guidance on scalable architectures, microservices design, performance tuning, and database optimization. Drive migration of legacy PHP code into scalable Python-based microservices. Maintain technical excellence across deployments, containerization, CI/CD, and codebase quality. 3. Hiring, Coaching & Career Development Own the hiring and onboarding process for engineers in your pod. Coach team members through 1:1s, OKRs, performance cycles, and continuous feedback. Foster a culture of ownership, transparency, and high-velocity delivery. 4. Process Design & Automation Drive adoption of agile development practices—daily stand-ups, retrospectives, sprint planning, documentation. Ensure production-grade observability, incident tracking, root cause analysis, and rollback strategies. Introduce quality metrics like test coverage, code review velocity, time-to-deploy, bug frequency, etc. 5. Cross-functional Collaboration Work closely with the product team to translate high-level product requirements into granular engineering plans. Liaise with QA, AI/ML, Data, and Infra teams to coordinate implementation across the board. Collaborate with customer success and client engineering for debugging and field escalations. 🔍 Technical Skills & Stack🔹 Primary Languages & Frameworks: Python (FastAPI, Flask, Django) PHP (legacy services; transitioning to Python) TypeScript, JavaScript, HTML5, CSS3 Mustache templates (preferred), React/Next.js (optional) 🔹 Databases & Storage: MySQL (primary), PostgreSQL MongoDB, Redis Vector DBs: Pinecone, FAISS, Weaviate (RAG pipelines) 🔹 AI/ML Integration: OpenAI APIs, Whisper, Wav2Vec, Deepgram Langchain, HuggingFace, LlamaIndex, LangGraph 🔹 DevOps & Infra: AWS EC2, S3, Lambda, CloudWatch Docker, GitHub Actions, Nginx Git (GitHub/GitLab), Jenkins (optional) 🔹 Monitoring & Testing: Prometheus, Grafana, Sentry PyTest, Selenium, Postman ✅ Candidate Profile👨💻 Experience: 7–12 years of total engineering experience in high-growth product companies or startups. At least 2 years of experience managing teams as a tech lead or engineering manager. Experience working on real-time data systems, microservices architecture, and SaaS platforms. 🎓 Education: Bachelor’s or Master’s degree in Computer Science or related field. Preferred background from Tier 1 institutions (IITs, BITS, NITs, IIITs). 💼 Traits We Love: You lead with clarity, ownership, and high attention to detail. You believe in building systems—not just shipping features. You are pragmatic and prioritize team delivery velocity over theoretical perfection. You obsess over latency, clean interfaces, and secure deployments. You want to build a high-performing tech org that scales globally. 🌟 What You’ll Get Leadership role in one of India’s top GenAI startups Competitive fixed compensation with performance bonuses Significant ESOPs tied to company milestones Transparent performance evaluation and promotion framework A high-speed environment where builders thrive Access to investor and client demos, roadshows, GTM huddles, and more Annual learning allowance and access to internal AI/ML bootcamps Founding-team-level visibility in engineering decisions and product innovation 🛠️ Projects You’ll Work On Real-time speech-to-text engine in 11 Indian languages AI-powered live nudges and agent assistance in B2B sales Conversation summarization and analytics for 100,000+ minutes/month Automated call scoring and custom AI model integration Multimodal input processing: audio, text, CRM, chat Custom knowledge graph integrations across BFSI, real estate, retail 📢 Why This Role Matters This is not just an Engineering Manager role. At Darwix AI, every engineering decision feeds directly into how real sales teams close deals. You’ll see your work powering real-time customer calls, nudging field reps in remote towns, helping CXOs make hiring decisions, and making a measurable impact on enterprise revenue. You’ll help shape the core technology platform of a company that’s redefining how humans and machines interact in sales. 📩 How to Apply Email your resume, GitHub/portfolio (if any), and a few lines on why this role excites you to: 📧 people@darwix.ai Subject: Application – Engineering Manager – [Your Name] If you’re a technical leader who thrives on velocity, takes pride in mentoring developers, and wants to ship mission-critical AI systems that power revenue growth across industries, this is your stage . Join Darwix AI. Let’s build something that lasts.
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer – Backend Engineering Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience Required: 4–8 Years About Darwix AI Darwix AI is building India’s most advanced GenAI-powered platform for enterprise sales teams. We combine speech recognition, LLMs, vector databases, real-time analytics, and multilingual intelligence to power customer conversations across India, the Middle East, and Southeast Asia. We’re solving complex backend problems across speech-to-text pipelines , agent assist systems , AI-based real-time decisioning , and scalable SaaS delivery . Our engineering team sits at the core of our product and works closely with AI research, product, and client delivery to build the future of revenue enablement. Backed by top-tier VCs, AI advisors, and enterprise clients, this is a chance to build something foundational. Role Overview We are hiring a Senior Python Developer to architect, implement, and optimize high-performance backend systems that power our AI platform. You will take ownership of key backend services—from core REST APIs and data pipelines to complex integrations with AI/ML modules. This role is for builders. You’ll work closely with product, AI, and infra teams, write production-grade Python code, lead critical decisions on architecture, and help shape engineering best practices. Key Responsibilities 1. Backend API Development Design and implement scalable, secure RESTful APIs using FastAPI , Flask , or Django REST Framework Architect modular services and microservices to support AI, transcription, real-time analytics, and reporting Optimize API performance with proper indexing, pagination, caching, and load management strategies Integrate with frontend systems, mobile clients, and third-party systems through clean, well-documented endpoints 2. AI Integrations & Inference Orchestration Work closely with AI engineers to integrate GenAI/LLM APIs (OpenAI, Llama, Gemini), transcription models (Whisper, Deepgram), and retrieval-augmented generation (RAG) workflows Build services to manage prompt templates, chaining logic, and LangChain flows Deploy and manage vector database integrations (e.g., FAISS , Pinecone , Weaviate ) for real-time search and recommendation pipelines 3. Database Design & Optimization Model and maintain relational databases using MySQL or PostgreSQL ; experience with MongoDB is a plus Optimize SQL queries, schema design, and indexes to support low-latency data access Set up background jobs for session archiving, transcript cleanup, and audio-data binding 4. System Architecture & Deployment Own backend deployments using GitHub Actions , Docker , and AWS EC2 Ensure high availability of services through containerization, horizontal scaling, and health monitoring Manage staging and production environments, including DB backups, server health checks, and rollback systems 5. Security, Auth & Access Control Implement robust authentication (JWT, OAuth), rate limiting , and input validation Build role-based access controls (RBAC) and audit logging into backend workflows Maintain compliance-ready architecture for enterprise clients (data encryption, PII masking) 6. Code Quality, Documentation & Collaboration Write clean, modular, extensible Python code with meaningful comments and documentation Build test coverage (unit, integration) using PyTest , unittest , or Postman/Newman Participate in pull requests, code reviews, sprint planning, and retrospectives with the engineering team Required Skills & QualificationsTechnical Expertise 3–8 years of experience in backend development with Python, PHP. Strong experience with FastAPI , Flask , or Django (at least one in production-scale systems) Deep understanding of RESTful APIs , microservice architecture, and asynchronous Python patterns Strong hands-on with MySQL (joins, views, stored procedures); bonus if familiar with MongoDB , Redis , or Elasticsearch Experience with containerized deployment using Docker and cloud platforms like AWS or GCP Familiarity with Git , GitHub , CI/CD pipelines , and Linux-based server environments Plus Points Experience working on audio processing , speech-to-text (STT) pipelines, or RAG architectures Hands-on with vector databases or LangChain , LangGraph Exposure to real-time systems, WebSockets, and stream processing Basic understanding of frontend integration workflows (e.g., with HTML/CSS/JS interfaces)
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the Lead AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – Lead AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world.
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🧠 Job Title: Senior Machine Learning Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 4–8 years Education : B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields 🚀 About Darwix AI Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI , and deep analytics to deliver better conversations, faster deal cycles, and consistent growth. Our flagship platform, Transform+ , analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast. As a Senior Machine Learning Engineer , you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights. 🎯 Role Overview This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines. You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value. 🧪 Key Responsibilities🔬 1. Model Design, Training, and Optimization Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources. Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems. Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models. 📊 2. Data Engineering & Feature Extraction Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks. Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks. Automate data collection from APIs, CRMs, sales transcripts, and call logs. ⚙️ 3. Productionizing ML Pipelines Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks). Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines. Ensure production readiness: logging, monitoring, rollback, and fail-safes. 📈 4. Experimentation & Evaluation Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops. Continuously optimize model performance (latency, accuracy, precision-recall trade-offs). Implement drift detection and re-training pipelines for models in production. 🔁 5. Collaboration with Product & Engineering Translate business problems into ML problems and align modeling goals with user outcomes. Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features. Contribute to the product roadmap with ML-driven ideas and prototypes. 🛠️ 6. Innovation & Technical Leadership Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques. Drive innovation in voice-to-insight systems (ASR + Diarization + NLP). Mentor junior engineers and contribute to best practices in ML development and deployment. 🧰 Tech Stack🔧 Languages & Frameworks Python (core), SQL, Bash PyTorch, TensorFlow, HuggingFace, scikit-learn, XGBoost, LightGBM 🧠 ML & AI Ecosystem Transformers, RNNs, CNNs, CRFs BERT, RoBERTa, GPT-style models OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude FAISS, Pinecone, Qdrant, LlamaIndex ☁️ Deployment & Infrastructure Docker, Kubernetes, GitHub Actions, Jenkins AWS (EC2, Lambda, S3, SageMaker), GCP, Azure Redis, PostgreSQL, MongoDB 📊 Monitoring & Experimentation MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana 👨💼 Qualifications🎓 Education Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines. Certifications in advanced ML, data science, or AI are a plus. 🧑💻 Experience 4–8 years of hands-on experience in applied machine learning. Demonstrated success in deploying models to production at scale. Deep familiarity with transformer-based architectures and model evaluation. ✅ You’ll Excel In This Role If You… Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration. Obsess over clean, maintainable, reusable code and pipelines. Think from first principles and challenge model assumptions when they don’t work. Are deeply curious and have built multiple projects just because you wanted to know how something works. Are comfortable working with ambiguity, fast timelines, and real-time data challenges. Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos. 💼 What You’ll Get at Darwix AI Work with some of the brightest minds in AI , product, and design. Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases. Direct mentorship from senior architects and AI scientists. Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory. Opportunity to shape the future of a global-first AI startup built from India. Hands-on experience with the most advanced tech stack in applied ML and production AI. Front-row seat to a generational company that is redefining enterprise AI. 📩 How to Apply Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?” Email: people@darwix.ai Subject: Senior ML Engineer – Application 🔐 Final Notes We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code. If you’re looking for a role where you own real impact , push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you. Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
Posted 1 week ago
2.0 years
0 Lacs
India
Remote
We are seeking an AI-first Operations Analyst to lead the integration and optimization of AI technologies within the Marketing team. This role is central to scaling intelligent marketing by operationalizing advanced AI/ML models, large language models (LLMs), and generative AI tools to drive growth, efficiency, and personalization across the funnel. Responsibilities AI Systems Implementation: Lead the rollout and integration of AI tools across the marketing tech stack, including platforms like OpenAI, Jasper, HubSpot AI, and custom LLMs. AI-Driven Campaign Optimization: Use AI to automate and continuously optimize digital campaigns, content distribution, lead nurturing, and personalization at scale. LLM Workflow Design & Prompt Engineering: Build and refine workflows powered by large language models (e.g., ChatGPT), enabling automated content generation, audience targeting, and internal knowledge access. Predictive Lead Scoring & Buyer Intent Modeling: Deploy AI/ML models to score leads, predict conversion likelihood, segment audiences dynamically, and route leads intelligently to sales teams. AI Automation & Process Orchestration: Automate marketing processes (e.g., reporting, content tagging, CRM updates) using AI and low-code/no-code tools like Zapier or Make. Insight Generation & Decision Support: Use AI to extract insights from marketing performance data, identify trends, recommend actions, and generate auto-summaries for stakeholders. AI Governance & Model Monitoring: Define guardrails for ethical AI usage in marketing. Track model performance, ensure data privacy compliance, and continuously improve system reliability. Qualifications: Bachelor’s degree in Computer Science, Data Science, Marketing Technology, or a related field. 2+ years of experience in AI operations, marketing technology, or data-driven marketing roles. Required Skills: Ability to build and manage custom APIs to connect disparate tools, automate workflows, and enhance marketing performance. Hands-on skills in Python, SQL, or JavaScript for data transformation and API integration. Experience building intelligent agents or copilots tailored for marketing teams to drive efficiency and insights. Experience in using AI to improve lead scoring, content generation, campaign optimization, or customer segmentation. Benefits: Remote First Policy 5 Days Working With FLEXI Hours Group Medical Insurance (Parents, Spouse, Children) Group Accident Cover Company-Sponsored Device Education Reimbursement Policy.
Posted 1 week ago
3.0 - 4.5 years
0 Lacs
India
On-site
Key Responsibilities Design and build LLM guardrails for prompt injection protection, toxicity/bias detection, and hallucination/jailbreak identification. Build and maintain evaluation frameworks to monitor LLM safety, fairness, and compliance. Develop automated pipelines to process, tag, and evaluate LLM outputs using Python and SQL. Leverage vector databases and embeddings to detect unsafe content or model drift. Create internal dashboards and visualizations (Streamlit, Dash, or lightweight React/JS) for POCs and internal tools. Collaborate with ML engineers and product teams to integrate LLM safety components into production APIs or applications. Stay current with AI safety research, emerging tools (Ragas, LangChain, Guardrails.ai), and regulatory standards (EU AI Act, NIST AI RMF). Required Qualifications 3 - 4.5 years of experience in data science, applied ML, or LLM-based applications. Strong programming skills in Python and experience writing SQL for data exploration or feature engineering. Solid understanding of NLP, deep learning (CNN/RNN/Transformers), and LLM architectures. Hands-on experience with Hugging Face, LangChain, LLM APIs (OpenAI, Anthropic), and vector stores (FAISS, Pinecone, Chroma). Familiarity with front-end basics (Streamlit, Dash, or simple HTML/CSS/JS) is a plus , not mandatory. Experience with model evaluation, red‑teaming, or safety interventions in NLP/LLM systems. (Preferred) Familiarity with deploying ML pipelines in production (Docker, FastAPI) and ability to thrive in a fast‑paced startup environment.
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Engineering Lead Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 5–10 Years Compensation: Competitive + Performance-based incentives + Meaningful ESOPs 🧠 About Darwix AI Darwix AI is one of India’s fastest-growing AI startups, building the future of enterprise revenue intelligence. We offer a GenAI-powered conversational intelligence and real-time agent assist suite that transforms how large sales teams interact, close deals, and scale operations. We’re already live with enterprise clients across India, the UAE, and Southeast Asia , and our platform enables multilingual speech-to-text, AI-driven nudges, and contextual conversation coaching—backed by our proprietary LLMs and cutting-edge voice infrastructure. With backing from top-tier VCs and over 30 angel investors, we’re now hiring an Engineering Lead who can architect, own, and scale the core engineering stack as we prepare for 10x growth. 🌟 Role Overview As the Engineering Lead at Darwix AI , you’ll take ownership of our platform architecture, product delivery, and engineering quality across the board. You’ll work closely with the founders, product managers, and the AI team to convert fast-moving product ideas into scalable features. You will: Lead backend and full-stack engineers across microservices, APIs, and real-time pipelines Architect scalable systems for AI/LLM deployments Drive code quality, maintainability, and engineering velocity This is a hands-on, player-coach role —perfect for someone who loves building but is also excited about mentoring and growing a technical team. 🎯 Key Responsibilities🛠️ Technical Leadership Own technical architecture across backend, frontend, and DevOps stacks Translate product roadmaps into high-performance, production-ready systems Drive high-quality code reviews, testing practices, and performance optimization Make critical system-level decisions around scalability, security, and reliability 🚀 Feature Delivery Work with the product and AI teams to build new features around speech recognition, diarization, real-time coaching, and analytics dashboards Build and maintain backend services for data ingestion, processing, and retrieval from Vector DBs, MySQL, and MongoDB Create clean, reusable APIs (REST & WebSocket) that power our web-based agent dashboards 🧱 System Architecture Refactor monoliths into microservice-based architecture Optimize real-time data pipelines with Redis, Kafka, and async queues Implement serverless modules using AWS Lambda, Docker containers, and CI/CD pipelines 🧑🏫 Mentorship & Team Building Lead a growing team of engineers—guide on architecture, code design, and performance tuning Foster a culture of ownership, documentation, and continuous learning Mentor junior developers, review PRs, and set up internal coding best practices 🔄 Collaboration Act as the key technical liaison between Product, Design, AI/ML, and DevOps teams Work directly with founders on roadmap planning, delivery tracking, and go-live readiness Contribute actively to investor tech discussions, client onboarding, and stakeholder calls ⚙️ Our Tech Stack Languages: Python (FastAPI, Django), PHP (legacy support), JavaScript, TypeScript Frontend: HTML, CSS, Bootstrap, Mustache templates; (React.js/Next.js optional) AI/ML Integration: LangChain, Whisper, RAG pipelines, Transformers, Deepgram, OpenAI APIs Databases: MySQL, PostgreSQL, MongoDB, Redis, Pinecone/FAISS (Vector DBs) Cloud & Infra: AWS EC2, S3, Lambda, CloudWatch, Docker, GitHub Actions, Nginx DevOps: Git, Docker, CI/CD pipelines, Jenkins/GitHub Actions, load testing Tools: Jira, Notion, Slack, Postman, Swagger 🧑💼 Who You Are 5–10 years of professional experience in backend/full-stack development Proven experience leading engineering projects or mentoring junior devs Comfortable working in high-growth B2B SaaS startups or product-first orgs Deep expertise in one or more backend frameworks (Django, FastAPI, Laravel, Flask) Experience working with AI products or integrating APIs from OpenAI, Deepgram, HuggingFace is a huge plus Strong understanding of system design, DB normalization, caching strategies, and latency optimization Bonus: exposure to working with voice pipelines (STT/ASR), NLP models, or real-time analytics 📌 Qualities We’re Looking For Builder-first mindset – you love launching features fast and scaling them well Execution speed – you move with urgency but don’t break things Hands-on leadership – you guide people by writing code, not just processes Problem-solver – when things break, you own the fix and the root cause Startup hunger – you thrive on chaos, ambiguity, and shipping weekly 🎁 What We Offer High Ownership : Directly shape the product and its architecture from the ground up Startup Velocity : Ship fast, learn fast, and push boundaries Founding Engineer Exposure : Work alongside IIT-IIM-BITS founders with full transparency Compensation : Competitive salary + meaningful equity + performance-based incentives Career Growth : Move into an EM/CTO-level role as the org scales Tech Leadership : Own features end-to-end—from spec to deployment 🧠 Final Note This is not just another engineering role. This is your chance to: Own the entire backend for a GenAI product serving global enterprise clients Lead technical decisions that define our future infrastructure Join the leadership team at a startup that’s shipping faster than anyone else in the category If you're ready to build a product with 10x potential, join a high-output team, and be the reason why the tech doesn’t break at scale , this role is for you. 📩 How to Apply Send your resume to people@darwix.ai with the subject line: “Application – Engineering Lead – [Your Name]” Attach: Your latest CV or LinkedIn profile GitHub/portfolio link (if available) A short note (3–5 lines) on why you're excited about Darwix AI and this role
Posted 1 week ago
1.0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. Previous Work Experience Required Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. Technical Skills Required Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LammaIndex, OLamma, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as mlflow, langsmith, langfuse, weight and bias, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create future with more cheer
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview At BlueKaktus, we're leveraging cutting-edge cloud technology to transform the $3 trillion Fashion & Lifestyle industry. Own, architect and deliver core modules of our multi-agentic AI SaaS platform - spanning user-facing React micro-front-ends, Python micro-services, Postgres persistence and AWS infrastructure, while mentoring the next wave of engineers in a hyper-growth environment. Key Responsibilities End-to-end ownership: translate product vision into secure, scalable features; drive design, coding, review, testing and deployment. Platform evolution: design fault-tolerant, multi-tenant architectures; weave in multi-agent LLM workflows, vector search and RAG pipelines. Dev-excellence: champion CI/CD, IaC (Terraform/CDK), automated testing, observability and cost-aware cloud operations. Technical leadership: mentor 2-4 engineers, set coding standards, lead architecture reviews and sprint planning. Cross-functional collaboration: pair with Product, DevRel and GTM to ship business-impacting releases every 2-3 weeks. Must-Have Skills 4-6 yrs building production SaaS; 3 yrs in Python back-ends (FastAPI/Django/Flask) and React (hooks, TS). Deep SQL & Postgres tuning; distributed systems know-how (caching, queues, event-driven design). Hands-on AWS (EKS/Lambda, S3, Aurora, IAM) and containerisation (Docker, Kubernetes). Proven track record of shipping at >1 M MAU or >10K TPS scale. Strong DSA, design patterns, code review and mentoring chops. Nice-to-Haves LangChain / Agents / Vector DBs, OpenAI/Anthropic/LLama APIs. Experience with feature-flag systems, multi-region deployments, SOC-2 / ISO-27001 compliance. Apply now at recruitment@bluekaktus.com and join us in transforming fashion with technology!
Posted 1 week ago
5.0 years
5 - 9 Lacs
Calicut
On-site
We are excited to share a fantastic opportunity for the AI Lead/Sr. AI-ML Engineer position at Gritstone Technologies . We believe your skills and experience could be a perfect match for this role, and we would love for you to explore this opportunity with us. Design and implement scalable, high-performance AI/ML architectures with Python tailored for real-time and batch processing use cases. Lead the development of robust, end-to-end AI pipelines, including advanced data preprocessing, feature engineering, model development, and deployment. Define and drive the integration of AI solutions across cloud-native platforms (AWS, Azure, GCP) with optimized cost-performance trade-offs. Architect and deploy multimodal AI systems, leveraging advanced NLP (e.g., LLMs, OpenAI-based customizations, scanned invoice data extraction), computer vision (e.g., inpainting, super-resolution scaling, video-based avatar generation), and generative AI technologies (e.g., video and audio generation). Integrate domain-specific AI solutions, such as reinforcement learning, and self-supervised learning models. Implement distributed training and inferencing pipelines using state-of-the-art frameworks. Drive model optimization through quantization, pruning, sparsity techniques, and mixed-precision training to maximize performance across GPU hardware. Develop scalable solutions using large vision-language models (VLMs) and large language models (LLMs). Define and implement MLOps practices for version control, CI/CD pipelines, and automated model deployment using tools like Kubernetes, Docker, Kubeflow, and FastAPI. Enable seamless integration of databases (SQL Server, MongoDB, NoSQL) with AI workflows. Drive cutting-edge research in AI/ML, including advancements in RLHF, retrieval-augmented generation (RAG), and multimodal knowledge graphs. Experiment with emerging generative technologies, such as diffusion models for video generation and neural audio synthesis. Collaborate with cross-functional stakeholders to deliver AI-driven business solutions aligned with organizational goals. null 5+ years of Experience
Posted 1 week ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
Accenture
39817 Jobs | Dublin
Wipro
19388 Jobs | Bengaluru
Accenture in India
15458 Jobs | Dublin 2
EY
14907 Jobs | London
Uplers
11185 Jobs | Ahmedabad
Amazon
10459 Jobs | Seattle,WA
IBM
9256 Jobs | Armonk
Oracle
9226 Jobs | Redwood City
Accenture services Pvt Ltd
7971 Jobs |
Capgemini
7704 Jobs | Paris,France