Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Dear Candidate,

Greetings from Binary Semantics Ltd! Immediate hiring for AI Chatbot Developer - Gurgaon (WFO).

About the Product (Chatbot):
AI chatbots have become instrumental in revolutionizing various industries, including insurtech, taxation, and fleet management. In the insurtech sector, AI chatbots enhance customer interactions by providing instant assistance with policy inquiries, claims processing, and policy renewals. These chatbots leverage natural language processing to understand and respond to customer queries, improving the overall customer experience and reducing response times.

In taxation, AI chatbots streamline complex processes by assisting users with tax-related questions, helping them navigate tax regulations, and providing real-time updates on changes in tax laws. These chatbots can guide users through the filing process, ensuring accuracy and compliance while simplifying the overall tax experience.

Fleet management benefits from AI chatbots through automated communication and decision-making. Chatbots in this context can provide real-time information on vehicle locations, maintenance schedules, and fuel consumption. They enable efficient coordination of fleet activities, optimizing routes and addressing maintenance issues promptly. This not only improves operational efficiency but also contributes to cost savings and enhanced safety.

Python Developer (AI Chatbot)
Education: B.Tech / M.Tech
Experience: 2-4 years
Location: Gurgaon
Notice Period: Immediate, or a maximum of 10-15 days

Job Description:
Develop chatbots and voice assistants on various platforms for diverse business use cases
Work on a chatbot framework/architecture using an open-source tool or library
Implement Natural Language Processing (NLP) for chatbots
Integrate chatbots with management dashboards and CRMs
Resolve complex technical design issues by analyzing logs, debugging code, and identifying technical issues/challenges/bugs in the process
Understand business requirements and translate them into technical requirements
Be open-minded, flexible, and willing to adapt to changing situations
Work independently as well as on a team, and learn from colleagues
Optimize applications for maximum speed and scalability

Skills Required:
Minimum 2+ years of experience in chatbot development using any open-source framework (e.g., Rasa, Botpress)
Experience with both text-to-speech and speech-to-text
Good understanding of various chatbot frameworks/platforms/libraries
Experience integrating bots with platforms like Facebook Messenger, Slack, Twitter, WhatsApp, etc.
Experience applying NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design
Familiarity with these terms: tokenization, n-grams, stemmers, lemmatization, part-of-speech tagging, entity resolution, ontology, lexicology, phonetics, intents, entities, and context
Knowledge of SQL and/or NoSQL databases such as MySQL, MongoDB, Cassandra, Redis, PostgreSQL

Interested candidates may share their resume at Juhi.khubchandani@binarysemantics.com with the following details:
Total Exp:
Exp in Chatbot development:
Exp in RASA:
CTC:
ECTC:
NP:
Location:
Ready for Gurgaon (WFO):

Regards,
Juhi Khubchandani
Talent Acquisition
Binary Semantics Ltd.
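For illustration only (not part of the posting above): a minimal sketch of several preprocessing terms named in the Skills Required list (tokenization, n-grams, stemming, lemmatization, part-of-speech tagging) using NLTK. The sample sentence and the downloaded resource names are assumptions for demonstration and may vary by NLTK version.

```python
# Minimal illustration of the NLP preprocessing terms named above, using NLTK.
# The sample sentence is invented for demonstration; required NLTK corpora are
# downloaded at runtime (names may differ slightly across NLTK versions).
import nltk
from nltk import ngrams, pos_tag
from nltk.stem import PorterStemmer, WordNetLemmatizer
from nltk.tokenize import word_tokenize

for resource in ("punkt", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(resource, quiet=True)

sentence = "My policy renewal claims were processed quickly."

tokens = word_tokenize(sentence)                               # tokenization
bigrams = list(ngrams(tokens, 2))                              # n-grams
stems = [PorterStemmer().stem(t) for t in tokens]              # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]    # lemmatization
tags = pos_tag(tokens)                                         # part-of-speech tagging

print(tokens, bigrams, stems, lemmas, tags, sep="\n")
```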
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Company
Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role
The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and in high quality.

Responsibilities
Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications
Minimum Experience (relevant): 3 years
Maximum Experience (relevant): 5 years

Required Skills
Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
Strong experience working with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.)
Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL)
Experience building data science models for use in front-end, user-facing applications, such as recommendation models
Experience with REST APIs, JSON, and streaming datasets
Understanding of graph data; Neo4j is a plus
Strong understanding of RDBMS data structures, Azure Tables, Blob, and other data sources
Understanding of Jenkins and CI/CD processes using Git for cloud configs and standard code repositories, such as ADF configs and Databricks

Preferred Skills
Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university; a Master's degree from an accredited college or university is preferred. Or equivalent work experience.
Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases, preferably graph DBs.
Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and data sets on Graph and Azure Data Lake.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Strong analytic skills related to working with unstructured datasets.
Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
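For illustration only (not part of the posting above): the role asks for hands-on familiarity with embeddings, FAISS, and RAG. The sketch below shows the retrieval half of a RAG pipeline, embedding a few documents with an SBERT-style encoder and querying a FAISS index. The model name and sample documents are assumptions chosen for the example.

```python
# Illustrative retrieval step of a RAG pipeline: embed documents, index them in
# FAISS, and fetch the nearest chunks for a user query. Model name and sample
# texts are placeholders, not taken from the job posting.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Marcel recommends learning content based on a user's profile.",
    "Retrieval-augmented generation grounds LLM answers in retrieved passages.",
    "Airflow schedules the nightly retraining pipeline.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # SBERT-style encoder
doc_vecs = model.encode(docs).astype("float32")
faiss.normalize_L2(doc_vecs)                         # cosine similarity via inner product

index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query_vec = model.encode(["How does RAG reduce hallucinations?"]).astype("float32")
faiss.normalize_L2(query_vec)
scores, ids = index.search(query_vec, 2)

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")   # top passages that would be passed to the LLM
```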
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
International Card Services (ICS) Risk & Control is looking for a Manager, ICS Complaint Program Reporting and Insights, with a specific focus on establishing the Reporting and Business Insights workstream for the Complaints Program in line with the requirements of AEMP71. The role involves extensive collaboration with multiple partners across the Global Servicing Group, international markets and legal entities, and ICS Control Management.

The Manager – ICS Complaint Program Reporting and Insights will:
Lead and develop the ICS Complaints Reporting and Insights program
Establish the analytics, insights, and regulatory reporting for the ICS Complaints Program
Collaborate directly with senior leaders to help them understand complaint trends and how they can respond to them
Identify complaint themes leveraging data insights and refer them to ICS and LE leadership as appropriate
Proactively analyze risk trends, undertake root-cause analysis, and provide consultative support to business and stakeholders
Ensure all regulatory requests are managed with 100% accuracy and timeliness

The Manager, Complaints Reporting and Insights will:
Design, build, and maintain dashboards and automated reports leveraging ICS Complaints data
Analyze complaint data to help identify root causes, areas of concern, and potential issues
Compile thematic risk reporting (levels, trends, causes) to provide actionable and meaningful insights to the business on current risk levels, emerging trends, and root causes
Translate complex data into concise, impactful visualizations and presentations for decision making
Proactively identify opportunities to improve data quality, reporting processes, and analytical capabilities
Utilize Natural Language Processing and generative AI tools to automate report generation, summarize data insights, and improve data storytelling
Collaborate with stakeholders to define KPIs, reporting needs, and performance metrics
Research and implement AI-driven BI innovations to continuously enhance business insights and reporting best practices

Required Qualifications:
5+ years of experience in Data Analytics, generating Business Insights, or a similar role
Proficient analytical and problem-solving skills, with an ability to analyze data, identify trends, and evaluate risk scenarios effectively
Hands-on experience with Python, R, Tableau (Developer or Desktop Certified Professional), Power BI, Cornerstone, SQL, HIVE, and advanced MS Excel (Macros, Pivots)
Hands-on experience with AI/ML frameworks, NLP, sentiment analysis, text summarization, etc.
Strong analytical, critical thinking, and problem-solving skills
Ability to communicate complex findings clearly to both technical and non-technical audiences

Preferred Qualifications:
Bachelor’s degree in Business, Risk Management, Statistics, Computer Science, or a related field; advanced degrees (e.g., MBA, MSc) or certifications are advantageous
Experience in at least one of the following:
- Providing identification of operational risks throughout business processes and systems
- Enhancing risk assessments and associated methodologies
- Reviewing and creating thematic risk reporting to provide actionable insights into risk levels, emerging trends, and root causes
Experience in the financial services industry
Experience in Big Data and Data Science will be a definite advantage
Familiarity with ERP systems or business process tools
Knowledge of predictive analytics

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
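For illustration only (not part of the posting above): the role mentions using NLP to automate report generation, sentiment analysis, and summarization. A rough sketch of that idea with Hugging Face pipelines follows; the complaint text is invented and the default pipeline models are assumptions.

```python
# Rough illustration of the "NLP and generative AI to automate report
# generation" responsibility: score complaint sentiment and produce a short
# summary with Hugging Face pipelines. The complaint text is a placeholder.
from transformers import pipeline

complaint = (
    "I was charged an annual fee after being told it would be waived, "
    "and it took three calls to reach an agent who could explain why."
)

sentiment = pipeline("sentiment-analysis")(complaint)[0]
summary = pipeline("summarization")(complaint, max_length=30, min_length=5)[0]

print(f"Sentiment: {sentiment['label']} ({sentiment['score']:.2f})")
print(f"Summary:   {summary['summary_text']}")
```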
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role – AIML Data Scientist
Location: Coimbatore
Mode of Interview: In Person

Job Description
Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges
Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques best solve the problem
Improve model accuracy to deliver greater business impact
Estimate the business impact of deploying a model
Work with the domain/customer teams to understand business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge
Work with tools and scripts for pre-processing data and feature engineering for model development – Python / R / SQL / cloud data pipelines
Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
Experience using Deep Learning models with text, speech, image, and video data
Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
Knowledge of state-of-the-art Deep Learning algorithms
Optimize and tune Deep Learning models for the best possible accuracy
Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., Power BI / Tableau
Work with application teams to deploy models on the cloud as a service or on-prem
Deploy models in a test/control framework for tracking
Build CI/CD pipelines for ML model deployment
Integrate AI & ML models with other applications using REST APIs and other connector technologies
Constantly upskill and stay current with the latest techniques and best practices
Write white papers and create demonstrable assets to summarize the AI/ML work and its impact

Technology/Subject Matter Expertise
Sufficient expertise in machine learning and the mathematical and statistical sciences
Use of versioning and collaboration tools like Git / GitHub
Good understanding of the landscape of AI solutions – cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Develop prototype-level ideas into solutions that can scale to industrial-grade strength
Ability to quantify and estimate the impact of ML models

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground
Must have the ability to share, explain, and “sell” their thoughts, processes, ideas, and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs for solving the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
Keen contributor to open-source communities and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development and application of Reinforcement Learning
Knowledge of optimization/genetic algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Optimize and tune Deep Learning models for the best possible accuracy
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy
Experience working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus
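For illustration only (not part of the posting above): a tiny, self-contained text-classification example in the spirit of the NLP work described, using TF-IDF features and logistic regression from scikit-learn. The labelled sentences are invented placeholders, not project data.

```python
# Minimal text-classification sketch: TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The invoice amount does not match the purchase order",
    "Please reset my account password",
    "Payment was charged twice this month",
    "I cannot log in to the portal",
]
labels = ["billing", "access", "billing", "access"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Why was my card billed twice?"]))   # expected: ['billing']
```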
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
We are looking for a Manager, ICS Complaint Program Reporting and Insights, with a specific focus on establishing the Reporting and Business Insights workstream for the Complaints Program in line with the requirements of AEMP71. The role involves extensive collaboration with multiple partners across the Global Servicing Group, international markets and legal entities, and ICS Control Management.

The Manager – ICS Complaint Program Reporting and Insights will:
Lead and develop the ICS Complaints Reporting and Insights program
Establish the analytics, insights, and regulatory reporting for the ICS Complaints Program
Collaborate directly with senior leaders to help them understand complaint trends and how they can respond to them
Identify complaint themes leveraging data insights and refer them to ICS and LE leadership as appropriate
Proactively analyze risk trends, undertake root-cause analysis, and provide consultative support to business and stakeholders
Ensure all regulatory requests are managed with 100% accuracy and timeliness

The Manager, Complaints Reporting and Insights will:
Design, build, and maintain dashboards and automated reports leveraging ICS Complaints data
Analyze complaint data to help identify root causes, areas of concern, and potential issues
Compile thematic risk reporting (levels, trends, causes) to provide actionable and meaningful insights to the business on current risk levels, emerging trends, and root causes
Translate complex data into concise, impactful visualizations and presentations for decision making
Proactively identify opportunities to improve data quality, reporting processes, and analytical capabilities
Utilize Natural Language Processing and generative AI tools to automate report generation, summarize data insights, and improve data storytelling
Collaborate with stakeholders to define KPIs, reporting needs, and performance metrics
Research and implement AI-driven BI innovations to continuously enhance business insights and reporting best practices

Required Qualifications:
8+ years of experience in Data Analytics, generating Business Insights, or a similar role
Proficient analytical and problem-solving skills, with an ability to analyze data, identify trends, and evaluate risk scenarios effectively
Hands-on experience with Python, R, Tableau (Developer or Desktop Certified Professional), Power BI, Cornerstone, SQL, HIVE, and advanced MS Excel (Macros, Pivots)
Hands-on experience with AI/ML frameworks, NLP, sentiment analysis, text summarization, etc.
Strong analytical, critical thinking, and problem-solving skills
Ability to communicate complex findings clearly to both technical and non-technical audiences

Preferred Qualifications:
Bachelor’s degree in Business, Risk Management, Statistics, Computer Science, or a related field; advanced degrees (e.g., MBA, MSc) or certifications are advantageous
Experience in at least one of the following:
- Providing identification of operational risks throughout business processes and systems
- Enhancing risk assessments and associated methodologies
- Reviewing and creating thematic risk reporting to provide actionable insights into risk levels, emerging trends, and root causes
Experience in the financial services industry
Experience in Big Data and Data Science will be a definite advantage
Familiarity with ERP systems or business process tools
Knowledge of predictive analytics
Posted 2 weeks ago
0 years
0 Lacs
Gurgaon
On-site
Tech Stack: Cloud technologies, Python, ML libraries, prompt tuning and few-shot learning techniques, RAG, ReactJS, NodeJS, etc.

What we are looking for
As the Technology Analyst, you’ll leverage cutting-edge cloud-based solutions such as AWS and Azure. We are seeking an individual who not only possesses the requisite technical expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What you’ll do
▪ Collaborate with cross-functional teams and domain experts to design and build AI-powered solutions (e.g., GenAI chatbots, summarizers, recommendation engines)
▪ Work with LLMs, prompt engineering, and Retrieval Augmented Generation (RAG) frameworks
▪ Analyze data and extract meaningful insights using Python, SQL, and ML libraries
▪ Prototype, test, and fine-tune AI models for tasks like classification, entity extraction, and summarization
▪ Support end-to-end implementation from ideation to deployment while ensuring scalability and performance

Must have
Bachelor's degree in engineering, preferably in CS, IT, or electronics, with a record of academic excellence
Strong foundation in Python and hands-on experience with AI/ML concepts
Familiarity with tools like Hugging Face, LangChain, OpenAI APIs, or similar is a plus
Interest in applying AI to practical use cases (bonus if you’ve worked on GenAI projects or built a chatbot)
Problem-solving mindset, strong communication skills, and eagerness to learn
Ability to thrive in a fast-paced, collaborative environment
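For illustration only (not part of the posting above): the tech stack mentions prompt tuning and few-shot learning. A minimal few-shot prompting sketch with the OpenAI Python SDK follows; the model name, example texts, and expected output are assumptions, and an OPENAI_API_KEY environment variable is presumed to be set.

```python
# Illustrative few-shot prompt for a simple entity-extraction task, using the
# OpenAI Python SDK (v1 client). Model name and examples are placeholders.
from openai import OpenAI

client = OpenAI()

few_shot = [
    {"role": "system", "content": "Extract company names from the text as a JSON list."},
    {"role": "user", "content": "Acme Corp signed a deal with Globex last week."},
    {"role": "assistant", "content": '["Acme Corp", "Globex"]'},
    {"role": "user", "content": "Initech is migrating its workloads to the cloud."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=few_shot)
print(response.choices[0].message.content)   # expected: a JSON list such as ["Initech"]
```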
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
The candidate should have experience in AI development, including developing, deploying, and optimizing AI and Generative AI solutions. The ideal candidate will have a strong technical background, hands-on experience with modern AI tools and platforms, and a proven ability to build innovative applications that leverage advanced AI techniques. You will work collaboratively with cross-functional teams to deliver AI-driven products and services that meet business needs and delight end users.

Job Duties and Responsibilities:
Define and maintain project plans, schedules, requirements, and documentation for product deliverables.
Define project scope, deliverables, roles, and responsibilities in collaboration with the Product Owner and stakeholders, as per the defined organization framework (XPMC).
Follow agile methodology and maintain the team dashboard (Kanban) – assign work to the project team, and track and monitor team deliverables.
Provide recommendations based on best practices and industry standards.
Work closely with the team to ensure adherence to schedule timelines.

Key Prerequisites
Experience in AI and Generative AI development
Experience designing, developing, and deploying AI models for various use cases, such as predictive analytics, recommendation systems, and natural language processing (NLP)
Experience building and fine-tuning Generative AI models for applications like chatbots, text summarization, content generation, and image synthesis
Experience implementing and optimizing large language models (LLMs) and transformer-based architectures (e.g., GPT, BERT)
Experience in data ingestion and cleaning, feature engineering, and data engineering
Experience designing and implementing data pipelines for ingesting, processing, and storing large datasets
Experience in model training and optimization
Exposure to deep learning models and fine-tuning pre-trained models using frameworks like TensorFlow, PyTorch, or Hugging Face
Exposure to optimizing models for performance, scalability, and cost-efficiency on cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI)
Hands-on experience in monitoring and improving model performance through retraining and evaluation metrics like accuracy, precision, and recall

AI Tools and Platform Expertise
OpenAI, Hugging Face
MLOps tools
Generative AI-specific tools and libraries for innovative applications

Technical Skills
Strong programming skills in Python (preferred) or other languages like Java, R, or Julia.
Expertise in AI frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, and Hugging Face.
Proficiency in working with transformer-based models (e.g., GPT, BERT, T5, DALL-E).
Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization tools (Docker, Kubernetes).
Solid understanding of databases (SQL, NoSQL) and big data processing tools (Spark, Hadoop).
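For illustration only (not part of the posting above): the prerequisites mention evaluating model performance with accuracy, precision, and recall. A minimal sketch of computing those metrics with scikit-learn follows; the label arrays are invented placeholders.

```python
# Minimal sketch of the evaluation metrics named above for a binary classifier.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels (placeholder data)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions (placeholder data)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
```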
Posted 2 weeks ago
25.0 years
0 Lacs
India
Remote
Welo Data works with technology companies to provide datasets that are high-quality, ethically sourced, relevant, diverse, and scalable to supercharge their AI models. As a Welocalize brand, Welo Data leverages over 25 years of experience in partnering with the world’s most innovative companies and brings together a curated global community of over 500,000 AI training and domain experts to offer services that span:

ANNOTATION & LABELLING: Transcription, summarization, image and video classification and labeling.
ENHANCING LLMs: Prompt engineering, SFT, RLHF, red teaming and adversarial model training, model output ranking.
DATA COLLECTION & GENERATION: From institutional languages to remote field audio collection.
RELEVANCE & INTENT: Culturally nuanced and aware ranking, relevance, and evaluation to train models for search, ads, and LLM output.

Want to join our Welo Data team? We bring practical, applied AI expertise to projects. We have both strong academic experience and a deep working knowledge of state-of-the-art AI tools, frameworks, and best practices. Help us elevate our clients' data at Welo Data.

Shape the Future of AI — On Your Terms
At Welo Data, we’re reimagining how people and machines understand each other. As part of the Welocalize family, we partner with leading global companies to power inclusive, human-centered AI — built on high-quality language data. We’re building a global network of talented linguists, language enthusiasts, and culturally curious contributors ready to shape the next wave of technology through the power of language. This is your space to grow, learn, and connect on your schedule.

Join Our Talent Community
Whether you're a professional linguist or just passionate about how language and technology intersect, Welo Data welcomes you. By joining our talent pool, you’ll be first in line for future task-based projects in areas like annotation, evaluation, and prompt creation. When a suitable opportunity opens up, we’ll invite you to a short qualification process, which may include training, assessments, or onboarding steps depending on the project.

Who We're Looking For:
- Native or near-native fluency in Hindi (Romanized)
- Proficient in English (written and spoken)
- Comfortable using digital tools and working remotely
- Naturally detail-oriented, curious, and eager to learn
- Open to working on a wide variety of language-focused tasks

Why Choose Welo Data?
- Limitless You – Work on your terms. Whether you're just starting out or deepening your expertise, Welo Data gives you the flexibility to grow your skills, explore new projects, and balance life on your own schedule.
- Limitless AI – Be part of the technology revolution. Your contributions will help train and improve AI systems that touch millions of lives, making them more inclusive, intelligent, and human-centered.
- Be Part of Us – Join a vibrant, global community of language lovers, technologists, and creatives working together to shape a more connected world.
- Opportunity – Be the first to access projects that match your skills and availability.

If you're passionate about language, technology, and shaping the future of AI, we want to hear from you. Apply now by answering a few quick questions to join our community.
📬 Got questions? Reach out to us at JobPosting@welocalize.com
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are a technology-led healthcare solutions provider. We are driven by our purpose to enable healthcare organizations to be future-ready. We offer accelerated, global growth opportunities for talent that’s bold, industrious, and nimble. With Indegene, you gain a unique career experience that celebrates entrepreneurship and is guided by passion, innovation, collaboration, and empathy. To explore exciting opportunities at the convergence of healthcare and technology, check out www.careers.indegene.com

Looking to jump-start your career?
We understand how important the first few years of your career are, which create the foundation of your entire professional journey. At Indegene, we promise you a differentiated career experience. You will not only work at the exciting intersection of healthcare and technology but also be mentored by some of the most brilliant minds in the industry. We are offering a global fast-track career where you can grow along with Indegene’s high-speed growth. We are purpose-driven. We enable healthcare organizations to be future-ready, and our customer obsession is our driving force. We ensure that our customers achieve what they truly want. We are bold in our actions, nimble in our decision-making, and industrious in the way we work. If this excites you, then apply below.

Roles and responsibilities:
Responsible for authoring Clinical Evaluation Reports (CER), Clinical Evaluation Plans (CEP), Post-Market Surveillance Reports (PMSR), Periodic Safety Update Reports (PSUR), Annual Summary Reports (ASR), and Post-Market Clinical Follow-up Plans and Evaluation Reports (PMCFP/PMCFER)
Acquire knowledge of therapeutic areas, competitor devices, current clinical/market developments, and literature review processes, with the ability to keep abreast of current literature
Develop literature search and data extraction strategies for search, screening, and summarization of articles, and develop in-depth knowledge and understanding of current scientific literature
Participate in and/or perform comprehensive literature searches to support identified product lines and related clinical studies
Stay informed about applicable clinical landscapes and trends
Review literature search results; interpret and summarize risks, alternate therapies, and device-specific benefits; collect and summarize primary data to support risk assessment
Critically appraise scientific literature and write clinical summaries for products to elucidate the clinical problem and current treatment techniques
Evaluate data for similar competitor devices
Perform data fact checks of the documents authored
Collaborate with the project/program stakeholders for product information to develop quality content for CERs within the required timelines
Manage the assigned client account and ensure successful, on-time delivery of all deliverables listed in the SoW
Responsible for end-to-end technical execution of the project
Work with the PMO to ensure resources with the right skill set are assigned to the project
Work with the PMO on resource allocation and the end-to-end project plan
Act as client point of contact for day-to-day communication and project execution
Guide writers on end-to-end execution of the assigned deliverable
Handle day-to-day communication with primary and supporting writers for project execution
Review queries for project kick-off meetings and status update calls
Review the assets tracker
Guide team members in product understanding, gathering inputs, literature search strategy and systematic literature review, literature screening, and data extraction
Work as subject matter expert for reviewing, revising, and improving the quality of scientific content and content created by primary writers

Skills:
Experience in leading a team
Experience in creating process flows, SOPs, and templates
Good understanding of medical devices and an overall understanding of the medical field
In-depth knowledge of EU MDR, MEDDEV 2.7.1 Rev 4, IMDRF, and MDCG; ability to translate client requirements and apply them in drafting CE documents
Good knowledge of EU MDR specifics related to clinical evaluation, clinical risks and benefits, safety and performance, etc.
Strong flair and passion for technical writing
Strong written and verbal communication/presentation skills
Being up to date with the latest technical/scientific developments and relating them to various projects
Ability to understand client requirements and KPIs

Qualifications:
Graduate or Postgraduate in Life Sciences (Pharm.D/M.Pharm/BDS/MBBS) or Biomedical Engineering with 5 to 7 years of experience
4+ years of experience in the medical device clinical affairs domain
Sound experience in the application of therapeutic and device knowledge for the development of clinical evaluation documents
Ability to identify critical information needs and identify roles/individuals to involve for decision making within clinical evaluation assessment and report development
Strong experience in conducting literature searches, reviews, and appraisal of scientific data
Clear and effective communication, both verbal and written
Excellent critical and analytical thinking skills
Review experience in clinical evaluation (CEP/CER/SSCP) and post-market deliverables (PMSR/PSUR/PMCFP/PMCFER), and IVDR documents (PEP/PER)
High level of attention to detail and accuracy
Able to work effectively with cross-functional teams
Able to manage multiple projects across numerous disciplines
Strong communication, presentation, and interpersonal skills with high attention to detail and organization
Consistent dedication and a strong work ethic to help meet aggressive timelines or multiple projects when necessary
People management, with the ability to manage a team of 3-5 writers

EQUAL OPPORTUNITY
Indegene is proud to be an Equal Employment Employer and is committed to a culture of inclusion and diversity. We do not discriminate on the basis of race, religion, sex, colour, age, national origin, pregnancy, sexual orientation, physical ability, or any other characteristics. All employment decisions, from hiring to separation, will be based on business requirements and the candidate’s merit and qualifications. We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, gender identity, sexual orientation, disability status, protected veteran status, or any other characteristics.
Posted 2 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview:
We are seeking a talented and motivated AI Engineer with expertise in Large Language Models (LLMs), Natural Language Processing (NLP), and speech-to-text technologies. As part of our dynamic team, you will develop, implement, and optimize cutting-edge AI solutions to improve our products and services. Your work will focus on leveraging language models, building NLP systems, and integrating speech-to-text technologies for seamless communication and enhanced user experiences.

Location: Hyderabad (5 days work from office)
Working Days: Sunday - Thursday
Timings: 10 AM - 6 PM

Key Responsibilities:

LLM Development & Integration:
Fine-tune and deploy large language models for specific applications, such as chatbots, content generation, and customer support
Evaluate and improve the performance of LLMs in real-world scenarios

NLP System Design:
Design and implement NLP algorithms for tasks like text classification, sentiment analysis, entity recognition, and summarization
Work with large datasets to train and validate NLP models
Collaborate with cross-functional teams to identify and address language-based challenges

Speech-to-Text Implementation:
Develop and optimize speech-to-text pipelines for various languages and dialects
Integrate speech recognition systems with NLP and LLM solutions for end-to-end functionality
Stay updated on the latest advancements in automatic speech recognition (ASR)

Performance Optimization:
Enhance AI model efficiency for scalability and real-time processing
Address biases, improve accuracy, and ensure robustness in all models

Research and Innovation:
Stay abreast of the latest research in LLM, NLP, and speech technologies
Experiment with emerging techniques and integrate them into company solutions

Documentation and Collaboration:
Maintain comprehensive documentation for models, processes, and systems
Collaborate with product managers, software engineers, and other stakeholders

Requirements

Qualifications:
Minimum 8+ years of experience required
Bachelor's/Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field
Proven experience in LLM development (e.g., OpenAI, GPT, or similar frameworks)
Strong understanding of NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face)
Hands-on experience with speech-to-text systems like Google Speech API, Whisper, or similar technologies
Proficiency in programming languages such as Python, along with frameworks like TensorFlow or PyTorch
Strong problem-solving skills, a collaborative mindset, and the ability to manage multiple projects simultaneously
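For illustration only (not part of the posting above): a minimal speech-to-text sketch with the open-source Whisper package named in the qualifications. The audio file path and model size are placeholders; the openai-whisper package and ffmpeg are assumed to be installed.

```python
# Minimal speech-to-text sketch with open-source Whisper. The audio path is a
# hypothetical placeholder; install with `pip install openai-whisper`.
import whisper

model = whisper.load_model("base")                 # small, CPU-friendly checkpoint
result = model.transcribe("customer_call.mp3")     # hypothetical audio file

print(result["text"])                              # transcript to feed into NLP/LLM steps
```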
Posted 2 weeks ago
5.0 years
0 Lacs
India
Remote
Summary of Position
Teladoc Health’s Internal Audit function provides independent, objective assurance and consulting services designed to add value and improve Teladoc’s operations. The IT Audit Manager assists Internal Audit senior management, the Board of Directors, and company senior management in the effective discharge of their responsibilities by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of internal controls, risk management, and governance processes. Further, the Senior IT Auditor will assist and lead internal audit projects with a focus on the examination and analysis of IT processes, risks, and internal controls supporting the digital, financial, operational, compliance, and strategic aspects of the company.

Shift time: 05:00 PM IST - 02:00 AM IST
100% remote work

Essential Duties and Responsibilities
Develop internal audit methodologies and contribute to the annual internal audit plan.
Plan, organize, and conduct internal audit projects in alignment with the annual plan or as requested by management or the Audit Committee.
Support activities related to the company's assessment of Internal Controls Over Financial Reporting (ICFR), including conducting IT process walkthroughs, testing, and summarization of results to support our Sarbanes-Oxley (SOX) program. This is an individual contributor role that will lead SOX testing program areas such as scoping, scheduling, stakeholder management, workpaper review, etc.
Ensure that documentation supporting audit testing is sufficient, competent, and relevant to support conclusions.
Prepare high-quality internal audit workpapers and reports that accurately reflect the audit work performed.
Identify and monitor internal control gaps or outstanding issues within IT procedures, processes, or systems, ensuring appropriate remedial action.
Collaborate with process and control owners and external audit personnel throughout the audit lifecycle.
Educate and advise process and control owners on internal control requirements and promote awareness of internal audit within the organization.
Stay updated on business and IT activities, accounting standards, and industry developments.
Communicate business insights, impacts, and actionable recommendations to management.
Work with internal audit leadership to identify current and emerging risks facing the organization.
Identify opportunities to promote efficiencies using data analytics and automation.
Assist in departmental projects, strategic initiatives, and investigations as needed.
The time spent on each responsibility reflects an estimate and is subject to change dependent on business needs.

Supervisory Responsibilities
No

Qualifications Expected for Position
Minimum of 5+ years of IT audit experience in public accounting and/or internal audit, preferably with a publicly traded company, with experience in SOX testing.
Bachelor's degree in Accounting, Finance, Information Systems, Computer Science, or a related field; a Master's degree is a plus.
Fundamental understanding of core information technology processes and systems.
Knowledge of internal control concepts and frameworks (COSO, COBIT), Sarbanes-Oxley standards, and auditing processes.
Extensive experience auditing Sarbanes-Oxley (SOX) IT General Controls (ITGC) and IT Automated Controls (ITAC), including testing the completeness and accuracy (C&A) of key reports supporting business processes.
Experience with testing various systems and technologies, such as ERP systems, cloud technologies, and other enterprise applications.
Strong interpersonal, analytical, communication, and organizational skills (written and verbal).
Ability to work independently with limited supervision.
Strong work ethic, self-accountability, and high standards of ethical conduct.
Experience coaching and mentoring junior team members.
The above qualifications, knowledge, experience, and/or background are expected but not required for this role.
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As an AI Product Manager, you will oversee project delivery to ensure high quality and client satisfaction, manage client relationships, and lead high-performing teams.

Tech Skills - intermediate to advanced knowledge in:
AI systems and integration of AI systems with workflows
GenAI technologies (information retrieval, summarization, orchestration) and agentic AI platforms
Full-stack product design
Project management methodologies and tools
Probability and statistics
Practical machine learning, including key pitfalls and approaches to address them
Deep learning algorithms
SQL and Python
MS Office applications – Excel and PowerPoint

Non-Tech Skills:
Strong business acumen with an ability to assess the financial impact of decisions – both in the operations of running the delivery team and in the context of delivering solutions to clients
Ability to recognize pragmatic alternatives vis-à-vis a perfect solution and get the delivery teams on board to pursue them, balancing priorities of time with potential business impact
Strong people skills, including conflict resolution, empathy, communication, listening, and negotiation
Leadership and mentorship to the delivery team
Ability to storyboard presentations effectively and hold conversations with senior executives in client businesses
Self-driven with a strong sense of ownership
Solution proposals, collaborating with growth, customer success, and central solutioning functions
Ability to hold conversations with senior and C-suite level executives in client businesses
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Verification of tax invoices against the supporting documents provided by the client
Analyze the Sales MIS and provide summarized information on sales done in the reporting period, trend analysis (volume and price achieved), registrations done, amount collected, etc.
Comment on the pricing achieved (including the breakup between base price and additional charges) – highlight cases where the transaction price is an outlier
Review sales cancellation data and comment on the same (based on discussions with the Developer’s representative)
Review the issuance of booking and allotment letters and the registration status of sold units
Review the cash flow for the project during the review period; provide a summarized cash flow statement of the project – from the date of monitoring by the consultant as well as for the reporting period (data to be provided by the Developer and reviewed by the consultant)
Compare and trace Sales MIS figures (collections) with the books of accounts and bank statements – highlight and comment on variances observed
Audit the account statements of each of the accounts to validate the collections received
Conduct continuous monitoring and reporting of Sales, CRM, and project outflow activities
Monitor collections from customers in non-escrow accounts
Conduct monthly reviews and reporting on sales and inflows, as well as outflows for brokerage costs, marketing costs, and tenant payments

Skills: finance, MIS reporting, cash flow analysis, booking & allotment letter review, financial reporting, real estate advisory – technical & regulatory due diligence and monitoring, financial analysis, sales trend analysis, variance analysis, CRM monitoring, cash flow management, customer collection monitoring, pricing analysis, financial due diligence, financial auditing, data summarization, sales cancellation review, tax invoice verification, cash flow review, sales analysis, finance closure, asset management, continuous monitoring and reporting, project audit, sales MIS analysis, project monitoring, bank statements, trend analysis
Posted 2 weeks ago
0 years
3 - 5 Lacs
Chennai
On-site
Primary Responsibilities:
Design and develop AI-driven web applications using Streamlit and LangChain.
Implement multi-agent workflows with LangGraph.
Integrate Claude 3 (via AWS Bedrock) into intelligent systems for document and image processing.
Work with FAISS for vector search and similarity matching.
Develop document integration solutions for PDF, DOCX, XLSX, PPTX, and image-based formats.
Implement OCR and summarization features using EasyOCR, PyMuPDF, and AI models.
Create features such as spell-check, chatbot accuracy tracking, and automatic re-training pipelines.
Build secure apps with SSO authentication, transcript downloads, and reference link generation.
Integrate external platforms like Confluence, SharePoint, ServiceNow, Veeva Vault, Outlook, G.Net/G.Share, and JIRA.
Collaborate on architecture, performance optimization, and deployment.

Required Skills:
Strong expertise in Streamlit, LangChain, LangGraph, and Claude 3 (AWS Bedrock).
Hands-on experience with boto3, FAISS, EasyOCR, and PyMuPDF.
Advanced skills in document parsing and image/video-to-text summarization.
Proficient in modular architecture design and real-time AI response systems.
Experience in enterprise integration with tools like ServiceNow, Confluence, Outlook, and JIRA.
Familiar with chatbot monitoring and retraining strategies.

Secondary Skills:
Working knowledge of PostgreSQL, JSON, and file I/O with Python libraries like os, io, time, datetime, and typing.
Experience with dataclasses and numpy for efficient data handling and numerical processing.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us.
Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
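For illustration only (not part of the posting above): a rough sketch of the PDF-summarization flow this role describes, extracting text with PyMuPDF and requesting a summary from Claude 3 on AWS Bedrock via boto3. The file path, model ID, and region are assumptions, valid AWS credentials and Bedrock model access are presumed, and the request body follows the Anthropic messages format as I understand it.

```python
# Hedged sketch: extract PDF text with PyMuPDF, then summarize via Claude 3 on
# AWS Bedrock (boto3). File path, model ID, and region are placeholders.
import json

import boto3
import fitz  # PyMuPDF

doc = fitz.open("policy_document.pdf")                     # hypothetical input file
text = "\n".join(page.get_text() for page in doc)[:8000]   # keep the prompt small

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 400,
    "messages": [{"role": "user",
                  "content": f"Summarize this document in five bullet points:\n\n{text}"}],
}
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",     # example Claude 3 model ID
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```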
Posted 2 weeks ago
6.0 - 8.0 years
20 - 30 Lacs
Thāne
On-site
Key Responsibilities:
Develop and fine-tune LLMs (e.g., GPT-4, Claude, LLaMA, Mistral, Gemini) using instruction tuning, prompt engineering, chain-of-thought prompting, and fine-tuning techniques.
Build RAG pipelines: Implement Retrieval-Augmented Generation solutions leveraging embeddings, chunking strategies, and vector databases like FAISS, Pinecone, Weaviate, and Qdrant.
Implement and orchestrate agents: Utilize frameworks like MCP, OpenAI Agent SDK, LangChain, LlamaIndex, Haystack, and DSPy to build dynamic multi-agent systems and serverless GenAI applications.
Deploy models at scale: Manage model deployment using HuggingFace, Azure Web Apps, vLLM, and Ollama, including handling local models with GGUF, LoRA/QLoRA, PEFT, and quantization methods.
Integrate APIs: Seamlessly integrate with APIs from OpenAI, Anthropic, Cohere, Azure, and other GenAI providers.
Ensure security and compliance: Implement guardrails, perform PII redaction, ensure secure deployments, and monitor model performance using advanced observability tools.
Optimize and monitor: Lead LLMOps practices focusing on performance monitoring, cost optimization, and model evaluation.
Work with AWS services: Hands-on usage of AWS Bedrock, SageMaker, S3, Lambda, API Gateway, IAM, CloudWatch, and serverless computing to deploy and manage scalable AI solutions.
Contribute to use cases: Develop AI-driven solutions like AI copilots, enterprise search engines, summarizers, and intelligent function-calling systems.
Cross-functional collaboration: Work closely with product, data, and DevOps teams to deliver scalable and secure AI products.

Required Skills and Experience:
Deep knowledge of LLMs and foundational models (GPT-4, Claude, Mistral, LLaMA, Gemini).
Strong expertise in prompt engineering, chain-of-thought reasoning, and fine-tuning methods.
Proven experience building RAG pipelines and working with modern vector stores (FAISS, Pinecone, Weaviate, Qdrant).
Hands-on proficiency in LangChain, LlamaIndex, Haystack, and DSPy frameworks.
Model deployment skills using HuggingFace, vLLM, and Ollama, and handling LoRA/QLoRA, PEFT, and GGUF models.
Practical experience with AWS serverless services: Lambda, S3, API Gateway, IAM, CloudWatch.
Strong coding ability in Python or similar programming languages.
Experience with MLOps/LLMOps for monitoring, evaluation, and cost management.
Familiarity with security standards: guardrails, PII protection, secure API interactions.
Use case delivery experience: proven record of delivering AI copilots, summarization engines, or enterprise GenAI applications.

Experience
6-8 years of experience in AI/ML roles, focusing on LLM agent development, data science workflows, and system deployment.
Demonstrated experience in designing domain-specific AI systems and integrating structured/unstructured data into AI models.
Proficiency in designing scalable solutions using LangChain and vector databases.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹3,000,000.00 per year
Benefits: Health insurance
Schedule: Monday to Friday
Work Location: In person
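For illustration only (not part of the posting above): a hedged sketch of attaching a LoRA adapter with PEFT, one of the fine-tuning techniques listed in the responsibilities. GPT-2 stands in as a small base model; real work would target a larger LLM and add a dataset and Trainer, which are omitted here.

```python
# Hedged LoRA/PEFT sketch: wrap a small causal LM with a low-rank adapter so
# only the adapter weights are trainable. Hyperparameters are illustrative.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in base model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # only the adapter weights are trainable
```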
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Tech Stack: Cloud technologies, Python, ML libraries, prompt tuning and few-shot learning techniques, RAG, ReactJS, NodeJS, etc.

What We Are Looking For
As the Technology Analyst, you’ll leverage cutting-edge cloud-based solutions such as AWS and Azure. We are seeking an individual who not only possesses the requisite technical expertise but also thrives in the dynamic landscape of a fast-paced global firm.

What You’ll Do
▪ Collaborate with cross-functional teams and domain experts to design and build AI-powered solutions (e.g., GenAI chatbots, summarizers, recommendation engines)
▪ Work with LLMs, prompt engineering, and Retrieval-Augmented Generation (RAG) frameworks
▪ Analyze data and extract meaningful insights using Python, SQL, and ML libraries
▪ Prototype, test, and fine-tune AI models for tasks like classification, entity extraction, and summarization
▪ Support end-to-end implementation from ideation to deployment while ensuring scalability and performance

Must have
Bachelor's degree in engineering, preferably in CS, IT, or electronics, with a record of academic excellence
Strong foundation in Python and hands-on experience with AI/ML concepts
Familiarity with tools like Hugging Face, LangChain, OpenAI APIs, or similar is a plus
Interest in applying AI to practical use cases (bonus if you’ve worked on GenAI projects or built a chatbot)
Problem-solving mindset, strong communication skills, and eagerness to learn
Ability to thrive in a fast-paced, collaborative environment

Note: This is a paid internship.

Skills: NodeJS, ReactJS, ML libraries, prompt tuning, SQL, few-shot learning techniques, cloud technologies, RAG, Python
Posted 2 weeks ago
7.0 - 10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description: About Us* At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services* Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* Global Finance was set up in 2007 as a part of the CFO Global Delivery strategy to provide offshore delivery to Line of Business and Enterprise Finance functions. The capabilities hosted include Legal Entity Controllership, General Accounting & Reconciliations, Regulatory Reporting, Operational Risk and Control Oversight, Finance Systems Support, etc. Our group supports the Finance function for the Consumer Banking, Wealth and Investment Management business teams across Financial Planning and Analysis, period end close, management reporting and data analysis. Job Description* Candidate will be responsible for developing & validating dashboards and business reports using Emerging Technology tools like MicroStrategy, Tableau, Alteryx, etc. The candidate will be responsible for delivering complex and time critical data mining and analytical projects for the Consumer & Small Business Banking, lending products such as Credit Cards and in addition will be responsible for analysis of data for decision making by senior leadership. Candidate will be responsible for data management, data extraction and upload, data validation, scheduling & process automation, report preparation, etc. The individual will play a key role in the team responsible for financial data reporting, adhoc reporting & data requirements, data analytics & business analysis and would manage multiple projects in parallel by ensuring adequate understanding of the requirements and deliver data driven insights and solutions to complex business problems. These projects would be time critical which would require the candidate to comprehend & evaluate the strategic business drivers to bring in efficiencies through automation of existing reporting packages or codes. 
The work would be a mix of standard and ad-hoc deliverables based on dynamic business requirements. Technical competency is essential to build processes which ensure data quality and completeness across all projects/requests related to the business. The core responsibility of this individual is process management to achieve sustainable, accurate and well-controlled results. The candidate should have a clear understanding of the end-to-end process (including its purpose) and a discipline of managing and continuously improving those processes. Responsibilities* Preparation and maintenance of various KPI reporting (consumer lending such as Credit Cards), including performing data- or business-driven deep-dive analysis. Credit Card rewards reporting, data mining & analytics. Understand business requirements and translate those into deliverables. Support the business on periodic and ad-hoc projects related to consumer lending products. Develop and maintain code for data extraction, manipulation, and summarization using tools such as SQL, SAS, and emerging technologies like MicroStrategy, Tableau and Alteryx. Design solutions, generate actionable insights, optimize existing processes, build tool-based automations, and ensure overall program governance. Managing and improving the work: develop a full understanding of the work processes, maintain a continuous focus on process improvement through simplification, innovation, and use of emerging technology tools, and understand data sourcing and transformation. Managing risk: manage and reduce risk, proactively identify risks, issues and concerns, and manage controls to help drive responsible growth (e.g., compliance, procedures, data management, etc.); establish a risk culture to encourage early escalation and self-identification of issues. Effective communication: deliver transparent, concise, and consistent messaging while influencing change in the teams. Extremely good with numbers and able to present various business/finance metrics, detailed analysis, and key observations to Senior Business Leaders. Requirements* Education* Master’s/Bachelor’s degree in Information Technology, Computer Science, or MCA, or an MBA in Finance. Experience Range* 7-10 years of relevant work experience in data analytics & reporting, business analysis & financial reporting in the banking or credit card industry. Exposure to consumer banking businesses would be an added advantage. Experience in credit card reporting & analytics would be preferable. Foundational skills* Strong abilities in data extraction, data manipulation and business analysis, and strong financial acumen. Strong computer skills, including MS Excel, Teradata SQL, SAS and emerging technologies like MicroStrategy, Alteryx, Tableau. Prior banking and financial services industry experience, preferably retail banking and credit cards. Strong business problem-solving skills and ability to deliver on analytics projects independently, from initial structuring to final presentation. Strong communication skills (both verbal and written), interpersonal skills and relationship management skills to navigate the complexities of aligning stakeholders, building consensus, and resolving conflicts. Proficiency in Base SAS, Macros, and SAS Enterprise Guide. Querying data from multiple sources. Experience in data extraction, transformation & loading using SQL/SAS. Proven ability to manage multiple and often competing priorities in a global environment. Manages operational risk by building strong processes and quality control routines.
SQL: Querying data from multiple sources. Data Quality and Governance: Ability to clean, validate and ensure data accuracy and integrity. Troubleshooting: Expertise in debugging and optimizing SAS and SQL code. Desired Skills Ability to effectively manage multiple priorities under pressure and deliver, as well as adapt to changes. Able to work in a fast-paced, deadline-oriented environment. Multiple stakeholder management. Attention to detail: Strong focus on data accuracy and documentation. Work Timings* 11:30 pm to 8:30 pm (will require stretching 7-8 days in a month to meet critical deadlines) Job Location* Mumbai Show more Show less
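To illustrate the kind of data extraction and KPI summarization work this role describes, here is a minimal, self-contained Python sketch using SQL plus pandas. The table, columns, and the in-memory SQLite database are hypothetical stand-ins; a production setup would query Teradata or SAS datasets instead.

```python
import sqlite3
import pandas as pd

# Hypothetical stand-in for a warehouse table; a real pipeline would connect
# to Teradata/SAS sources rather than an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "month": ["2024-01", "2024-01", "2024-02"],
    "product": ["credit_card", "credit_card", "credit_card"],
    "spend": [1200.0, 800.0, 950.0],
    "rewards_paid": [12.0, 8.0, 9.5],
}).to_sql("card_transactions", conn, index=False)

# Extract with SQL, then summarize a simple rewards-rate KPI per month.
df = pd.read_sql("SELECT month, spend, rewards_paid FROM card_transactions", conn)
kpi = (
    df.groupby("month")
      .agg(total_spend=("spend", "sum"), total_rewards=("rewards_paid", "sum"))
      .assign(rewards_rate=lambda x: x.total_rewards / x.total_spend)
)
print(kpi)
```

The same pattern (query, aggregate, publish) is what would typically be scheduled and fed into MicroStrategy, Tableau, or Alteryx for the reporting packages mentioned above.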
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
About Us We're building the world’s first AI Super-Assistant purpose-built for enterprises and professionals. Our platform is designed to supercharge productivity, automate workflows, and redefine the way teams work with AI. Our two core products: ChatLLM – Designed for professionals and small teams, offering conversational AI tailored for everyday productivity. Enterprise Platform – A robust, secure, and highly customizable platform for organizations seeking to integrate AI into every facet of their operations. We’re on a mission to redefine enterprise AI – and we’re looking for engineers ready to build the connective tissue between AI and the systems that power modern business. Role: Connector Integration Engineer – File Systems & Productivity Platforms As a Connector Integration Engineer focused on file systems, you’ll be responsible for building robust and secure integrations with enterprise content and collaboration platforms. You’ll enable our AI to retrieve, index, and interact with organizational documents, enabling powerful search, automation, and summarization features. What You’ll Do Build and manage connectors for productivity platforms including: SharePoint OneDrive Google Drive Work with Microsoft Graph APIs and related SDKs for document access Implement secure file access via OAuth2 and delegated permissions Enable metadata indexing and real-time syncing of document repositories Collaborate with product and AI teams to build document-aware AI experiences Troubleshoot access control issues, token lifecycles, and permission scopes Write reliable, maintainable backend code for secure data sync What We’re Looking For Experience building or working with SharePoint, OneDrive, or Google Drive APIs Strong understanding of document permissions, OAuth2, and delegated access Proficiency in backend languages like Python or TypeScript Ability to design integrations for structured and unstructured content Familiarity with API rate limits, refresh tokens, and file metadata models Solid communication skills and a bias for shipping clean, well-tested code Nice to Have Experience with Microsoft Graph SDK, webhook handling, or file system events Knowledge of indexing, search, or document summarization workflows Background in collaboration tools, SaaS products, or enterprise IT Exposure to modern security and compliance practices (SOC 2, ISO 27001) Candidates from top-tier tech environments or universities are encouraged to apply What We Offer Remote-first work environment Opportunity to shape the future of AI in the enterprise Work with a world-class team of AI researchers and product builders Flat team structure with real impact on product and direction $60,000 USD annual salary Ready to help our AI assistant work smarter with enterprise files? Join us – and power the world’s first AI Super-Assistant. Show more Show less
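As a rough illustration of the connector work described above, here is a minimal Python sketch that lists OneDrive file metadata through the Microsoft Graph REST API. It assumes an OAuth2 access token with delegated Files.Read permission has already been obtained; token acquisition (MSAL, refresh tokens) and error handling for permission scopes are omitted.

```python
import requests

GRAPH_ROOT = "https://graph.microsoft.com/v1.0"

def list_drive_items(access_token: str, path: str = "/me/drive/root/children"):
    """List files/folders in a OneDrive folder, following @odata.nextLink pages."""
    url = f"{GRAPH_ROOT}{path}"
    headers = {"Authorization": f"Bearer {access_token}"}
    items = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        items.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # pagination link, if more results exist
    return items

# Usage (token acquisition via MSAL/OAuth2 not shown):
# for item in list_drive_items(token):
#     print(item["name"], item.get("lastModifiedDateTime"))
```

Metadata such as names and modification timestamps returned here is the raw material for the indexing and real-time sync responsibilities listed in the role.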
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Design, develop, and deploy NLP systems using advanced LLM architectures (e.g., GPT, BERT, LLaMA, Mistral) tailored for real-world applications such as chatbots, document summarization, Q&A systems, and more. Implement and optimize RAG pipelines, combining LLMs with vector search engines (e.g., FAISS, Weaviate, Pinecone) to create context-aware, knowledge-grounded responses. Integrate external knowledge sources, including databases, APIs, and document repositories, to enrich language models with real-time or domain-specific information. Fine-tune and evaluate pre-trained LLMs, leveraging techniques like prompt engineering, LoRA, PEFT, and transfer learning to customize model behavior. Collaborate with data engineers and MLOps teams to ensure scalable deployment and monitoring of AI services in cloud environments (e.g., AWS, GCP, Azure). Build robust APIs and backend services to serve NLP/RAG models efficiently and securely. Conduct rigorous performance evaluation and model validation, including accuracy, latency, bias/fairness, and explainability (XAI). Stay current with advancements in AI research, particularly in generative AI, retrieval systems, prompt tuning, and hybrid modeling strategies. Participate in code reviews, documentation, and cross-functional team planning to ensure clean and maintainable code. Show more Show less
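To make the RAG responsibility above concrete, here is a minimal sketch of the retrieval half of such a pipeline using FAISS and a sentence-transformers embedding model. The documents, embedding model name, and prompt template are illustrative assumptions, and the generation step (sending the assembled prompt to an LLM) is left as a comment.

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Policy renewals can be completed online within 10 minutes.",
    "Claims are typically processed within 5 business days.",
    "Premium payments support UPI, cards, and net banking.",
]

# Embed the corpus and build an exact (flat) L2 index.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(docs).astype("float32")
index = faiss.IndexFlatL2(doc_vecs.shape[1])
index.add(doc_vecs)

def retrieve(query: str, k: int = 2):
    """Return the k documents closest to the query in embedding space."""
    q = embedder.encode([query]).astype("float32")
    _, idx = index.search(q, k)
    return [docs[i] for i in idx[0]]

question = "How long does claim processing take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The prompt would then be passed to an LLM of choice (GPT, LLaMA, Mistral, ...).
print(prompt)
```

Swapping the flat index for a managed vector store (Weaviate, Pinecone) changes the storage layer but not the retrieve-then-generate shape of the pipeline.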
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI Engineer Location: Gurgaon (On-site) Type: Full-Time Experience: 2–6 Years Role Overview We are seeking a hands-on AI Engineer to architect and deploy production-grade AI systems that power our real-time voice intelligence suite. You will lead AI model development, optimize low-latency inference pipelines, and integrate GenAI, ASR, and RAG systems into scalable platforms. This role combines deep technical expertise with team leadership and a strong product mindset. Key Responsibilities Build and deploy ASR models (e.g., Whisper, Wav2Vec2.0) and diarization systems for multi-lingual, real-time environments. Design and optimize GenAI pipelines using OpenAI, Gemini, LLaMA, and RAG frameworks (LangChain, LlamaIndex). Architect and implement vector database systems (FAISS, Pinecone, Weaviate) for knowledge retrieval and indexing. Fine-tune LLMs using SFT, LoRA, RLHF, and craft effective prompt strategies for summarization and recommendation tasks. Lead AI engineering team members and collaborate cross-functionally to ship robust, high-performance systems at scale. Preferred Qualifications 2–6 years of experience in AI/ML, with demonstrated deployment of NLP, GenAI, or STT models in production. Proficiency in Python, PyTorch/TensorFlow, and real-time architectures (WebSockets, Kafka). Strong grasp of transformer models, MLOps, and low-latency pipeline optimization. Bachelor’s/Master’s in CS, AI/ML, or related field from a reputed institution (IITs, BITS, IIITs, or equivalent). What We Offer Compensation: Competitive salary + equity + performance bonuses Ownership: Lead impactful AI modules across voice, NLP, and GenAI Growth: Work with top-tier mentors, advanced compute resources, and real-world scaling challenges Culture: High-trust, high-speed, outcome-driven startup environment Show more Show less
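For context on the ASR work this role centers on, here is a minimal sketch of offline speech-to-text with the open-source openai-whisper package. The audio file name and model size are assumptions; a real-time product would stream chunks through a lower-latency serving stack rather than batch-transcribe files.

```python
import whisper

# Load a small multilingual checkpoint; larger ones trade latency for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded call; language is auto-detected unless pinned explicitly.
result = model.transcribe("sales_call.wav")  # hypothetical file path
print(result["text"])

# Segment-level timestamps are useful for aligning with diarization output.
for seg in result["segments"]:
    print(f'{seg["start"]:.1f}s-{seg["end"]:.1f}s: {seg["text"]}')
```

Pairing these timestamps with a separate speaker-diarization pass is the usual route to per-speaker transcripts in multi-party calls.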
Posted 2 weeks ago
2.0 years
0 Lacs
Guindy, Tamil Nadu, India
On-site
Company Description Bytezera is a data services provider that specialises in AI and data solutions to help businesses maximise their data potential. With expertise in data-driven solution design, machine learning, AI, data engineering, and analytics, we empower organizations to make informed decisions and drive innovation. Our focus is on using data to achieve competitive advantage and transformation. About the Role We are seeking a highly skilled and hands-on AI Engineer to drive the development of cutting-edge AI applications using the latest in computer vision, STT, Large Language Models (LLMs), agentic frameworks, and Generative AI technologies. This role covers the full AI development lifecycle—from data preparation and model training to deployment and optimization—with a strong focus on NLP and open-source foundation models. You will be directly involved in building and deploying goal-driven, autonomous AI agents and scalable AI systems for real-world use cases. Key Responsibilities Computer Vision Development Design and implement advanced computer vision models for object detection, image segmentation, tracking, facial recognition, OCR, and video analysis. Fine-tune and deploy vision models using frameworks like PyTorch, TensorFlow, OpenCV, Detectron2, YOLO, MMDetection, etc. Optimize inference pipelines for real-time vision processing across edge devices, GPUs, or cloud-based systems. Speech-to-Text (STT) System Development Build and fine-tune ASR (Automatic Speech Recognition) models using toolkits such as Whisper, NVIDIA NeMo, DeepSpeech, Kaldi, or wav2vec 2.0. Develop multilingual and domain-specific STT pipelines optimized for real-time transcription and high accuracy. Integrate STT into downstream NLP pipelines or agentic systems for transcription, summarization, or intent recognition. LLM and Agentic AI Design & Development Build and deploy advanced LLM-based AI agents using frameworks such as LangGraph, CrewAI, AutoGen, and OpenAgents. Fine-tune and optimize open-source LLMs (e.g., GPT-4, LLaMA 3, Mistral, T5) for domain-specific applications. Design and implement retrieval-augmented generation (RAG) pipelines with vector databases like FAISS, Weaviate, or Pinecone. Develop NLP pipelines using Hugging Face Transformers, spaCy, and LangChain for various text understanding and generation tasks. Leverage Python with PyTorch and TensorFlow for training, fine-tuning, and evaluating models. Prepare and manage high-quality datasets for model training and evaluation. Experience & Qualifications 2+ years of hands-on experience in AI engineering, machine learning, or data science roles. Proven track record in building and deploying computer vision and STT AI applications. Experience with agentic workflows or autonomous AI agents is highly desirable. Technical Skills Languages & Libraries: Python, PyTorch, TensorFlow, Hugging Face Transformers, LangChain, spaCy LLMs & Generative AI: GPT, LLaMA 3, Mistral, T5, Claude, and other open-source or commercial models Agentic Tooling: LangGraph, CrewAI, AutoGen, OpenAgents Vector Databases: Pinecone or ChromaDB DevOps & Deployment: Docker, Kubernetes, AWS (SageMaker, Lambda, Bedrock, S3) Core ML Skills: Data preprocessing, feature engineering, model evaluation, and optimization Qualifications: Education: Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field. Show more Show less
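The fine-tuning responsibility above typically relies on parameter-efficient methods; here is a minimal sketch of attaching LoRA adapters to a causal language model with Hugging Face PEFT. The base model and target module names are assumptions (they vary by architecture), and the actual dataset and training loop are omitted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "facebook/opt-350m"  # assumed small base model for illustration
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach low-rank adapters to the attention projections only.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names depend on the architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable

# From here the wrapped model plugs into a standard transformers Trainer or SFT
# loop; only the adapter weights are updated and saved.
```

Because only the adapters are trained, the same base checkpoint can be reused across several domain-specific variants, which keeps GPU and storage costs down.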
Posted 2 weeks ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business. Job Description About Fractal What makes Fractal a GREAT fit for you? When you join Fractal, you’ll be part of a fast-growing team that helps our clients leverage AI together with the power of behavioural sciences to make better decisions. We’re a strategic analytics partner to most admired fortune 500 companies globally, we help them power every human decision in the enterprise by bringing analytics, AI and behavioural science to the decision. Our people enjoy a collaborative work environment, exceptional training and career development — as well as unlimited growth opportunities. We have a Glassdoor rating of 4 / 5 and achieve customer NPS of 9/ 10. If you like working with a curious, supportive, high-performing team, Fractal is the place for you. close. Responsibilities Design and implement advanced solutions utilizing Large Language Models (LLMs). Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions. Conduct research and stay informed about the latest developments in generative AI and LLMs. Develop and maintain code libraries, tools, and frameworks to support generative AI development. Participate in code reviews and contribute to maintaining high code quality standards. Engage in the entire software development lifecycle, from design and testing to deployment and maintenance. Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility. Possess strong analytical and problem-solving skills. Demonstrate excellent communication skills and the ability to work effectively in a team environment. Primary Skills Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation. Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis. Generative AI: Proficiency with SaaS LLMs, including Lang chain, llama index, vector databases, Prompt engineering (COT, TOT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, AWS Bedrock for text/audio/image/video modalities. Familiarity with Open-source LLMs, including tools like TensorFlow/Pytorch and huggingface. Techniques such as quantization, LLM finetuning using PEFT, RLHF, data annotation workflow, and GPU utilization. Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred. Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git. Tech Skills (10+ Years’ Experience): Machine Learning (ML) & Deep Learning Solid understanding of supervised and unsupervised learning. Proficiency with deep learning architectures like Transformers, LSTMs, RNNs, etc. Generative AI: Hands-on experience with models such as OpenAI GPT4, Anthropic Claude, LLama etc. Knowledge of fine-tuning and optimizing large language models (LLMs) for specific tasks. Natural Language Processing (NLP): Expertise in NLP techniques, including text preprocessing, tokenization, embeddings, and sentiment analysis. Familiarity with NLP tasks such as text classification, summarization, translation, and question-answering. 
Retrieval-Augmented Generation (RAG): In-depth understanding of RAG pipelines, including knowledge retrieval techniques like dense/sparse retrieval. Experience integrating generative models with external knowledge bases or databases to augment responses. Data Engineering: Ability to build, manage, and optimize data pipelines for feeding large-scale data into AI models. Search and Retrieval Systems: Experience with building or integrating search and retrieval systems, leveraging knowledge of Elasticsearch, AI Search, ChromaDB, PGVector etc. Prompt Engineering: Expertise in crafting, fine-tuning, and optimizing prompts to improve model output quality and ensure desired results. Understanding how to guide large language models (LLMs) to achieve specific outcomes by using different prompt formats, strategies, and constraints. Knowledge of techniques like few-shot, zero-shot, and one-shot prompting, as well as using system and user prompts for enhanced model performance. Programming & Libraries: Proficiency in Python and libraries such as PyTorch, Hugging Face, etc. Knowledge of version control (Git), cloud platforms (AWS, GCP, Azure), and MLOps tools. Database Management: Experience working with SQL and NoSQL databases, as well as vector databases APIs & Integration: Ability to work with RESTful APIs and integrate generative models into applications. Evaluation & Benchmarking: Strong understanding of metrics and evaluation techniques for generative models. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest! Show more Show less
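As a small illustration of the prompt engineering skills listed above, here is a minimal sketch of few-shot prompting with the OpenAI Python client (v1.x). The model name, example pairs, and the ticket-classification task are illustrative assumptions, not part of the role description.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Few-shot prompting: seed the conversation with labelled examples so the
# model infers the output format and label set without any fine-tuning.
messages = [
    {"role": "system", "content": "Classify the support ticket as BILLING, TECHNICAL, or OTHER. Reply with the label only."},
    {"role": "user", "content": "I was charged twice for my subscription."},
    {"role": "assistant", "content": "BILLING"},
    {"role": "user", "content": "The dashboard keeps timing out when I export reports."},
    {"role": "assistant", "content": "TECHNICAL"},
    {"role": "user", "content": "Can you update the company name on my invoice?"},
]

resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, temperature=0)
print(resp.choices[0].message.content)
```

Zero-shot prompting drops the example pairs and relies on the system instruction alone; one-shot keeps a single example. Which variant works best is usually decided empirically against an evaluation set.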
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: AI Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in Subject Line: Application – AI Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
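For a sense of the real-time streaming work described in this posting, here is a minimal sketch of the client side of a chunked, bi-directional audio stream over WebSockets with asyncio. The endpoint URL, chunk size, and message protocol are assumptions about a hypothetical transcription service, not a documented Darwix API.

```python
import asyncio
import websockets  # pip install websockets

CHUNK_BYTES = 3200  # ~100 ms of 16 kHz, 16-bit mono PCM (assumed format)

async def stream_audio(path: str, uri: str = "wss://example.com/stt"):
    """Send raw PCM chunks and print partial transcripts as they arrive."""
    async with websockets.connect(uri) as ws:
        async def sender():
            with open(path, "rb") as f:
                while chunk := f.read(CHUNK_BYTES):
                    await ws.send(chunk)          # binary audio frame
                    await asyncio.sleep(0.1)      # pace roughly at real time
            await ws.send(b"")                    # assumed end-of-stream marker

        async def receiver():
            async for message in ws:              # server pushes partial results
                print("partial:", message)

        await asyncio.gather(sender(), receiver())

# asyncio.run(stream_audio("call.pcm"))
```

Pacing the sender at roughly real time and consuming partial results as they arrive is what keeps the end-to-end voice-to-nudge latency target (under a second) within reach.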
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Lead - Data Scientist Join our high-performing team, honored with the prestigious "Outstanding Data Engineering Team" Award at DES 2025 for setting new benchmarks in data excellence. About the Role: We are seeking a highly skilled and GCP-certified AI/ML Engineer with expertise in Generative AI models, Natural Language Processing (NLP), and cloud-native development, with 8+ years of experience. This role involves designing and deploying scalable ML solutions, building robust APIs, and integrating AI capabilities into enterprise applications. The ideal candidate will also have a solid background in software engineering and DevOps practices. Key Responsibilities: Design, develop, and implement GenAI and NLP models using Python and relevant libraries (Transformers, LangChain, etc.) Deploy ML models and pipelines on GCP (Vertex AI, BigQuery, Cloud Functions) and Azure ML/Azure Services Develop and manage RESTful APIs for model integration Apply MLOps and CI/CD best practices using tools like GitHub Actions, Azure DevOps, or Jenkins Ensure solutions follow software engineering principles, including modularity, reusability, and scalability Work in Agile teams, contributing to sprint planning, demos, and retrospectives Collaborate with cross-functional teams to define use cases and deliver PoCs and production-ready solutions Optimize performance and cost efficiency of deployed models and cloud services Maintain strong documentation and follow secure coding and data privacy standards. Required Skills: Strong programming skills in Python with experience in ML & NLP frameworks (e.g., TensorFlow, PyTorch, spaCy, Hugging Face) Experience with Generative AI models (OpenAI, PaLM, LLaMA, etc.) Solid understanding of Natural Language Processing (NLP) concepts like embeddings, summarization, NER, etc. Proficiency in GCP services such as Vertex AI, Cloud Run, Cloud Storage, and BigQuery (GCP certification is mandatory) Familiarity with Azure ML / Azure Functions / Azure API Management Experience in building and managing REST APIs Hands-on with CI/CD tools and containerization (Docker; Kubernetes is a plus) Strong grasp of software engineering concepts and Agile methodology Preferred Qualifications: Bachelor's or Master's in Computer Science, Data Science, or a related field Experience working with cross-platform AI integration Exposure to LLM Ops / Prompt Engineering Certification as an Azure AI Engineer or Azure Data Scientist is a plus Location: Chennai, Coimbatore, Pune, Bangalore Experience: 8-12 Years Regards, TA Team Show more Show less
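To illustrate the "build and manage RESTful APIs for model integration" responsibility above, here is a minimal sketch of exposing an NLP capability behind a FastAPI endpoint. The summarizer is a stub placeholder rather than a real Vertex AI or Hugging Face call, and the route and schema names are assumptions.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="summarization-api")

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 2

class SummarizeResponse(BaseModel):
    summary: str

def summarize(text: str, max_sentences: int) -> str:
    # Placeholder: return the first N sentences. A real service would call a
    # GenAI model (Vertex AI, Azure OpenAI, Hugging Face, etc.) here instead.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

@app.post("/summarize", response_model=SummarizeResponse)
def summarize_endpoint(req: SummarizeRequest) -> SummarizeResponse:
    return SummarizeResponse(summary=summarize(req.text, req.max_sentences))

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```

Containerizing this service with Docker and wiring it into a CI/CD pipeline (GitHub Actions, Azure DevOps, or Jenkins) is the natural next step the posting alludes to.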
Posted 2 weeks ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job description 🚀 Job Title: ML Engineer Company : Darwix AI Location : Gurgaon (On-site) Type : Full-Time Experience : 2-6 Years Level : Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the ML Engineer , you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2. Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. 
Participate in architectural and design decisions across AI, backend, and data workflows. 🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. Build an in-house knowledge indexing + vector DB framework integrated into the product. Mentor 2–3 AI engineers and own execution across multiple modules. Achieve <1 sec latency on real-time voice-to-nudge pipeline from capture to recommendation. 💼 What We Offer Compensation : Competitive fixed salary + equity + performance-based bonuses Impact : Ownership of key AI modules powering thousands of live enterprise conversations Learning : Access to high-compute GPUs, API credits, research tools, and conference sponsorships Culture : High-trust, outcome-first environment that celebrates execution and learning Mentorship : Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers Scale : Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months ⚠️ This Role is NOT for Everyone 🚫 If you're looking for a slow, abstract research role—this is NOT for you. 🚫 If you're used to months of ideation before shipping—you won't enjoy our speed. 🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle. ✅ But if you’re a builder , architect , and visionary —who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you. 
📩 How to Apply Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to: 📧 careers@cur8.in / vishnu.sethi@cur8.in Subject Line: Application – ML Engineer – [Your Name] Include links to: Any relevant open-source contributions LLM/STT models you've fine-tuned or deployed RAG pipelines you've worked on 🔍 Final Thought This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform —from India, for the world. Show more Show less
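This posting also mentions building internal tools to benchmark latency for production-grade AI features. Below is a minimal, dependency-free Python sketch of such a harness that measures per-call latency percentiles for any callable; the dummy workload is a placeholder standing in for an STT or LLM call.

```python
import statistics
import time

def benchmark(fn, payloads, warmup: int = 3):
    """Measure wall-clock latency (ms) of fn over payloads; report p50/p95/max."""
    for p in payloads[:warmup]:      # warm caches and any lazy model loading
        fn(p)
    latencies = []
    for p in payloads:
        start = time.perf_counter()
        fn(p)
        latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": p95,
        "max_ms": latencies[-1],
        "n": len(latencies),
    }

# Example with a dummy workload standing in for a transcription or LLM call.
def fake_model(text: str) -> str:
    time.sleep(0.01)
    return text.upper()

print(benchmark(fake_model, ["hello"] * 50))
```

Tracking p95 rather than the mean is usually what matters for a sub-second voice-to-nudge target, since tail latency is what users actually notice.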
Posted 2 weeks ago