Jobs
Interviews

248 Ontologies Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

40.0 years

4 - 8 Lacs

Hyderābād

On-site

India - Hyderabad | JOB ID: R-216753 | ADDITIONAL LOCATIONS: India - Hyderabad | WORK LOCATION TYPE: On Site | DATE POSTED: Jul. 22, 2025 | CATEGORY: Engineering

Role Name: IS Architecture
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology & Innovation
Role GCF: 06A

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of Data Modelers.

Roles & Responsibilities:
• Provide oversight to data modeling team members
• Develop and maintain conceptual, logical, and physical data models to support business needs
• Establish and enforce data standards, governance policies, and best practices
• Design and manage metadata structures to enhance information retrieval and usability
• Maintain comprehensive documentation of the architecture, including principles, standards, and models
• Evaluate and recommend technologies and tools that best fit the solution requirements
• Evaluate emerging technologies and assess their potential impact
• Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience: [GCF Level 6A]
• Doctorate degree and 8 years of experience in Computer Science, IT or related field OR
• Master’s degree with 12 - 15 years of experience in Computer Science, IT or related field OR
• Bachelor’s degree with 14 - 17 years of experience in Computer Science, IT or related field

Functional Skills:
Must-Have Skills:
• Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
• Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
• Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
• Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), and performance tuning of big data processing

Good-to-Have Skills:
• Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications:
• Certifications in Databricks are desired

Soft Skills:
• Excellent critical-thinking and problem-solving skills
• Strong communication and collaboration skills
• Demonstrated awareness of how to function in a team setting
• Demonstrated awareness of presentation skills

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
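The conceptual-to-physical modeling work this posting describes can be sketched minimally: a logical model is translated into physical DDL. All entity, table, and column names below are hypothetical illustrations (not any real enterprise schema), using Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical logical model: entity name -> {attribute: physical type}.
# Illustrative only -- not an actual production schema.
logical_model = {
    "patient": {"patient_id": "INTEGER PRIMARY KEY", "name": "TEXT"},
    "medicine": {"medicine_id": "INTEGER PRIMARY KEY", "name": "TEXT"},
    "prescription": {
        "prescription_id": "INTEGER PRIMARY KEY",
        "patient_id": "INTEGER REFERENCES patient(patient_id)",
        "medicine_id": "INTEGER REFERENCES medicine(medicine_id)",
    },
}

def to_ddl(model):
    """Translate the logical model into physical CREATE TABLE statements."""
    return [
        "CREATE TABLE {} ({})".format(
            name, ", ".join(f"{col} {typ}" for col, typ in cols.items())
        )
        for name, cols in model.items()
    ]

conn = sqlite3.connect(":memory:")
for stmt in to_ddl(logical_model):
    conn.execute(stmt)

# The physical design now exists and is visible in the catalog.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['medicine', 'patient', 'prescription']
```

The same mapping step is what a dedicated modeling tool automates; the dictionary here stands in for the conceptual/logical layer the posting mentions.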

Posted 1 month ago

Apply

4.0 years

0 Lacs

India

Remote

📍 Remote | Junior Roles | <4 years experience | Industry-Specific | Full-Time
📅 Applications close: August 20th, 2025

About the Role
We’re hiring Vertical Intelligence Specialists to drive innovation at the intersection of AI, 3D engineering, and compliance for heavy industries. You will serve as the strategic brain of your vertical (Automotive / Construction / Railway / Naval / Aviation / Aerospace), turning complex industry knowledge into actionable product features, AI data signals, and go-to-market advantages. This role blends business insight, systems thinking, and cross-functional collaboration with our AI, engineering, and sales teams. You’ll act as a command center for your vertical, identifying client value, guiding technical teams, and fueling our platform’s intelligence.

About Neovoltis
Neovoltis is building the world’s first AI-native, modular engineering ecosystem for industrial sectors, combining 3D design, simulation, ontology-based AI, and multi-layered intelligence. We operate under a neural company model: modular cells, quarterly reviews, self-evolving teams. You’ll be embedded in a Vertical Cell, collaborating with the Intelligence Cell, Platform Engineering Cell, and Sales Cell — shaping the future of industrial AI, one vertical at a time.

What You’ll Do
🔧 Be the voice of the vertical inside Neovoltis:
• Identify and prioritize client-facing needs in your sector
• Translate business realities into feature specs, simulation logic, and 3D asset needs
• Lead quarterly reviews to steer your cell’s roadmap
• Represent domain-specific norms, risks, and opportunities
🤝 Bridge strategy with execution:
• Support the AI team with labeled data, business rules, and compliance metadata
• Co-design ontologies and data flows that reflect your sector’s real-world complexity
• Collaborate with sales & marketing to ensure messaging reflects technical realities and industry value
• Validate outputs from the platform team to ensure product-market fit
🎯 Drive customer insight and ecosystem logic:
• Engage with early users and customers from your sector
• Track standards, compliance frameworks, and trends (e.g., ISO, Eurocode, UNECE, ERA)
• Maintain domain-specific knowledge libraries (docs, specs, white papers, playbooks)

Ideal Profile
✅ 1 to 4 years of experience in automotive, construction, or railway engineering, simulation, product, or compliance
✅ Strong ability to translate between business, data, and technical stakeholders
✅ Excellent understanding of your vertical’s standards, processes, and pain points
✅ Autonomous, structured thinker with curiosity for AI and digital twins
✅ Previous collaboration with product or engineering teams is a plus
✅ Experience in simulation, CAD/BIM, regulatory compliance, or ERP/MES systems is a plus

Tools You Might Work With
• Notion, Gemini, Google Chat (cell coordination)
• Docs + Sheets + Diagrams (feature specs, compliance flows)
• 3D formats (STL, STEP, IFC) and workflows
• Industry standards: ISO, UNECE, ERA, EC codes, etc.
• Ontology & knowledge modeling tools (training provided)
• Internal GPT/LLM tools and dashboards for client behavior and trend detection

What You’ll Influence
✅ Shape the roadmap of Neovoltis for your entire industry
✅ Inform how AI models understand, process, and simulate real-world behavior
✅ Define business features and compliance intelligence pipelines
✅ Contribute to the go-to-market strategy for your vertical
✅ Be the neuron of your industry in a Nobel-worthy neural company

Compensation & Benefits
💰 Experience-based
🎯 Performance-based bonus
🌍 Work remotely in a fully international, AI-first company
🚀 Join a pioneering neural organization model with real strategic ownership
📚 Access to training and conferences in your industry or tech sector
🧠 Long-term career growth in a fast-evolving company at the frontier of industry and AI

How to Apply
📧 Send your CV, a short motivation note, and your industry of interest to: careers@neovoltis.com
📅 Application deadline: August 20, 2025
🌐 Please indicate if you’re applying for: Automotive, Construction, Railway, Naval, Aviation or Aerospace

Posted 1 month ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

About the Role
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.

Roles & Responsibilities:
• Develop and maintain conceptual, logical, and physical data models to support business needs
• Contribute to and enforce data standards, governance policies, and best practices
• Design and manage metadata structures to enhance information retrieval and usability
• Maintain comprehensive documentation of the architecture, including principles, standards, and models
• Evaluate and recommend technologies and tools that best fit the solution requirements
• Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience:
• Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT or related field

Functional Skills:
Must-Have Skills:
• Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
• Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
• Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), and performance tuning of big data processing
• Implementing data testing and data quality strategies

Good-to-Have Skills:
• Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications:
• Certifications in Databricks are desired

Soft Skills:
• Excellent critical-thinking and problem-solving skills
• Strong communication and collaboration skills
• Demonstrated awareness of how to function in a team setting

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Python Developer – Web Scraping & Data Processing

About the Role
We are seeking a skilled and detail-oriented Python Developer with hands-on experience in web scraping, document parsing (PDF, HTML, XML), and structured data extraction. You will be part of a core team working on aggregating biomedical content from diverse sources, including grant repositories, scientific journals, conference abstracts, treatment guidelines, and clinical trial databases.

Key Responsibilities
• Develop scalable Python scripts to scrape and parse biomedical data from websites, pre-print servers, citation indexes, journals, and treatment guidelines.
• Build robust modules for splitting multi-record documents (PDFs, HTML, etc.) into individual content units.
• Implement NLP-based field extraction pipelines using libraries like spaCy, NLTK, or regex for metadata tagging.
• Design and automate workflows using schedulers like cron, Celery, or Apache Airflow for periodic scraping and updates.
• Store parsed data in relational (PostgreSQL) or NoSQL (MongoDB) databases with efficient schema design.
• Ensure robust logging, exception handling, and content quality validation across all processes.

Required Skills and Qualifications
• 3+ years of hands-on experience in Python, especially for data extraction, transformation, and loading (ETL).
  ◦ Strong command of web scraping libraries: BeautifulSoup, Scrapy, Selenium, Playwright
  ◦ Proficiency in PDF parsing libraries: PyMuPDF, pdfminer.six, pdfplumber
• Experience with HTML/XML parsers: lxml, XPath, html5lib
• Familiarity with regular expressions, NLP, and field extraction techniques.
• Working knowledge of SQL and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
• Understanding of API integration (RESTful APIs) for structured data sources.
• Experience with task schedulers and workflow orchestrators (cron, Airflow, Celery).
• Version control using Git/GitHub and comfort working in collaborative environments.

Good to Have
• Exposure to biomedical or healthcare data parsing (e.g., abstracts, clinical trials, drug labels).
• Familiarity with cloud environments like AWS (Lambda, S3).
• Experience with data validation frameworks and building QA rules.
• Understanding of ontologies and taxonomies (e.g., UMLS, MeSH) for content tagging.

Why Join Us
• Opportunity to work on cutting-edge biomedical data aggregation for large-scale AI and knowledge graph initiatives.
• Collaborative environment with a mission to improve access and insights from scientific literature.
• Flexible work arrangements and access to industry-grade tools and infrastructure.
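The pipeline this role describes (split multi-record documents, then extract metadata fields with regexes) can be sketched with the standard library alone. The document format and field names below are invented for illustration; a real pipeline would parse scraped HTML or PDF text instead of an inline string:

```python
import re

# Toy multi-record document standing in for scraped biomedical content.
raw_document = """\
TITLE: Gene therapy trial results
PMID: 12345
---
TITLE: Antibody screening methods
PMID: 67890
"""

def split_records(text, separator="---"):
    """Split a multi-record document into individual content units."""
    return [chunk.strip() for chunk in text.split(separator) if chunk.strip()]

def extract_fields(record):
    """Pull tagged metadata fields out of one record with regexes."""
    fields = {}
    for key in ("TITLE", "PMID"):
        match = re.search(rf"^{key}:\s*(.+)$", record, re.MULTILINE)
        if match:
            fields[key.lower()] = match.group(1).strip()
    return fields

records = [extract_fields(r) for r in split_records(raw_document)]
print(records)
# [{'title': 'Gene therapy trial results', 'pmid': '12345'},
#  {'title': 'Antibody screening methods', 'pmid': '67890'}]
```

In production, the extracted dictionaries would be validated and written to PostgreSQL or MongoDB, with the whole job scheduled via cron or Airflow as the posting outlines.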

Posted 1 month ago

Apply

40.0 years

4 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Modeler position is responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Modeler drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data.

Roles & Responsibilities:
• Develop and maintain conceptual, logical, and physical data models to support business needs
• Contribute to and enforce data standards, governance policies, and best practices
• Design and manage metadata structures to enhance information retrieval and usability
• Maintain comprehensive documentation of the architecture, including principles, standards, and models
• Evaluate and recommend technologies and tools that best fit the solution requirements
• Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience:
• Doctorate / Master’s / Bachelor’s degree with 8-12 years of experience in Computer Science, IT or related field

Functional Skills:
Must-Have Skills:
• Data Modeling: Proficiency in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
• Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
• Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), and performance tuning of big data processing
• Implementing data testing and data quality strategies

Good-to-Have Skills:
• Experience with graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications:
• Certifications in Databricks are desired

Soft Skills:
• Excellent critical-thinking and problem-solving skills
• Strong communication and collaboration skills
• Demonstrated awareness of how to function in a team setting

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 month ago

Apply

13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview
Job Title: Data Science Engineer, VP
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

What We’ll Offer You
As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:
• Best in class leave policy
• Gender neutral parental leaves
• 100% reimbursement under childcare assistance benefit (gender neutral)
• Sponsorship for industry-relevant certifications and education
• Employee Assistance Program for you and your family members
• Comprehensive hospitalization insurance for you and your dependents
• Accident and term life insurance
• Complimentary health screening for those 35 yrs. and above

Your Key Responsibilities
• Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
• Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
• Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications.
• NER Models: Train OCR- and NLP-leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
• Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
• Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
• Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your Skills and Experience
• 13+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
• Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch.
• Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
• Experience with LLMs (e.g., GPT-4), SLMs (e.g., spaCy), and prompt engineering.
• Understanding of semantic technologies, ontologies, and RDF/SPARQL.
• Familiarity with MLOps tools and practices for continuous integration and deployment.
• Skilled in building and querying knowledge graphs using tools like Neo4j.
• Hands-on experience with vector databases and embedding techniques.
• Familiarity with RAG architectures and hybrid search methodologies.
• Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
• Strong problem-solving abilities and analytical thinking.
• Excellent communication skills for cross-functional collaboration.
• Ability to work independently and manage multiple projects simultaneously.

How We’ll Support You
• Training and development to help you excel in your career
• Coaching and support from experts in your team
• A culture of continuous learning to aid progression
• A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
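The RAG retrieval step mentioned in the posting above can be sketched in miniature: embed the documents and the query, then rank by similarity. The bag-of-words "embeddings" below are a toy stand-in for real model embeddings, and a production system would use a vector database such as Milvus or FAISS rather than a linear scan:

```python
import math
from collections import Counter

# Toy corpus standing in for an enterprise document store.
documents = [
    "knowledge graphs represent entities and relationships",
    "vector databases store embeddings for similarity search",
    "parental leave policy and insurance benefits",
]

def embed(text):
    """Toy embedding: a word-count vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

top = retrieve("similarity search over embeddings", documents)
print(top[0])  # the vector-database document ranks first
```

The retrieved passages would then be injected into the LLM prompt, which is the "augmented generation" half of RAG.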

Posted 1 month ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems — the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What you’ll do:
We are looking for experienced Knowledge Graph developers who have the following technical skillsets and experience.
• Undertake complete ownership in accomplishing activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements.
• Apply appropriate development methodologies (e.g., agile, waterfall) and best practices (e.g., mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments.
• Collaborate with other team members to leverage expertise and ensure seamless transitions; exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management.
• Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, technical architecture (if needed), test cases, and operations management.
• Bring transparency in driving assigned tasks to completion and report accurate status.
• Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams.
• Assist senior team members and delivery leads in project management responsibilities.
• Build complex solutions using programming languages, ETL service platforms, etc.

What you’ll bring:
• Bachelor’s or master’s degree in computer science, engineering, or a related field.
• 4+ years of professional experience in knowledge graph development in Neo4j, AWS Neptune, or Anzo knowledge graph databases.
• 3+ years of experience in RDF ontologies, data modelling and ontology development.
• Strong expertise in Python, PySpark, and SQL.
• Strong ability to identify data anomalies, design data validation rules, and perform data cleanup to ensure high-quality data.
• Project management and task planning experience, ensuring smooth execution of deliverables and timelines.
• Strong communication and interpersonal skills to collaborate with both technical and non-technical teams.
• Experience with automation testing.
• Performance optimization: knowledge of techniques to optimize knowledge graph operations like data inserts.
• Data modeling: proficiency in designing effective data models within a knowledge graph, including relationships between tables and optimizing data for reporting.
• Motivation and willingness to learn new tools and technologies as per the team’s requirements.

Additional Skills:
• Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations.
• Experience in pharma or life sciences data: familiarity with pharmaceutical datasets, including product, patient, or healthcare provider data, is a plus.
• Experience in manufacturing data is a plus.
• Capability to simplify complex concepts into easily understandable frameworks and presentations.
• Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects.
• Travel to other offices as required to collaborate with clients and internal project teams.

Perks & Benefits:
ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member.

We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel:
Travel is a requirement at ZS for client-facing ZSers; business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?
At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above.

ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application:
Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered.

NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
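The RDF ontology and knowledge-graph work described in the posting above centers on storing triples and matching patterns against them. A minimal in-memory sketch follows; the biomedical triples are toy data, standing in for a real Neo4j, Amazon Neptune, or SPARQL-queryable RDF store:

```python
# RDF-style (subject, predicate, object) triples -- toy examples only.
triples = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "drug"),
    ("headache", "is_a", "symptom"),
}

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard,
    analogous to a variable in a SPARQL basic graph pattern."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "What does aspirin treat?" -- roughly: SELECT ?o WHERE { :aspirin :treats ?o }
print(query(s="aspirin", p="treats"))  # [('aspirin', 'treats', 'headache')]
```

Real triple stores add indexing, inference over the ontology, and a full query language on top of exactly this pattern-matching core.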

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru East, Karnataka, India

Remote

All roles at JumpCloud are Remote unless otherwise specified in the Job Description. About JumpCloud JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud®, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud® is IT Simplified. About the Role: We are seeking a Staff Product Manager with deep expertise in AI, Data Science, and Cybersecurity to lead the development of a transformative Security Data Fabric and Exposure Management Platform (ISPM, ITDR etc). In a world of siloed security tools and scattered data, your mission is to turn data chaos into clarity—helping organizations see, understand, and act on their cyber risk with precision and speed. The JumpCloud access and authentication team is changing the way IT admins and users authenticate to their JumpCloud managed IT resources for a frictionless experience to get work done. The days of the traditional corporate security perimeter are over. Remote work – and the domainless enterprise – are here to stay. As such, we believe securing all endpoints is at the crux of establishing trust, granting resource access, and otherwise managing a modern workforce. Our Cloud Directory Platform supports diverse IT endpoints from devices, SSO applications, infrastructure servers, RADIUS, and LDAP is making it easy for IT admins to manage the authentication required from MFA to zero trust using conditional access based on Identity Trust, Network Trust, Geolocation Trust, and Device Trust based on X509 certificates. If you want to build on this success and drive the future of authentication at JumpCloud come join us. 
You’ll be at the forefront of designing a next-generation data platform that:
- Creates a Security Data Fabric to unify signals from across the attack surface
- Uses AI to resolve entities and uncover hidden relationships
- Drives real-time Exposure Management to reduce risk faster than adversaries can act

You will be responsible for:
- Defining and driving the product strategy for the Security Data Fabric and Exposure Management platform (ISPM, ITDR, etc.), aligned with customer needs and business goals
- Engaging with CISOs, security analysts, and risk leaders to deeply understand pain points in exposure management and cyber risk visibility
- Translating strategic objectives into clear, actionable product requirements that leverage AI/ML and data science to unify and contextualize security signals
- Collaborating closely with engineering, data science, UX, sales, and security research to deliver scalable and performant solutions
- Championing a data-centric mindset: shaping features like entity resolution, risk scoring, and automated remediation workflows powered by advanced analytics

You Have:
- 10+ years of experience in product management, with at least 5 years in cybersecurity or enterprise AI/data products
- Deep understanding of AI/ML, data science, entity resolution, and knowledge graphs in practical applications
- Experience building or integrating security analytics, threat detection, vulnerability management, or SIEM/XDR solutions
- The ability to untangle the interconnectedness of complex authentication problems, simplify them, and drive the cross-functional team in the same direction
- Proven ability to define and deliver complex B2B platforms, especially in data-heavy, high-stakes environments
- Excellent communication and storytelling skills to align cross-functional teams and influence stakeholders

Nice to have:
- Experience with graph databases, ontologies, or large-scale entity disambiguation
- Familiarity with security standards (MITRE ATT&CK, CVSS, etc.) and frameworks (NIST CSF, ISO 27001, etc.)
- Prior experience launching products in cloud-native or hybrid enterprise environments
- Degree in Computer Science, Information Systems, or Engineering; an MBA is a plus

Where you’ll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. This role is remote in the country of India. You must be located in and authorized to work in India to be considered for this role.

Language:
JumpCloud® has teams in 15+ countries around the world and conducts our internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud®, you will be required to speak and write in English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive working in a fast, SaaS-based environment and you are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud® is an incredible place to share and grow your expertise! You’ll work with amazing talent across each department who are passionate about our mission. We’re out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You’ll have a voice in the organization as you work with a seasoned executive team, a supportive board, and in a proven market that our customers are excited about.

One of JumpCloud®'s three core values is to “Build Connections.” To us that means creating “human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed.”
- Rajat Bhargava, CEO

Please submit your résumé and a brief explanation of yourself and why you would be a good fit for JumpCloud®. Please note JumpCloud® is not accepting third-party resumes at this time.

JumpCloud® is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase from the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line "Scam Notice" #BI-Remote
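The entity-resolution capability the posting describes (unifying signals about the same user or device arriving from different security tools) can be sketched in a few lines. This is a minimal illustration using simple string similarity, not JumpCloud's implementation; production systems would use learned embeddings and richer matching keys, and all record names below are hypothetical.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two normalized strings are."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def resolve_entities(records, threshold=0.85):
    """Greedily cluster records whose names look like the same entity."""
    clusters = []
    for rec in records:
        for cluster in clusters:
            if similarity(rec["name"], cluster[0]["name"]) >= threshold:
                cluster.append(rec)  # same entity seen by another tool
                break
        else:
            clusters.append([rec])  # a new entity
    return clusters

# Signals about the same user arriving from different security tools:
records = [
    {"name": "Alice Smith", "source": "idp"},
    {"name": "alice smith ", "source": "edr"},
    {"name": "Bob Jones", "source": "vuln-scanner"},
]
clusters = resolve_entities(records)  # two entities, not three
```

Once records are clustered, each cluster becomes one node in a knowledge graph, so risk scores and relationships can be attached to the entity rather than to each tool's fragmentary view of it.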

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Posted 1 month ago

Apply

10.0 years

0 Lacs

Greater Delhi Area

Remote


Posted 1 month ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Posted 1 month ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

Remote


Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

You will be working as a computational linguist in Pashan, Pune on a full-time contractual basis. The ideal candidate should have an MA in Linguistics/Applied Linguistics with First Class or Equivalent Grade from a UGC-Approved Reputable University, or an MPhil/PhD in Linguistics/Applied Linguistics with formal training in Computational Linguistics. Research publications in NLP, Semantics/Lexicology, or Discourse Studies are highly desirable. Your skill sets should include proficiency in Hindi and other Indian/foreign languages, linguistic typing, and dataset/lexicon creation. Experience in NLP tagging, annotation, and ML-labelled data entry is required. Familiarity with ontologies, PoS taggers, NER, and other language processing tools (English/Hindi/other Indian languages) is expected. Hands-on experience in tools such as Protégé, Excel, Word, and other NLP annotation platforms will be beneficial. Your responsibilities will include lexicon testing and creation. You should be capable of creating lexical entries in English and Hindi using Microsoft Office tools and NLP software. Collaboration within teams and proactive engagement in AI & NLP learning environments is essential. Additionally, you will contribute to ML-labelled data creation and domain-specific dataset development.
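The "NLP Tagging, Annotation, and ML-labelled data entry" work described above typically means producing token-level labels. As an illustrative sketch (example tokens and labels are invented, not from the posting), here is the common BIO scheme for turning annotated entity spans into ML-ready labels:

```python
# Illustrative sketch: converting annotated entity spans into BIO labels,
# the scheme commonly used for NER / ML-labelled data creation.

def bio_tags(tokens, entity_spans):
    """entity_spans: list of (start_idx, end_idx_exclusive, label)."""
    tags = ["O"] * len(tokens)  # default: outside any entity
    for start, end, label in entity_spans:
        tags[start] = f"B-{label}"          # begin entity
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"          # inside entity
    return list(zip(tokens, tags))

tokens = ["Pune", "is", "in", "Maharashtra"]
annotated = bio_tags(tokens, [(0, 1, "LOC"), (3, 4, "LOC")])
print(annotated)
```

Tools like Protégé then sit a level above this, organising the labelled concepts into ontologies.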

Posted 2 months ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are a Data Science Engineer who will be contributing to the development of intelligent, autonomous AI systems. The ideal candidate should have a strong background in agentic AI, LLMs, SLMs, vector DB, and knowledge graphs. Your responsibilities will include deploying AI solutions that leverage technologies such as Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. As part of the flexible scheme, you will enjoy various benefits such as a best-in-class leave policy, gender-neutral parental leaves, childcare assistance benefit reimbursement, sponsorship for industry-relevant certifications, employee assistance program, comprehensive hospitalization insurance, accident and term life insurance, and health screening. Your key responsibilities will involve designing and developing Agentic AI Applications using frameworks like LangChain, CrewAI, and AutoGen, implementing RAG Pipelines, fine-tuning Language Models, training NER Models, developing Knowledge Graphs, collaborating cross-functionally, and optimizing AI workflows. To excel in this role, you should have at least 4 years of professional experience in AI/ML development, proficiency in Python, Python API frameworks, SQL, and familiarity with AI/ML frameworks like TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms, understanding of LLMs, SLMs, semantic technologies, and MLOps tools is required. Additionally, hands-on experience with vector databases, embedding techniques, and developing AI solutions for specific industries will be beneficial. You will receive support through training, coaching, and a culture of continuous learning to aid in your career progression. The company strives for a culture of empowerment, responsibility, commercial thinking, initiative, and collaboration. They promote a positive, fair, and inclusive work environment for all individuals. 
For further information about the company and its teams, please visit the company website at https://www.db.com/company/company.htm. Join a team that celebrates success and fosters a culture of excellence and inclusivity.
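The RAG pipelines mentioned in the responsibilities above boil down to one core step: rank stored documents by embedding similarity and feed the top hits to the model as context. A minimal, hypothetical sketch (toy vectors and document names invented; a real system would use a vector database such as Milvus or FAISS):

```python
# Toy retrieval step of a RAG pipeline: rank documents by cosine similarity
# of pre-computed embeddings, then prepend the top-k hits to the prompt.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend these 3-d vectors came from an embedding model.
docs = {
    "leave policy": [0.9, 0.1, 0.0],
    "parental leave": [0.8, 0.2, 0.1],
    "insurance": [0.0, 0.1, 0.9],
}

def retrieve(query_vec, k=2):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

context = retrieve([1.0, 0.0, 0.0])
prompt = f"Answer using context: {context}. Question: ..."
print(context)  # the two documents most similar to the query
```

Hybrid search, also named in the posting, adds a keyword score (e.g., BM25) and fuses it with this vector score.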

Posted 2 months ago

Apply

6.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Forvia, a sustainable mobility technology leader We pioneer technologies for mobility experiences that matter to people. Your Mission, Roles And Responsibilities Sr. Data Engineer Country/Region: Pune/India Contract Type: Full time New trends and expectations are reshaping the automotive industry. Inspired by the exciting new challenges associated with this revolution, Faurecia anticipates the future of mobility, developing cutting-edge solutions for smart life on board and sustainable mobility. If you’re willing to contribute and create value for tomorrow’s cleaner and smarter mobility, Faurecia is the place to be. The Digital Services Factory (DSF) is a fast-growing team in charge of providing AI solutions for manufacturing excellence and efficiency as well as for smart product development for the automotive industry. In order to meet increasing expectations, we are enriching our team with highly skilled resources. Overall Responsibilities And Duties Data mining from structured, semi-structured & unstructured data sources Develop business intelligence dashboards & insights with datasets Ensure data quality; interpret & analyse data problems Prepare data for prescriptive & predictive modelling Create ontologies for knowledge & data management Collaborate with business owners, IT architects & data scientists Qualifications Bachelor’s/Master’s degree in Computer Science Engineering Minimum 6 to 9 years of experience working on industrial-scale projects in Big Data.
Hands-on experience in building data models, data mining & segmentation Good knowledge of Big Data ecosystems (Spark & Hadoop) Good knowledge of Python & SQL Knowledge of SQL & NoSQL databases Knowledge of PowerBI Knowledge of Dataiku, Palantir is a plus Good communication skills in English Good logical & analytical thinking Your profile and competencies to succeed What We Can Do For You At Forvia, you will find an engaging and dynamic environment where you can contribute to the development of sustainable mobility leading technologies. We are the seventh-largest global automotive supplier, employing more than 157,000 people in more than 40 countries, which creates many opportunities for career development. We welcome energetic and agile people who can thrive in a fast-changing environment. People who share our strong values. Team players with a collaborative mindset and a passion to deliver high standards for our clients. Lifelong learners. High performers. Globally minded people who aspire to work in a transforming industry, where excellence, speed, and quality count. We cultivate a learning environment, dedicating tools and resources to ensure we remain at the forefront of mobility. Our people enjoy an average of more than 22 hours of online and in-person training within FORVIA University (five campuses around the world). We offer a multicultural environment that values diversity and international collaboration. We believe that diversity is a strength. To create an inclusive culture where all forms of diversity create real value for the company, we have adopted gender diversity targets and inclusion action plans. Achieving CO2 Net Zero as a pioneer of the automotive industry is a priority: In June 2022, Forvia became the first global automotive group to be certified with the new SBTI Net-Zero Standard (the most ambitious standard of SBTi), aligned with the ambition of the 2015 Paris Agreement of limiting global warming to 1.5°C.
Three principles guide our action: use less, use better and use longer, with a focus on recyclability and circular economy. Why join us FORVIA is an automotive technology group at the heart of smarter and more sustainable mobility. We bring together expertise in electronics, clean mobility, lighting, interiors, seating, and lifecycle solutions to drive change in the automotive industry. With a history stretching back more than a century, we are the 7th largest global automotive supplier, employing more than 157,000 people in 43 countries. You'll find our technology in around 1 out of 2 vehicles produced anywhere in the world. In June 2022, we became the 1st global automotive group to be certified with the SBTI Net-Zero Standard. We have committed to reach CO2 Net Zero by no later than 2045. As technological innovation and the need for sustainability transform the automotive industry, we are ideally positioned to deliver solutions that will enhance the lives of road-users everywhere.
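The role above pairs "Develop business intelligence dashboards & insights" with "Good knowledge of Python & SQL". A self-contained sketch of that kind of aggregation query, using Python's built-in sqlite3 (table and row data are invented for illustration; at Forvia's scale this would run on a Spark/Hadoop cluster instead):

```python
# Hypothetical BI-style aggregation: total units per plant, the kind of
# SUM/GROUP BY result a PowerBI dashboard would visualise.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scrap (plant TEXT, line TEXT, units INTEGER)")
conn.executemany(
    "INSERT INTO scrap VALUES (?, ?, ?)",
    [("Pune", "L1", 12), ("Pune", "L2", 7), ("Allenjoie", "L1", 3)],
)
rows = conn.execute(
    "SELECT plant, SUM(units) AS total FROM scrap GROUP BY plant ORDER BY total DESC"
).fetchall()
print(rows)  # [('Pune', 19), ('Allenjoie', 3)]
```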

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Variant Annotation Curator, your primary responsibility will be to curate, harmonize, and maintain high-quality variant annotation datasets from a variety of public and proprietary sources such as ClinVar, ClinGen, HGMD, CADD, RefSeq, REVEL, gnomAD, dbSNP, and COSMIC. You will be tasked with developing and implementing pipelines to ensure the harmonization of variant annotations across different formats, nomenclatures, and reference genomes. Standardizing variant representations using HGVS, VCF, and other relevant formats will also be a key aspect of your role. Collaboration with the technical operations team to deliver curated data into customer systems will be essential. Additionally, you will be responsible for performing quality control and validation of variant annotations to uphold data integrity standards. Staying updated with the latest developments in variant annotation standards and tools, as well as understanding differences between annotation database versions, will be crucial for this position. Documenting curation processes will also be part of your duties. To qualify for this role, you should have a Master's or Ph.D. in Bioinformatics, Computational Biology, Genomics, or a related field. Strong experience with variant annotation tools and databases is required. Proficiency in ETL/workflow tools like Airflow, Nextflow, and scripting languages such as Python or R is essential. Experience with version control systems like Git, familiarity with genomic data formats (VCF, BED, GFF), and reference genome builds (GRCh37/38) is necessary. Previous experience with data harmonization and integration across heterogeneous sources, knowledge of ontologies, and controlled vocabularies (e.g., ClinVar terms, Sequence Ontology) are also important qualifications. Having excellent problem-solving skills and attention to detail is crucial for success in this role.
Additionally, experience with SQL (Postgres), familiarity with Kubernetes architecture, and cloud services like AWS would be advantageous in fulfilling your responsibilities as a Variant Annotation Curator.
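"Standardizing variant representations using HGVS, VCF, and other relevant formats" means mapping between notations for the same variant. A deliberately minimal sketch for the simplest case, a single-nucleotide variant (the accession and coordinates are illustrative; real harmonization must also handle indels, left-alignment/normalization, and version-matched RefSeq accessions):

```python
# Toy conversion of a VCF SNV record into an HGVS-style g. description.

def vcf_snv_to_hgvs(accession, pos, ref, alt):
    """Only handles 1-bp substitutions; anything else needs normalization."""
    if len(ref) == 1 and len(alt) == 1:
        return f"{accession}:g.{pos}{ref}>{alt}"
    raise NotImplementedError("indels require left-alignment/normalization")

# A tab-separated VCF data line: CHROM POS ID REF ALT ...
fields = "1\t12345\trs123\tA\tG".split("\t")
hgvs = vcf_snv_to_hgvs("NC_000001.11", int(fields[1]), fields[3], fields[4])
print(hgvs)  # NC_000001.11:g.12345A>G
```

In practice, curators lean on established libraries and validators for this step rather than hand-rolled parsing, precisely because of the edge cases the sketch refuses to handle.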

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Overview Job Title- Data Science Engineer, AS Location- Bangalore, India Role Description We are seeking a Data Science Engineer to contribute to the development of intelligent, autonomous AI systems. The ideal candidate will have a strong background in agentic AI, LLMs, SLMs, vector DB, and knowledge graphs. This role involves deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for industry-relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complimentary health screening for 35 yrs. and above Your Key Responsibilities Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications. NER Models: Train OCR and NLP leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP) Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Your Skills And Experience 4+ years of professional experience in AI/ML development, with a focus on agentic AI systems. Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering. Understanding of semantic technologies, ontologies, and RDF/SPARQL. Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce. How We’ll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
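"Building and querying knowledge graphs" as listed above reduces to storing subject–predicate–object triples and matching patterns over them. A toy, in-memory stand-in (the triples are invented examples; a production system would use Neo4j/Cypher or an RDF store with SPARQL, as the posting names):

```python
# Minimal triple store with pattern matching, the core idea behind
# knowledge-graph queries (None acts as a wildcard, like a SPARQL variable).

triples = [
    ("Deutsche Bank", "headquartered_in", "Frankfurt"),
    ("Frankfurt", "located_in", "Germany"),
    ("Deutsche Bank", "industry", "Banking"),
]

def query(subject=None, predicate=None, obj=None):
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

print(query(subject="Deutsche Bank"))  # all facts about one entity
```

Grounding a RAG pipeline on such a graph (rather than raw text chunks alone) is what makes the retrieval "context-aware" in the sense the role describes.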

Posted 2 months ago

Apply

8.0 years

0 Lacs

India

On-site

This role is for one of Weekday's clients. Min Experience: 8 years JobType: full-time We are seeking an experienced and detail-oriented Senior Taxonomist to join our team and take ownership of building, maintaining, and optimizing taxonomies and metadata frameworks across digital platforms. This role is critical in ensuring consistent classification, discoverability, and management of content, products, and data assets. The ideal candidate will have a strong foundation in taxonomy development, metadata strategy, content classification, and information architecture, along with a proven ability to collaborate with cross-functional teams to align taxonomy structure with business needs. Requirements Key Responsibilities: Lead the design, implementation, and maintenance of taxonomies, controlled vocabularies, metadata schemas, and ontologies for content and product classification. Analyze business requirements and user behavior to develop intuitive classification structures and tagging strategies that enhance content discoverability and search performance. Collaborate with stakeholders including product managers, content strategists, data teams, and UX designers to ensure taxonomy alignment with business goals and user needs. Conduct taxonomy audits and gap analyses to identify areas for improvement and develop plans for restructuring or optimization. Establish and enforce taxonomy governance policies, naming conventions, and documentation standards to ensure consistency and quality across platforms. Work closely with engineering and data teams to integrate taxonomy solutions into content management systems, PIM, DAM, and search platforms. Stay updated on industry best practices, tools, and technologies in taxonomy, metadata management, and information architecture. Skills and Qualifications: Bachelor's or Master's degree in Library Science, Information Science, Knowledge Management, Linguistics, or a related field.
8-15 years of hands-on experience in taxonomy development, metadata management, information architecture, or a related discipline. Strong understanding of controlled vocabularies, classification schemes, thesauri, and ontologies. Experience working with taxonomy management tools (e.g., PoolParty, Synaptica, Smartlogic, TopBraid, etc.). Proficiency in content management systems (CMS), product information management (PIM) systems, and digital asset management (DAM) platforms. Familiarity with search technologies and how taxonomy supports enterprise search, faceted navigation, and content personalization. Excellent analytical skills with the ability to interpret complex data sets and translate business needs into taxonomy solutions. Strong communication and collaboration skills, with the ability to influence stakeholders and lead cross-functional initiatives. Detail-oriented with a deep commitment to accuracy and consistency. Preferred Qualifications: Experience working in e-commerce, media, publishing, retail, or large-scale content-driven organizations. Knowledge of semantic technologies, RDF, SKOS, OWL, and linked data principles. Exposure to AI/ML tagging workflows and automation in metadata tagging and classification.
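The SKOS knowledge listed in the preferred qualifications centers on broader/narrower relations between concepts. A small sketch of that idea (the retail concepts are invented; tools like PoolParty or TopBraid manage this as RDF, but the traversal logic is the same):

```python
# SKOS-style controlled vocabulary: each concept points to its broader
# concept, and walking those links yields the facet path used for
# faceted navigation and enterprise search.

broader = {                      # concept -> its skos:broader concept
    "running shoes": "footwear",
    "sandals": "footwear",
    "footwear": "apparel",
}

def ancestors(concept):
    """Walk broader links up to the taxonomy root."""
    path = []
    while concept in broader:
        concept = broader[concept]
        path.append(concept)
    return path

print(ancestors("running shoes"))  # ['footwear', 'apparel']
```

This is also why taxonomy governance matters: a cycle or a missing broader link in the vocabulary silently breaks navigation built on top of it.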

Posted 2 months ago

Apply

15.0 years

0 Lacs

Greater Hyderabad Area

Remote

Founded by highly respected Silicon Valley veterans, with design centers established in Santa Clara, California / Hyderabad / Bangalore. Machine Learning Engineer – Medical AI Agents (RAG, SAG, Generative & Agentic Frameworks) Location: Hyderabad, India (Hybrid/Remote options available) About the Role We are building next-generation AI Agents for Clinical Decision Support—systems that combine the power of LLMs, RAG/SAG architectures, and agentic reasoning to assist doctors, reduce burnout, and extend quality care to millions. As a Machine Learning Engineer, you will work at the frontier of retrieval-augmented, structured-augmented, and agentic AI systems, designing medical copilots that can converse, reason, and act safely in real-world healthcare settings. You’ll collaborate with clinicians, public health bodies, and AI researchers to bring these agents from lab to hospital floor, tailored for multilingual, culturally diverse, and resource-constrained environments. Key Responsibilities Develop and deploy intelligent Medical AI Agents using RAG (Retrieval-Augmented Generation) and SAG (Structured-Augmented Generation) to assist in diagnostics, triage, therapy planning, and patient communication Architect multi-agent LLM workflows that reflect real clinical roles (e.g., GP, nurse, pharmacist), with memory, task decomposition, and role-specific context Fine-tune and align LLMs with clinical reasoning datasets using SFT, RLHF, and preference modeling to ensure high factuality and low hallucination rates Design multimodal integration pipelines that ingest EHRs, lab reports, imaging summaries, and patient-reported data for robust contextual reasoning Collaborate directly with doctors, nurses, and medical administrators to validate agent outputs and refine interaction models Participate in user studies, clinical pilots, and co-design workshops to ensure usability and trustworthiness of deployed agents Engage with public and private healthcare partners
across India, Southeast Asia, and multilingual regions to customize deployment and localization Build frameworks for auditability, explainability, and fallback behavior in line with SaMD (Software as a Medical Device) and CDSCO/FDA guidance Incorporate empathic communication techniques, including tone modulation, uncertainty handling, and culturally sensitive phrasing, into agent behavior Benchmark systems on MedQA, PubMedQA, MedMCQA, and internal evaluations to continuously improve real-world performance Key Qualifications Educational Background: Master’s or higher in Computer Science, AI/ML, Biomedical Informatics, or related field from a premier institution (e.g., IISc, IITs, IIIT-H, BITS Pilani, or top global universities) Outstanding Bachelor’s candidates considered with strong track record in LLM research, clinical AI projects, or medical NLP deployment Bonus: Coursework or thesis in Human-Centered AI, Medical Ethics, Cognitive Science, or Clinical Informatics Technical Expertise: Strong command of transformer models, RAG/SAG architectures, contextual embeddings, and retrieval orchestration tools (e.g., LangChain, LlamaIndex) Proficiency in Python, PyTorch, HuggingFace, vector DBs (FAISS, Qdrant), and EHR integration frameworks (FHIR, HL7) Experience designing agentic LLM frameworks with memory persistence, intent handling, and inter-agent communication Exposure to multimodal reasoning, including structured medical data (labs, symptoms) and unstructured text (progress notes, prescriptions) Familiarity with medical ontologies (SNOMED CT, UMLS, ICD-10) and working with large-scale clinical text corpora Understanding of clinical safety, bias risks, and hallucination controls for generative systems Experience with RLHF, zero/few-shot prompt tuning, and retrieval grounding in high-stakes settings Collaboration & Stakeholder Engagement Proven ability to collaborate with healthcare professionals, understand their mental models, and iteratively translate them into 
agent logic Participated in field testing, simulation environments, or real-world deployments in clinical, telemedicine, or public health settings Strong communication skills to present and defend model behavior to both technical and non-technical stakeholders Willingness to engage in design sprints, qualitative research, and in-clinic observations Empathy for users under pressure—your systems will support people making life-saving decisions Preferred Attributes Experience building for multilingual contexts or deploying in LMIC (Low- and Middle-Income Country) healthcare systems Strong understanding of AI safety, uncertainty quantification, and fallback design Familiarity with regulatory compliance in medical AI (e.g., SaMD, CDSCO, HIPAA, GDPR) Published research or open-source contributions in medical NLP, generative agents, or multi-agent LLM frameworks Experience: Up to 15 years overall, with 6–8 years of experience in AI/ML (with a strong focus on NLP/LLMs/RAG frameworks), including: At least 2–3 years in independently delivering production-grade AI systems At least 1–2 years in a tech-lead or mentoring capacity, ideally in a startup, research lab, or interdisciplinary team Hands-on experience with RAG/SAG pipelines, LLM fine-tuning, and agent orchestration, not just model usage Contact: Uday Mulya Technologies muday_bhaskar@yahoo.com "Mining The Knowledge Community"
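The "multi-agent LLM workflows that reflect real clinical roles" described above rest on a simple orchestration pattern: classify the intent of a query, then dispatch it to the agent owning that role. A heavily simplified, hypothetical sketch (keyword routing and canned responses stand in for the LLM, memory, and safety/fallback layers a real clinical system requires):

```python
# Toy role-based routing for a multi-agent clinical assistant.
# Each "agent" owns one clinical role; the orchestrator dispatches by intent.

AGENTS = {
    "triage": lambda q: "Assess urgency of: " + q,
    "pharmacist": lambda q: "Check interactions for: " + q,
}

def route(query):
    # Stand-in intent classifier; a real system would use an LLM with
    # role-specific context, memory, and a fallback path for low confidence.
    intent = "pharmacist" if "drug" in query.lower() else "triage"
    return intent, AGENTS[intent](query)

intent, answer = route("possible drug interaction with warfarin")
print(intent)  # pharmacist
```

The auditability and fallback requirements in the posting are exactly about making each hop of this dispatch loggable and overridable.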

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job title: Business Capability Manager Associate Location: Hyderabad About The Job Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference on patients’ daily life, wherever they live and enabling them to enjoy a healthier life. As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI) with strong commitment to develop advanced data standards to increase reusability & interoperability and thus accelerate impact on global health. The R&D Data Office serves as a cornerstone to this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We sit in partnership with Business and Digital, and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain. As an integral team member, you will be responsible for defining how R&D's structured, semi-structured and unstructured data will be stored, consumed, integrated / shared and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in the development of sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable). 
Position Summary The R&D Business Capability Manager Associate serves as the interface between Business and Digital/Data on foundational data capabilities or needs across R&D. This role will be responsible for identification of key use cases across R&D, high level data solutioning (e.g., data strategy, governance/standards, management, infrastructure needs, etc.), and oversight of day-to-day operations for their specific data capability. Main Responsibilities Work in collaboration with R&D Data Office leadership, business, and R&D Digital subject matter experts: Partner with key stakeholders across R&D functions to identify data-related needs and initiatives (e.g., data catalog, master and reference data management, data products, etc.) and design innovative data solutions to support business priorities. Partner with R&D Digital stakeholders to oversee data-related activities and verify data functionality from ingest through access Drive program management of initiatives and capabilities; ensure on-schedule/on-time delivery and proactive management of risks/issues Establish ways of working across all partner functions for specifically assigned capabilities/initiatives Annotate genomic variants (e.g., SNVs, indels, CNVs, SVs) using open-source or proprietary tools on relevant databases to assess functional and clinical relevance Integrate genomic annotations with pathway databases, transcriptomic data, and disease ontologies to generate hypotheses about target biology and mechanism of action Work with multi-omics data sets (e.g., WES, WGS, RNA-seq, scRNA-seq) and curate data pipelines to ensure quality, consistency, and reproducibility of annotations Partner with wet-lab scientists, data scientists, and therapeutic area teams to deliver actionable insights for target validation, biomarker discovery, and patient stratification Contribute to the development and maintenance of automated workflows, reproducible pipelines and databases Document methodologies and present
results in internal meetings, reports, and collaborative discussions with cross-functional stakeholders Deliverables Develops business case development, requirement identification, and use case development for business functions Implements business process definition, process performance, process execution, process management, and continuous improvement opportunities Maintain genomic databases, pipelines, tools and platforms undefined About You Experience: M.S + 5 years or Ph.D. +3 years hands-on experience in bioinformatics, preferably in a pharma, biotech, or translational research setting Proficient in scripting languages (e.g., Python, R, Bash) and bioinformatics tools for genomic annotation Solid understanding of human genetics, disease biology, and functional genomics data sets Familiarity with ontologies and biological knowledgebases (e.g., GO, KEGG, Reactome, DisGeNET) Experience working with NGS data and variant interpretation in the context of drug discovery or precision medicine Experience with workflow languages (e.g., Nextflow, Snakemake, WDL), Knowledge of clinical genomics standards (e.g., HGVS nomenclature, ACMG classification) Familiarity with target identification and validation pipelines or biomarker discovery programs Familiarity with Pharma R&D processes and technology, Ability to build business relationships and understand end-to-end data use and needs Diplomatic and stakeholder management skills across business, technology, and partners Demonstrated strong attention to detail, quality, time management and customer focus Excellent written and oral communications skills, Strong networking, influencing and negotiating skills and superior problem-solving skills Demonstrated willingness to make decisions and to take responsibility for such, Excellent interpersonal skills (team player) People management skills either in matrix or direct line function, M.S. or Ph.D. 
in Bioinformatics, Computational Biology, Genomics, or a related discipline null Pursue Progress . Discover Extraordinary . Join Sanofi and step into a new era of science - where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never-been-done-before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives? Let’s Pursue Progress and Discover Extraordinary – together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
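The variant annotation work this role describes (classifying SNVs and indels, then joining genes against disease ontologies) can be sketched in miniature. This is an illustrative example only, not any company's pipeline: the gene-to-term mapping and variant records below are invented toy data standing in for real GO/DisGeNET lookups.

```python
# Minimal sketch of the annotation step: classify a variant from its REF/ALT
# alleles, then attach any known ontology terms for the affected gene.

def classify_variant(ref: str, alt: str) -> str:
    """Label a variant as SNV, insertion, or deletion from its alleles."""
    if len(ref) == 1 and len(alt) == 1:
        return "SNV"
    if len(ref) < len(alt):
        return "insertion"
    return "deletion"

# Toy gene -> ontology-term map standing in for GO/DisGeNET queries.
GENE_TERMS = {
    "BRCA1": ["DNA repair", "breast carcinoma"],
    "TP53": ["apoptosis", "Li-Fraumeni syndrome"],
}

def annotate(variants):
    """Attach a variant class and any known ontology terms to each record."""
    return [
        {
            **v,
            "variant_class": classify_variant(v["ref"], v["alt"]),
            "terms": GENE_TERMS.get(v["gene"], []),
        }
        for v in variants
    ]

calls = [
    {"gene": "BRCA1", "ref": "A", "alt": "G"},   # single-base substitution
    {"gene": "TP53", "ref": "AT", "alt": "A"},   # one-base deletion
]
annotated = annotate(calls)
```

In practice the classification and lookups would come from established annotation tools and versioned ontology releases rather than hand-written rules, but the join shape is the same.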

Posted 2 months ago

Apply

2.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Data Science Manager - R&D

Qualification and Experience
M.Tech / M.E / MS / M.Sc / Ph.D in Computer Science or a related discipline (Applied Mathematics, Statistics, Electrical and/or Computer Engineering) with a focus on Artificial Intelligence / Machine Learning
A CGPA of 7.5/10 or above at both UG and PG level (70% or above, if graded in percentage)
2 years of experience

Responsibilities
Responsible for applied research, development, and validation of advanced machine learning and AI algorithms for solving complex real-world problems at scale
Work closely with the product team to understand business problems and product vision, and come up with innovative algorithmic solutions
Create prototypes and demonstrations to validate/establish innovative ideas, and translate research outcomes into productized innovation by working with AI engineers and/or software engineers
Create and maintain research plans, conduct experiments, consolidate and document results, and publish work as appropriate
Document and protect intellectual property (IP) generated from R&D efforts by collaborating with designated teams and/or external agencies
Mentor junior staff to ensure that correct procedures are followed; collaborate with stakeholders, academic/research partners, and peer researchers to deliver tangible outcomes

Requirements of the Role
The candidate is expected to be strong in the fundamentals of computer science as well as in analyzing and designing AI/machine learning algorithms
Hands-on experience in at least two of the following areas is required:
Supervised, unsupervised, and semi-supervised machine learning algorithms
Reinforcement learning
Deep learning and representation learning
Knowledge-based systems (knowledge representation, reasoning, and planning; FOL, ontologies, etc.)
Evolutionary computing
Probabilistic graphical models
Streaming, incremental, and non-stationary learning
Candidates should be strong in at least one programming language and have experience implementing AI/machine learning algorithms in Python or R
Experience with tools/frameworks/libraries such as Jupyter/Zeppelin, scikit-learn, matplotlib, pandas, TensorFlow, Keras, Apache Spark, etc. would be desirable
Applied research experience of at least 2.5 years on real-world problems using AI/machine learning techniques
At least one publication in a top-tier conference/journal related to AI/machine learning (e.g., ICML, AAAI, NIPS, ICCV, CVPR, KDD, SIGMOD, IJCNN, IJCAI, EMNLP, ICPR, ICDM, etc.) and/or patents
Experience contributing to open-source projects related to AI/machine learning would be a strong plus

Job Code: DSM_TVM
Location: Trivandrum
For more information, please mail to: recruitment@flytxt.com
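As a concrete baseline for the supervised-learning and scikit-learn skills the posting lists, a minimal train-and-evaluate loop looks like this. The data here is synthetic (generated on the fly), so it only illustrates the workflow, not any real problem:

```python
# Fit a scikit-learn classifier on synthetic data and measure hold-out
# accuracy: the basic supervised-learning loop the role expects fluency in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a small synthetic binary-classification problem.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Hold out a test split so the score reflects generalization, not memorization.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)  # fraction of correct hold-out predictions
```

The same pattern extends to the other listed tools: swap the estimator for a Keras or TensorFlow model, or run the split and fit over a Spark DataFrame for larger data.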

Posted 2 months ago

Apply

5.0 years

5 - 9 Lacs

Hyderābād

On-site

Job title: Business Capability Manager Associate
Location: Hyderabad

About the job
Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions, and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients’ daily lives, wherever they live, and enabling them to enjoy healthier lives. As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI), with a strong commitment to developing advanced data standards that increase reusability and interoperability and thus accelerate impact on global health.

The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We work in partnership with Business and Digital, and drive data needs across priority and transformative initiatives in R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain. As an integral team member, you will be responsible for defining how R&D's structured, semi-structured and unstructured data will be stored, consumed, integrated/shared, and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in developing sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).
Position Summary:
The R&D Business Capability Manager Associate serves as the interface between Business and Digital/Data on foundational data capabilities and needs across R&D. This role is responsible for identifying key use cases across R&D, high-level data solutioning (e.g., data strategy, governance/standards, management, infrastructure needs), and oversight of day-to-day operations for their specific data capability.

Main responsibilities:
Work in collaboration with R&D Data Office leadership, business, and R&D Digital subject matter experts to:
Partner with key stakeholders across R&D functions to identify data-related needs and initiatives (e.g., data catalog, master and reference data management, data products, etc.) and design innovative data solutions to support business priorities
Partner with R&D Digital stakeholders to oversee data-related activities and verify data functionality from ingest through access
Drive program management of initiatives and capabilities; ensure on-schedule/on-time delivery and proactive management of risks/issues
Establish ways of working across all partner functions for specifically assigned capabilities/initiatives
Annotate genomic variants (e.g., SNVs, indels, CNVs, SVs) using open-source or proprietary tools and relevant databases to assess functional and clinical relevance
Integrate genomic annotations with pathway databases, transcriptomic data, and disease ontologies to generate hypotheses about target biology and mechanism of action
Work with multi-omics data sets (e.g., WES, WGS, RNA-seq, scRNA-seq) and curate data pipelines to ensure quality, consistency, and reproducibility of annotations
Partner with wet-lab scientists, data scientists, and therapeutic area teams to deliver actionable insights for target validation, biomarker discovery, and patient stratification
Contribute to the development and maintenance of automated workflows, reproducible pipelines, and databases
Document methodologies and present results in internal meetings, reports, and collaborative discussions with cross-functional stakeholders

Deliverables
Develop business cases, requirement identification, and use cases for business functions
Implement business process definition, process performance, process execution, process management, and continuous improvement opportunities
Maintain genomic databases, pipelines, tools, and platforms

About you
Experience: M.S. + 5 years or Ph.D. + 3 years of hands-on experience in bioinformatics, preferably in a pharma, biotech, or translational research setting
Proficient in scripting languages (e.g., Python, R, Bash) and bioinformatics tools for genomic annotation
Solid understanding of human genetics, disease biology, and functional genomics data sets
Familiarity with ontologies and biological knowledgebases (e.g., GO, KEGG, Reactome, DisGeNET)
Experience working with NGS data and variant interpretation in the context of drug discovery or precision medicine
Experience with workflow languages (e.g., Nextflow, Snakemake, WDL)
Knowledge of clinical genomics standards (e.g., HGVS nomenclature, ACMG classification)
Familiarity with target identification and validation pipelines or biomarker discovery programs
Familiarity with Pharma R&D processes and technology; ability to build business relationships and understand end-to-end data use and needs
Diplomatic and stakeholder management skills across business, technology, and partners
Demonstrated strong attention to detail, quality, time management, and customer focus
Excellent written and oral communication skills; strong networking, influencing, and negotiating skills; superior problem-solving skills
Demonstrated willingness to make decisions and take responsibility for them; excellent interpersonal skills (team player)
People management skills, either in a matrix or direct-line function
M.S. or Ph.D. in Bioinformatics, Computational Biology, Genomics, or a related discipline

Posted 2 months ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Summary
Assist in the timely and professional ongoing management of Data Operations on use case/demand deliverables and of clinical data warehouse maintenance with respect to cost, quality, and timelines within the Clinical Pipeline team. Ensure high-quality data is available for secondary analysis use. Support content development and the upgrade of training modules into engaging and interactive applications. Follow data regulations and laws, data-handling procedures, and data mapping guidelines. Support quality deliverables within Clinical Data Operations (DO). Manage data load and transfer from the Novartis Clinical Data Lake and conform clinical trial data to SDTM/ADaM-compliant standards within the Clinical Data Warehouse. Support the delivery of quality data, processes, and documentation; a contributor role in ensuring that use cases/demands are executed efficiently with timely, high-quality deliverables.

About The Role
Major accountabilities:
Demonstrates potential for technical proficiency, scientific creativity, collaboration with others, and independent thought
Under supervision, provides input into writing specifications for use cases/demands and necessary reports to ensure high-quality and consistent data
Involved in user acceptance testing (UAT) and managing data mapping activities to maintain the Clinical Data Warehouse
Under supervision, participates in ongoing review of all data generated from different sources
Supports the development of communications for initiatives
Performs hands-on activities to conduct data quality assessments
Under supervision, creates and learns relevant data dictionaries, ontologies, and vocabularies
Reports technical complaints / special case scenarios related to Novartis data
Collaborates with other data engineering teams to ensure consistent CDISC-based data standards are applied
Familiar with all clinical study documents from protocol to CSR, including Data Management and Biostatistics documents
Key Performance Indicators
Achieve a high level of quality, timeliness, cost efficiency, and customer satisfaction across Clinical Data Operations activities and deliverables
No critical data findings due to Data Operations
Adherence to Novartis policy, data standards, and guidelines
Customer / partner / project feedback and satisfaction

Minimum Requirements
Work experience:
3-5 years of experience in clinical trials data reporting
Collaborating across boundaries
Knowledge of clinical data
Availability of sufficient information to find and understand data
Availability of data quality assessments
Experience in an Agile way of working would be a plus

Skills
CDISC SDTM/ADaM mapping
Clinical data management; experience working with different legacy, historical, and local data standards
Basic SQL knowledge; Python skills would be a plus
Able to work in a worldwide team
Data privacy, data operations, data science, databases
Detail oriented

Languages
English

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
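The SDTM/ADaM mapping skill this role centers on is, at its core, renaming and deriving variables per a mapping specification. A hedged sketch, with invented source column names and a toy mapping spec (real specs are study-specific and far larger), conforming raw demographics rows toward an SDTM-style DM domain:

```python
# Sketch of a conform step: rename raw EDC fields to SDTM DM (Demographics)
# variables and derive STUDYID/USUBJID. Column names here are hypothetical.

RAW_TO_SDTM = {            # mapping spec: source column -> SDTM variable
    "subject_id": "SUBJID",
    "birth_date": "BRTHDTC",
    "sex_code": "SEX",
}

def conform_dm(raw_rows, study_id: str):
    """Map raw rows to SDTM-style DM records and derive the unique subject key."""
    dm = []
    for row in raw_rows:
        rec = {RAW_TO_SDTM[k]: v for k, v in row.items() if k in RAW_TO_SDTM}
        rec["STUDYID"] = study_id
        # USUBJID is conventionally unique across studies: study + subject.
        rec["USUBJID"] = f"{study_id}-{rec['SUBJID']}"
        dm.append(rec)
    return dm

dm_rows = conform_dm(
    [{"subject_id": "001", "birth_date": "1980-02-01", "sex_code": "F"}],
    study_id="ABC123",
)
```

Production pipelines drive this from a governed mapping specification and validate the output against the CDISC controlled terminology, but the rename-and-derive shape is the same.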

Posted 2 months ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Company Description
At OneDose, we’re transforming medication management with advanced AI and data-driven solutions. Our mission: to make every dose smarter, safer, and more accessible, at scale. Patients often miss medications due to cost, availability, or allergies. This is a complex clinical and supply chain challenge that requires seamless data integration, real-time intelligence, and precision recommendations.
Integrate formulary data, supplier inventories, salt compositions, and clinical guidelines into a unified ontology.
Build a clinical decision support system that automatically suggests
Deploy real-time recommendation pipelines using Foundry’s Code Repositories and Contour (ML orchestration layer).

Role Description
This is a full-time, on-site role for a Palantir Foundry Developer located in Jaipur. The Palantir Foundry Developer will be responsible for building and maintaining data integration pipelines, developing analytical models, and optimizing data workflows using Palantir Foundry. Day-to-day tasks include collaborating with cross-functional teams, troubleshooting data-related issues, and ensuring data quality and compliance with industry standards.

Qualifications
Deep expertise in Palantir Foundry, from data integration to operational app deployment
Strong experience in building data ontologies, data pipelines (PySpark, Python), and production-grade ML workflows
Solid understanding of clinical or healthcare data (medication data, EHRs, or pharmacy systems) is a huge plus
Ability to architect scalable, secure, and compliant data solutions for highly regulated environments
Passion for solving high-impact healthcare problems using advanced technology
Bachelor's degree in Computer Science, Data Science, or a related field

Why join OneDose?
Impact at scale: Your work will directly improve access to medication and patient outcomes across India and globally.
Cutting-edge stack: Work hands-on with Palantir Foundry, advanced AI models, and scalable cloud-native architectures.
Ownership & growth: Join an agile, high-energy team where you can lead, innovate, and shape the future of healthcare.
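The decision-support idea the posting describes (matching formulary, inventory, salt composition, and patient constraints) can be sketched independently of any Foundry API. Everything below is invented illustration: a toy formulary and a hypothetical `suggest_alternatives` helper, not OneDose's actual system.

```python
# Toy clinical-decision-support rule: suggest in-stock brands that share a
# salt composition while avoiding the patient's known allergies.

FORMULARY = [
    {"brand": "PainRelief-A", "salt": "paracetamol", "in_stock": True},
    {"brand": "PainRelief-B", "salt": "paracetamol", "in_stock": False},
    {"brand": "Ibugesic", "salt": "ibuprofen", "in_stock": True},
]

def suggest_alternatives(salt: str, allergies: set):
    """Return in-stock brands with the requested salt, excluding allergy conflicts."""
    return [
        d["brand"]
        for d in FORMULARY
        if d["salt"] == salt and d["in_stock"] and d["salt"] not in allergies
    ]

options = suggest_alternatives("paracetamol", allergies={"ibuprofen"})
```

In a Foundry deployment this logic would sit over ontology-backed datasets and run inside a pipeline rather than over an in-memory list, but the join-and-filter shape is what the unified ontology enables.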

Posted 2 months ago

Apply