
248 Ontologies Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

0 Lacs

India

Remote

This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Product Manager — Food Intelligence (fully remote, anywhere).

We are seeking a highly experienced Senior Product Manager to lead the Food Intelligence stream, connecting machine learning models, food ontologies, backend services, and user-facing experiences into a seamless nutrition-tracking solution. In this role, you will oversee the product lifecycle, drive cross-functional alignment, and translate complex technical insights into features that delight users. You will collaborate closely with engineering, design, ML, and data teams while managing relationships with key stakeholders. This role offers high visibility and impact within a fast-moving, remote-first environment, empowering you to shape next-generation food and nutrition experiences for a global audience.

Accountabilities:
- Own the end-to-end product lifecycle for Food Intelligence features, including Vision AI, Meal Session Detection, the food tracker, and nutrition insights
- Translate user research and design insights into actionable product decisions that balance user delight, technical feasibility, and business goals
- Partner with ML, data ontology, backend, and design teams to deliver intelligence-driven experiences
- Lead roadmap planning, backlog management, and delivery across engineering, design, research, and QA
- Serve as the primary point of contact for external stakeholders, managing day-to-day requirements and integration details
- Ensure alignment between product vision, technical capabilities, and business objectives
- Balance speed, quality, and stakeholder expectations while navigating complex organizational structures

Requirements:
- 5-8 years of product management experience, with 2-4 years in a senior role
- Experience with ML/AI-driven products or complex data systems preferred
- Strong user-centric mindset and experience collaborating with design and research teams
- Ability to operate at the intersection of technical systems and user-facing applications
- Proven ability to manage multiple stakeholders in complex organizations with resilience and diplomacy
- Excellent communication and influencing skills across teams and cultures
- Experience in health, wellness, or nutrition is a plus

Benefits:
- Full-time, contract-based role with a long-term commitment
- 100% remote work from anywhere
- A supportive, high-performing, globally distributed team
- High-visibility, impactful projects shaping the future of food intelligence
- Flexible, autonomous, and collaborative work culture
- Recognition as a Most Loved Workplace® for outstanding team culture

Jobgether is a talent-matching platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching. When you apply, your profile goes through our AI-powered screening process, designed to identify top talent efficiently and fairly:
- 🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
- 📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
- 🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
- 🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.

The process is transparent, skills-based, and free of bias, focusing solely on your fit for the role. Once the shortlist is complete, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are made by their internal hiring team. Thank you for your interest!

Posted 13 hours ago

Apply

3.0 years

15 - 32 Lacs

Noida, Uttar Pradesh, India

On-site

About Us:
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services, and we partner with our customers to monetize their data and make enterprise data dance.

Our Values:
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of life for our families, customers, partners, and the community.

Equal Opportunity Statement:
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, or national origin. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.

Job Summary:
We are seeking a highly innovative and skilled AI Engineer to join our AI CoE for the Data Integration Project. The ideal candidate will be responsible for designing, developing, and deploying intelligent assets and AI agents that automate and optimize various stages of the data ingestion and integration pipeline. This role requires expertise in machine learning, natural language processing (NLP), knowledge representation, and cloud platform services, with a strong focus on building scalable and accurate AI solutions.

Key Responsibilities:
- LLM-based auto-schematization: Develop and refine LLM-based models and techniques for automatically inferring schemas from diverse unstructured and semi-structured public datasets and mapping them to a standardized vocabulary.
- Entity resolution & ID generation AI: Design and implement AI models for highly accurate entity resolution, matching new entities with existing IDs and generating unique, standardized IDs for newly identified entities.
- Automated data profiling & schema detection: Develop AI/ML accelerators for automated data profiling, pattern detection, and schema detection to understand data structure and quality at scale.
- Anomaly detection & smart imputation: Create AI-powered solutions for identifying outliers, inconsistencies, and corrupt records, and for intelligently filling missing values using machine learning algorithms.
- Multilingual data integration AI: Develop AI assets for accurately interpreting, translating (leveraging automated tools with human-in-the-loop validation), and semantically mapping data from diverse linguistic sources, preserving meaning and context.
- Validation automation & error pattern recognition: Build AI agents to run comprehensive data validation checks, identify common error types, suggest fixes, and automate common error corrections.
- Knowledge graph RAG/RIG integration: Integrate Retrieval-Augmented Generation (RAG) and Retrieval-Augmented Indexing (RIG) techniques to enhance querying capabilities and facilitate consistency checks within the knowledge graph.
- MLOps implementation: Implement and maintain MLOps practices for the lifecycle management of AI models, including versioning, deployment, monitoring, and retraining on a relevant AI platform.
- Code generation & documentation automation: Develop AI tools for generating reusable scripts, templates, and comprehensive import documentation to streamline development.
- Continuous improvement systems: Design and build learning systems, feedback loops, and error-analytics mechanisms to continuously improve the accuracy and efficiency of AI-powered automation over time.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- Proven experience (e.g., 3+ years) as an AI/ML Engineer, with a strong portfolio of deployed AI solutions.
- Strong expertise in Natural Language Processing (NLP), including experience with Large Language Models (LLMs) and their applications in data processing.
- Proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with cloud AI/ML services.
- Understanding of knowledge representation, ontologies (e.g., Schema.org, RDF), and knowledge graphs.
- Experience with data quality, validation, and anomaly detection techniques.
- Familiarity with MLOps principles and practices for model deployment and lifecycle management.
- Strong problem-solving skills and an ability to translate complex data challenges into AI solutions.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Experience with data integration projects, particularly with large-scale public datasets.
- Familiarity with knowledge graph initiatives.
- Experience with multilingual data processing and AI.
- Contributions to open-source AI/ML projects.
- Experience in an Agile development environment.

Benefits:
- Opportunity to work on a high-impact project at the forefront of AI and data integration.
- Contribute to solidifying a leading data initiative's role as a foundational source for grounding large models.
- Access to cutting-edge cloud AI technologies.
- Collaborative, innovative, and fast-paced work environment.
- Significant impact on data quality and operational efficiency.

Skills: Natural Language Processing (NLP), Large Language Model (LLM) tuning, Machine Learning (ML), Retrieval-Augmented Generation (RAG), Python, Generative AI
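The entity-resolution and ID-generation responsibility above reduces to a simple loop: match an incoming name against known entities, and mint a new ID when no match clears a threshold. Below is a minimal sketch using stdlib fuzzy string matching; the ENT-xxxxx ID scheme and the 0.85 threshold are illustrative assumptions, and a production system would use learned matchers rather than raw string similarity.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lowercase and strip punctuation so "Acme Corp." matches "acme corp"
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def resolve_entity(candidate: str, known: dict, threshold: float = 0.85):
    """Return the existing ID for the closest known entity, or mint a new one."""
    cand = normalize(candidate)
    best_id, best_score = None, 0.0
    for name, entity_id in known.items():
        score = SequenceMatcher(None, cand, normalize(name)).ratio()
        if score > best_score:
            best_id, best_score = entity_id, score
    if best_score >= threshold:
        return best_id, False              # matched an existing entity
    new_id = f"ENT-{len(known) + 1:05d}"   # hypothetical ID scheme
    known[candidate] = new_id
    return new_id, True                    # newly minted ID

known = {"Acme Corporation": "ENT-00001"}
print(resolve_entity("acme corporation.", known))  # matches the existing ID
print(resolve_entity("Globex Industries", known))  # mints a new ID
```

The returned boolean lets downstream steps distinguish matched entities from newly created ones, which matters when IDs feed a knowledge graph.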

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

On-site

Note: Please do not apply if your salary expectations are higher than the provided salary range or your experience is less than 3 years. If you have travel-industry experience and have worked on hotel, car rental, or ferry booking before, then we can negotiate the package.

Company Description:
Our company has been promoting Greece for the last 25 years through travel sites visited from all around the world, with 10 million visitors per year, such as www.greeka.com and www.ferriesingreece.com. Through these websites, we provide a range of travel services for a seamless holiday experience, such as online car rental reservations, ferry tickets, transfers, and tours.

Role Description:
We are seeking a highly skilled Artificial Intelligence / Machine Learning Engineer to join our dynamic team. You will work closely with our development team and QAs to deliver cutting-edge solutions that improve our candidate screening and employee onboarding processes.

Major Responsibilities & Job Requirements:
• Develop and implement NLP/LLM models.
• Minimum of 3-4 years of experience as an AI/ML Developer or similar role, with demonstrable expertise in computer vision techniques.
• Develop and implement AI models using Python, TensorFlow, and PyTorch.
• Proven experience in computer vision, including fine-tuning OCR models (e.g., Tesseract, LayoutLMv3, EasyOCR, PaddleOCR, or custom-trained models).
• Strong understanding and hands-on experience with RAG (Retrieval-Augmented Generation) architectures and pipelines for building intelligent Q&A, document summarization, and search systems.
• Experience working with LangChain, LLM agents, and chaining tools to build modular and dynamic LLM workflows.
• Familiarity with agent-based frameworks and orchestration of multi-step reasoning with tools, APIs, and external data sources.
• Familiarity with cloud AI solutions from providers such as IBM, Azure, Google, and AWS.
• Work on natural language processing (NLP) tasks and create large language model (LLM) applications.
• Design and maintain SQL databases for storing and retrieving data efficiently.
• Utilize machine learning and deep learning techniques to build predictive models.
• Collaborate with cross-functional teams to integrate AI solutions into existing systems.
• Stay updated with the latest advancements in AI technologies, including ChatGPT, Gemini, Claude, and big data solutions.
• Write clean, maintainable, and efficient code.
• Handle large datasets and perform big data analysis to extract valuable insights.
• Fine-tune pre-trained LLMs on specific types of data and ensure optimal performance.
• Proficiency in cloud services from Amazon AWS.
• Extract and parse text from CVs, application forms, and job descriptions using advanced NLP techniques such as Word2Vec, BERT, and GPT-NER.
• Develop similarity functions and matching algorithms to align candidate skills with job requirements.
• Experience with microservices, Flask, FastAPI, and Node.js.
• Expertise in Spark and PySpark for big data processing.
• Knowledge of advanced techniques such as SVD/PCA, LSTM, and NeuralProphet.
• Apply debiasing techniques to ensure fairness and accuracy in the ML pipeline.
• Experience in coordinating with clients to understand their needs and delivering AI solutions that meet their requirements.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
• In-depth knowledge of NLP techniques and libraries, including Word2Vec, BERT, GPT, and others.
• Experience with database technologies and vector representations of data.
• Familiarity with similarity functions and distance metrics used in matching algorithms.
• Ability to design and implement custom ontologies and classification models.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
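The "similarity functions and matching algorithms" responsibility above can be illustrated with a small sketch: score each candidate's skill set against the job's by Jaccard overlap and rank the results. This is a toy stand-in for the embedding-based matching (Word2Vec/BERT) the posting describes; the names and skill lists are invented.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |A intersect B| / |A union B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(job_skills, candidates):
    """Rank candidate profiles by skill overlap with the job's skill set."""
    job = {s.lower() for s in job_skills}
    scored = [
        (name, jaccard(job, {s.lower() for s in skills}))
        for name, skills in candidates.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

job = ["Python", "NLP", "BERT", "SQL"]
candidates = {
    "Candidate A": ["python", "nlp", "bert", "docker"],
    "Candidate B": ["java", "sql"],
}
print(rank_candidates(job, candidates))
```

Swapping `jaccard` for cosine similarity over dense embeddings changes nothing structurally, which is why pipelines like this usually start with a set-overlap baseline.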

Posted 3 days ago

Apply

9.0 years

0 Lacs

Hyderabad

On-site

What We’re Looking For:
Must have 9+ years of application development experience in the following:
- Java 17
- Spring, Spring Boot
- REST APIs, multi-threading
- SQL Server database

Additional requirements:
- Experience using AI coding assistants (e.g., GitHub Copilot, CodeWhisperer) to accelerate development while maintaining high code quality.
- Proficiency in developing and implementing MCP servers to enable seamless integration between AI assistants and external systems, tools, and data sources.
- Strong familiarity with agentic AI principles, including autonomous decision-making systems, adaptive learning mechanisms, and intelligent agent architectures that can operate independently while learning from interactions.
- Advanced skills in prompt engineering techniques and model fine-tuning to optimize AI performance for specific use cases and domain requirements.
- Hands-on experience with leading agentic frameworks such as LangChain, Semantic Kernel, CrewAI, and similar platforms for building sophisticated AI agent systems and workflows.
- Understanding of RDF (Resource Description Framework), the SPARQL query language, and SHACL (Shapes Constraint Language) for data validation and modeling.
- Experience with semantic graph databases, particularly Stardog, and domain-specific ontologies, including the FIBO/CDM frameworks.
- Strong knowledge of Java distributed computing technologies, Spring, REST, and modern Java web technologies.
- Solid OO programming skills, including object-oriented design patterns, and strong opinions on best programming practices.
- Well versed in continuous integration and continuous delivery tools and techniques.
- Improve and maintain continuous deployment methodologies, including working with SQA teams to enforce unit, regression, and integration testing.
- Work closely with analysts to gather business requirements, and develop and deliver highly scalable, numerically intensive financial applications.
- Validate developed solutions to ensure that requirements are met and the results meet the business needs.

Personal competencies:
- Strong analytical, investigative, and problem-solving skills.
- Commitment to producing quality work in a timely manner.
- Confident, articulate, and a fast learner.
- Willing to progress in an exciting, fast-paced environment.
- Self-starter with a natural curiosity to learn and develop capabilities.
- Strong oral and written communication skills.
- Interpersonal skills and the ability to work cooperatively with cross-functional teams.
- Strong team player who is comfortable working on a variety of projects using diverse technologies.
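For readers unfamiliar with the RDF/SPARQL requirement above, the core idea is querying a graph of (subject, predicate, object) triples by pattern. Below is a minimal plain-Python stand-in, with `None` playing the role of a SPARQL variable; the FIBO-flavored terms are illustrative, and real work would use a store such as Stardog with actual SPARQL.

```python
# Triples: (subject, predicate, object), mirroring an RDF graph.
triples = [
    ("fibo:BondA", "rdf:type", "fibo:DebtInstrument"),
    ("fibo:BondA", "fibo:hasMaturityDate", "2030-01-01"),
    ("fibo:EquityX", "rdf:type", "fibo:Share"),
]

def match(pattern, graph):
    """Match a triple pattern against the graph; None acts like a SPARQL variable."""
    return [
        t for t in graph
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# Analogue of: SELECT ?s WHERE { ?s rdf:type fibo:DebtInstrument }
print(match((None, "rdf:type", "fibo:DebtInstrument"), triples))
```

Every SPARQL basic graph pattern decomposes into such triple matches joined on shared variables, which is why triple stores index all three positions.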

Posted 3 days ago

Apply

6.0 - 9.0 years

0 Lacs

Hyderabad

On-site

Must have 6-9 years of application development experience in the following:
- Java 17
- Spring, Spring Boot
- REST APIs, multi-threading
- SQL Server database

Additional requirements:
- Experience using AI coding assistants (e.g., GitHub Copilot, CodeWhisperer) to accelerate development while maintaining high code quality.
- Proficiency in developing and implementing MCP servers to enable seamless integration between AI assistants and external systems, tools, and data sources.
- Strong familiarity with agentic AI principles, including autonomous decision-making systems, adaptive learning mechanisms, and intelligent agent architectures that can operate independently while learning from interactions.
- Advanced skills in prompt engineering techniques and model fine-tuning to optimize AI performance for specific use cases and domain requirements.
- Understanding of RDF (Resource Description Framework), the SPARQL query language, and SHACL (Shapes Constraint Language) for data validation and modeling.
- Experience with semantic graph databases, particularly Stardog, and domain-specific ontologies, including the FIBO/CDM frameworks.
- Strong knowledge of Java distributed computing technologies, Spring, REST, and modern Java web technologies.
- Solid OO programming skills, including object-oriented design patterns, and strong opinions on best programming practices.
- Well versed in continuous integration and continuous delivery tools and techniques.
- Improve and maintain continuous deployment methodologies, including working with SQA teams to enforce unit, regression, and integration testing.
- Work closely with analysts to gather business requirements, and develop and deliver highly scalable, numerically intensive financial applications.
- Validate developed solutions to ensure that requirements are met and the results meet the business needs.

Personal competencies:
- Strong analytical, investigative, and problem-solving skills.
- Commitment to producing quality work in a timely manner.
- Confident, articulate, and a fast learner.
- Willing to progress in an exciting, fast-paced environment.
- Self-starter with a natural curiosity to learn and develop capabilities.
- Strong oral and written communication skills.
- Interpersonal skills and the ability to work cooperatively with cross-functional teams.
- Strong team player who is comfortable working on a variety of projects using diverse technologies.
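The SHACL requirement in the posting above amounts to validating that nodes of a given type carry required properties. Here is a toy shape check in plain Python, loosely in the spirit of a SHACL `sh:minCount 1` constraint; the entity names are invented, and a real pipeline would run a SHACL engine over an actual RDF graph.

```python
# A toy "shape" check: every entity typed fibo:DebtInstrument must
# carry a fibo:hasMaturityDate property.
triples = {
    ("ex:BondA", "rdf:type", "fibo:DebtInstrument"),
    ("ex:BondA", "fibo:hasMaturityDate", "2030-01-01"),
    ("ex:BondB", "rdf:type", "fibo:DebtInstrument"),  # missing maturity date
}

def validate_shape(graph, target_type, required_pred):
    """Return subjects of `target_type` that lack `required_pred`."""
    subjects = {s for s, p, o in graph if p == "rdf:type" and o == target_type}
    satisfied = {s for s, p, o in graph if p == required_pred}
    return sorted(subjects - satisfied)

violations = validate_shape(triples, "fibo:DebtInstrument", "fibo:hasMaturityDate")
print(violations)  # ['ex:BondB']
```

A SHACL engine generalizes exactly this: shapes target a class of nodes and report a validation result per constraint violation.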

Posted 3 days ago

Apply


0.0 - 3.0 years

12 - 24 Lacs

Selaiyur, Chennai, Tamil Nadu

On-site

We are looking for a forward-thinking Data Scientist with expertise in Natural Language Processing (NLP), Large Language Models (LLMs), prompt engineering, and knowledge graph construction. You will be instrumental in designing intelligent NLP pipelines involving Named Entity Recognition (NER), relationship extraction, and semantic knowledge representation. The ideal candidate will also have practical experience in deploying Python-based APIs for model and service integration. This is a hands-on, cross-functional role where you'll work at the intersection of cutting-edge AI models and domain-driven knowledge extraction.

Key Responsibilities:
- Develop and fine-tune LLM-powered NLP pipelines for tasks such as NER, coreference resolution, entity linking, and relationship extraction.
- Design and build knowledge graphs by structuring information from unstructured or semi-structured text.
- Apply prompt engineering techniques to improve LLM performance in few-shot, zero-shot, and fine-tuned scenarios.
- Evaluate and optimize LLMs (e.g., OpenAI GPT, Claude, LLaMA, Mistral, or Falcon) for custom domain-specific NLP tasks.
- Build and deploy Python APIs (using Flask/FastAPI) to serve ML/NLP models and access data from a graph database.
- Collaborate with teams to translate business problems into structured use cases for model development.
- Understand custom ontologies and entity schemas for the corresponding domain.
- Work with graph databases like Neo4j or similar, querying with Cypher or SPARQL.
- Evaluate and track performance using both standard metrics and graph-based KPIs.

Required Skills & Qualifications:
- Strong programming experience in Python and libraries such as PyTorch, TensorFlow, spaCy, scikit-learn, Hugging Face Transformers, LangChain, and the OpenAI APIs.
- Deep understanding of NER, relationship extraction, coreference resolution, and semantic parsing.
- Practical experience in working with or integrating LLMs for NLP applications, including prompt engineering and prompt tuning.
- Hands-on experience with graph database design and knowledge graph generation.
- Proficiency in Python API development (Flask/FastAPI) for serving models and utilities.
- Strong background in data preprocessing, text normalization, and annotation frameworks.
- Understanding of LLM orchestration with tools like LangChain or workflow automation.
- Familiarity with version control, ML lifecycle tools (e.g., MLflow), and containerization (Docker).

Nice to Have:
- Experience using LLMs for information extraction, summarization, or question answering over knowledge bases.
- Exposure to graph embeddings, GNNs, or semantic web technologies (RDF, OWL).
- Experience with cloud-based model deployment (AWS/GCP/Azure).
- Understanding of retrieval-augmented generation (RAG) pipelines and vector databases (e.g., Chroma, FAISS, Pinecone).

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,400,000.00 per year
Ability to commute/relocate: Selaiyur, Chennai, Tamil Nadu: Reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Natural Language Processing (NLP): 3 years (Preferred)
Language: English & Tamil (Preferred)
Location: Selaiyur, Chennai, Tamil Nadu (Preferred)
Work Location: In person
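The NER-to-knowledge-graph pipeline described above typically ends with extracted (head, relation, tail) triples loaded into a store like Neo4j. A minimal sketch of that last step in plain Python, with a Cypher-like neighbor lookup; the extracted triples are hypothetical pipeline output, not a real extraction result.

```python
from collections import defaultdict

# Hypothetical output of an NER + relationship-extraction pipeline.
extracted = [
    ("Marie Curie", "WORKS_AT", "University of Paris"),
    ("Marie Curie", "WON", "Nobel Prize in Physics"),
    ("Pierre Curie", "WON", "Nobel Prize in Physics"),
]

def build_graph(relations):
    """Index triples by head node for fast outgoing-edge lookups."""
    graph = defaultdict(list)
    for head, rel, tail in relations:
        graph[head].append((rel, tail))
    return graph

def neighbors(graph, node, rel=None):
    """Cypher-like lookup: MATCH (node)-[rel]->(x) RETURN x."""
    return [t for r, t in graph[node] if rel is None or r == rel]

g = build_graph(extracted)
print(neighbors(g, "Marie Curie", "WON"))
```

In Neo4j the same query would be a one-line Cypher `MATCH` clause; the point is that extraction quality, not graph querying, is usually the hard part of the pipeline.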

Posted 4 days ago

Apply


8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Role Overview: You are a seasoned Data Science Engineer responsible for spearheading the development of intelligent, autonomous AI systems. Your role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. Key Responsibilities: - Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. - Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. - Fine-Tune Language Models: Customise LLMs (e.g., Gemini, chatgpt, Llama) and SLMs (e.g., Spacy, NLTK) using domain-specific data to improve performance and relevance in specialised applications. - NER Models: Train OCR and NLP leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). - Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. - Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. - Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Qualifications Required: - 8+ years of professional experience in AI/ML development, with a focus on agentic AI systems. - Proficient in Python, Python API frameworks, SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. - Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). - Experience with LLMs (e.g., GPT-4), SLMs (Spacy), and prompt engineering. Understanding of semantic technologies, ontologies, and RDF/SPARQL. 
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.
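The RAG pipelines named in the responsibilities follow a simple shape: embed the query, rank stored documents by vector similarity, and pass the top hits to the LLM as context. A minimal, illustrative sketch of the retrieval step only; a toy bag-of-words similarity stands in for a real embedding model and a vector database such as Milvus or FAISS, and the documents are invented:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model and store vectors in Milvus or FAISS.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and return the top-k,
    # which would then be supplied to the LLM as grounding context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Milvus is a vector database for similarity search",
    "Neo4j stores nodes and relationships as a property graph",
    "SPARQL queries RDF triples in a knowledge graph",
]
print(retrieve("vector database similarity search", docs))
```

Swapping `embed` for a real embedding model and `retrieve` for an approximate nearest-neighbour index is what turns this sketch into the production pattern the posting describes.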

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana, india

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
- Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data; knowledge of the Medallion Architecture is an added advantage.
- Build and manage graph database solutions (e.g., Neo4j, Stardog, Amazon Neptune) to support knowledge graphs, relationship modeling, and inference use cases.
- Leverage SPARQL, Cypher, or Gremlin to query and analyze data within graph ecosystems.
- Implement and maintain data ontologies to support semantic interoperability and consistent data classification.
- Collaborate with architects to integrate ontology models with metadata repositories and business glossaries.
- Support data governance and metadata management through integration of lineage, quality rules, and ontology mapping.
- Contribute to data cataloging and knowledge graph implementations using RDF, OWL, or similar technologies.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Apply data engineering best practices including CI/CD, version control, and code modularity.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications: Master’s/Bachelor’s degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-have Skills:
- Bachelor’s or master’s degree in computer science, Data Science, or a related field.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); Python for workflow orchestration; performance tuning on big data processing.
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Strong programming skills in Python, PySpark, and SQL.
- Solid experience designing and querying graph databases (e.g., AllegroGraph, MarkLogic).
- Proficiency with ontology languages and tools (e.g., TopBraid, RDF, OWL, Protégé, SHACL).
- Familiarity with SPARQL and/or Cypher for querying semantic and property graphs.
- Experience working with cloud data services (Azure, AWS, or GCP).
- Strong understanding of data modeling, entity relationships, and semantic interoperability.
Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
- Graph-DB-related certifications.

Professional Certifications: AWS Certified Data Engineer preferred; Databricks Certificate preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
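The graph querying this role describes (SPARQL over RDF, Cypher over property graphs) ultimately comes down to matching triple patterns in which some positions are variables. A toy, self-contained Python sketch of that idea; the triples and prefixes are invented for illustration, and real work would run actual SPARQL against a store such as Stardog or Amazon Neptune:

```python
# A tiny in-memory triple store: each fact is a (subject, predicate,
# object) triple, mirroring the RDF data model mentioned above.
triples = {
    ("ex:aspirin", "rdf:type", "ex:Drug"),
    ("ex:aspirin", "ex:treats", "ex:Inflammation"),
    ("ex:enbrel", "rdf:type", "ex:Drug"),
    ("ex:enbrel", "ex:treats", "ex:Inflammation"),
    ("ex:Inflammation", "rdf:type", "ex:Condition"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None behaves like a SPARQL variable."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Analogue of: SELECT ?drug WHERE { ?drug ex:treats ex:Inflammation }
drugs = sorted(t[0] for t in match(p="ex:treats", o="ex:Inflammation"))
print(drugs)  # ['ex:aspirin', 'ex:enbrel']
```

Inference use cases extend the same idea: derived triples (e.g., subclass closures under OWL semantics) are added to the store before pattern matching runs.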

Posted 5 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
- Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data; knowledge of the Medallion Architecture is an added advantage.
- Build and manage graph database solutions (e.g., Neo4j, Stardog, Amazon Neptune) to support knowledge graphs, relationship modeling, and inference use cases.
- Leverage SPARQL, Cypher, or Gremlin to query and analyze data within graph ecosystems.
- Implement and maintain data ontologies to support semantic interoperability and consistent data classification.
- Collaborate with architects to integrate ontology models with metadata repositories and business glossaries.
- Support data governance and metadata management through integration of lineage, quality rules, and ontology mapping.
- Contribute to data cataloging and knowledge graph implementations using RDF, OWL, or similar technologies.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Apply data engineering best practices including CI/CD, version control, and code modularity.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications: Master’s/Bachelor’s degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-have Skills:
- Bachelor’s or master’s degree in computer science, Data Science, or a related field.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); Python for workflow orchestration; performance tuning on big data processing.
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Strong programming skills in Python, PySpark, and SQL.
- Solid experience designing and querying graph databases (e.g., AllegroGraph, MarkLogic).
- Proficiency with ontology languages and tools (e.g., TopBraid, RDF, OWL, Protégé, SHACL).
- Familiarity with SPARQL and/or Cypher for querying semantic and property graphs.
- Experience working with cloud data services (Azure, AWS, or GCP).
- Strong understanding of data modeling, entity relationships, and semantic interoperability.
Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
- Graph-DB-related certifications.

Professional Certifications: AWS Certified Data Engineer preferred; Databricks Certificate preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 5 days ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

hyderābād

On-site

Summary Assist in the timely and professional ongoing management of Data Operations on Use Case/Demand deliverables and of clinical data warehouse maintenance with respect to cost, quality, and timelines within the Clinical Pipeline team. Ensure high-quality data is available for secondary analysis use. Support content development and the upgrade of training modules into engaging and interactive applications. Follows data regulations and laws, data-handling procedures, and data mapping guidelines. Supports quality deliverables within Clinical Data Operations (DO). Manage data load and transfer from the Novartis Clinical Data Lake and conformance of clinical trial data to SDTM/ADaM-compliant standards within the Clinical Data Warehouse. Supports the delivery of quality data, processes, and documentation; a contributor role in ensuring that use cases/demands are executed efficiently with timely and high-quality deliverables.

About the Role
Major accountabilities:
- Demonstrates potential for technical proficiency, scientific creativity, collaboration with others, and independent thought.
- Under supervision, provides input into writing specifications for use cases/demands and necessary reports to ensure high-quality and consistent data.
- Involved in User Acceptance Testing (UAT) and managing data mapping activities to maintain the Clinical Data Warehouse.
- Under supervision, participates in ongoing review of all data generated from different sources.
- Supports the development of communications for initiatives.
- Performs hands-on activities to conduct data quality assessments.
- Creates, under supervision, and learns relevant data dictionaries, ontologies, and vocabularies.
- Reports technical complaints / special case scenarios related to Novartis data.
- Collaborates with other data engineering teams to ensure consistent CDISC-based data standards are applied.
- Is familiar with all clinical study documents from protocol to CSR, including Data Management and Biostatistics documents.
Key performance indicators:
- Achieve a high level of quality, timeliness, cost efficiency, and customer satisfaction across Clinical Data Operations activities and deliverables.
- No critical data findings due to Data Operations.
- Adherence to Novartis policy, data standards, and guidelines.
- Customer / partner / project feedback and satisfaction.

Minimum Requirements:
Work Experience:
- 3-5 years of experience working in clinical trials data reporting.
- Collaborating across boundaries.
- Knowledge of clinical data.
- Availability of sufficient information to find and understand data.
- Availability of data quality assessments.
- Experience in an Agile way of working would be a plus.

Skills:
- CDISC SDTM/ADaM mapping.
- Clinical Data Management.
- Experience working with different legacy, historical, and local data standards.
- Basic SQL knowledge; Python skills would be a plus.
- Able to work in a worldwide team.
- Data Privacy, Data Operations, Data Science, Databases, Detail Oriented.

Languages: English.
Skills Desired: Clinical Data Management, Databases, Data Entry, Data Management, Data Science, Detail-Oriented

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you?
Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division Biomedical Research Business Unit Pharma Research Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Research & Development Job Type Full time Employment Type Regular Shift Work No Accessibility and accommodation Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to [email protected] and let us know the nature of your request and your contact information. Please include the job requisition number in your message. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
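The SDTM/ADaM mapping work this role centres on typically means renaming and restructuring legacy fields into standard CDISC domain variables. A hedged, minimal sketch of one such step for the AE (adverse events) domain; the raw field names and records are invented, and real mappings are driven by a mapping specification and validated tooling rather than ad-hoc code:

```python
# Hypothetical raw (legacy) adverse-event records.
raw_events = [
    {"subj": "001", "event": "HEADACHE", "start": "2024-01-05"},
    {"subj": "002", "event": "NAUSEA", "start": "2024-02-11"},
]

def to_sdtm_ae(raw, studyid):
    # Map each raw record onto standard SDTM AE domain variables:
    # STUDYID, DOMAIN, USUBJID, AESEQ, AETERM, AESTDTC.
    return [
        {
            "STUDYID": studyid,
            "DOMAIN": "AE",
            "USUBJID": f"{studyid}-{r['subj']}",
            "AESEQ": i + 1,
            "AETERM": r["event"],
            "AESTDTC": r["start"],  # ISO 8601 date, as SDTM expects
        }
        for i, r in enumerate(raw)
    ]

ae = to_sdtm_ae(raw_events, "ABC123")
print(ae[0]["USUBJID"])  # ABC123-001
```

In practice AESEQ is assigned per subject and coded terms (e.g., MedDRA) are added, but the rename-and-conform pattern is the same.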

Posted 5 days ago

Apply

3.0 - 5.0 years

0 Lacs

hyderabad, telangana, india

On-site

Summary Assist in the timely and professional ongoing management of Data Operations on Use Case/Demand deliverables and of clinical data warehouse maintenance with respect to cost, quality, and timelines within the Clinical Pipeline team. Ensure high-quality data is available for secondary analysis use. Support content development and the upgrade of training modules into engaging and interactive applications. Follows data regulations and laws, data-handling procedures, and data mapping guidelines. Supports quality deliverables within Clinical Data Operations (DO). Manage data load and transfer from the Novartis Clinical Data Lake and conformance of clinical trial data to SDTM/ADaM-compliant standards within the Clinical Data Warehouse. Supports the delivery of quality data, processes, and documentation; a contributor role in ensuring that use cases/demands are executed efficiently with timely and high-quality deliverables.

About The Role
Major accountabilities:
- Demonstrates potential for technical proficiency, scientific creativity, collaboration with others, and independent thought.
- Under supervision, provides input into writing specifications for use cases/demands and necessary reports to ensure high-quality and consistent data.
- Involved in User Acceptance Testing (UAT) and managing data mapping activities to maintain the Clinical Data Warehouse.
- Under supervision, participates in ongoing review of all data generated from different sources.
- Supports the development of communications for initiatives.
- Performs hands-on activities to conduct data quality assessments.
- Creates, under supervision, and learns relevant data dictionaries, ontologies, and vocabularies.
- Reports technical complaints / special case scenarios related to Novartis data.
- Collaborates with other data engineering teams to ensure consistent CDISC-based data standards are applied.
- Is familiar with all clinical study documents from protocol to CSR, including Data Management and Biostatistics documents.
Key Performance Indicators
- Achieve a high level of quality, timeliness, cost efficiency, and customer satisfaction across Clinical Data Operations activities and deliverables.
- No critical data findings due to Data Operations.
- Adherence to Novartis policy, data standards, and guidelines.
- Customer / partner / project feedback and satisfaction.

Minimum Requirements
Work Experience:
- 3-5 years of experience working in clinical trials data reporting.
- Collaborating across boundaries.
- Knowledge of clinical data.
- Availability of sufficient information to find and understand data.
- Availability of data quality assessments.
- Experience in an Agile way of working would be a plus.

Skills
- CDISC SDTM/ADaM mapping.
- Clinical Data Management.
- Experience working with different legacy, historical, and local data standards.
- Basic SQL knowledge; Python skills would be a plus.
- Able to work in a worldwide team.
- Data Privacy, Data Operations, Data Science, Databases, Detail Oriented.

Languages: English.
Skills Desired: Clinical Data Management, Databases, Data Entry, Data Management, Data Science, Detail-Oriented

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Posted 6 days ago

Apply

2.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Position Title: Data Scientist
Company Overview: Capgemini Engineering is a global leader in engineering services, bringing together a worldwide team of engineers, scientists, and architects to assist the most innovative companies in unleashing their potential.
Position Overview: We are seeking a skilled Data Scientist with expertise in Cognite Data Fusion, data modelling, Unified Namespace (UNS), ontologies, and the identification of data products and datasets. The ideal candidate will have a strong background in developing and implementing data science projects, analyzing large and complex data sets, and driving data-driven decision-making across the organization.
Key Responsibilities:
- Solution Development: Design, implement, and deploy scalable data solutions utilizing Cognite Data Fusion, focusing on data modeling, UNS, and ontologies to address industry-specific challenges.
- Data Analysis: Analyze large and complex data sets to identify trends, insights, and opportunities, supporting solution development and business strategies.
- Collaboration: Collaborate with cross-functional teams to understand data needs and translate them into data science solutions, ensuring seamless integration and operationalization of digital solutions across various domains.
- Client Engagement: Engage with clients to understand their business objectives, lead discovery workshops, and provide expert guidance on data-driven strategies and potential challenges.
- Visualization: Develop dashboards and visualizations using tools such as Power BI, Grafana, or web development frameworks like Plotly Dash and Streamlit to effectively communicate data insights.
- Mentorship: Provide guidance and mentorship to junior team members, promoting best practices in data science and software development.

Qualifications:
- Educational Background: Master’s or PhD degree in a quantitative field.
- Experience: Minimum of 2 years of experience in data science, with a strong background in developing analytical solutions within domains such as pharma, oil and gas, manufacturing, or power & utilities.
- Technical Skills: Proficiency in Python and its data ecosystem (pandas, NumPy), machine learning libraries (scikit-learn, Keras), and experience with SQL.
- Visualization Tools: Experience with data visualization tools like Power BI, Grafana, Tableau, or web development frameworks such as Plotly Dash and Streamlit.
- Software Practices: Strong understanding of software development practices, including version control (e.g., Git), automated testing, and documentation.
- Cloud Platforms: Experience with cloud services such as GCP, Azure, or AWS is advantageous.
- Domain Knowledge: Familiarity with industrial data management concepts, including Unified Namespace (UNS), ontologies, and data product identification.
- Communication Skills: Excellent communication and collaboration skills, with the ability to work with cross-functional teams and stakeholders.
- Leadership: Demonstrated ability to lead projects and mentor junior team members.

Preferred Qualifications:
- Industry Expertise: Experience serving as a domain expert on internal or customer projects within relevant industries.
- Cloud Deployment: Experience deploying models and solutions in production environments using cloud infrastructure.
- Continuous Learning: Willingness to stay updated with the latest developments in data science and related technologies.
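A Unified Namespace, as referenced in the qualifications, organizes plant data under one hierarchical topic tree (for example enterprise/site/area/line/tag), so any consumer can locate a value by its path. A minimal illustration with invented topic names; production systems would publish over a broker such as MQTT and model assets in a platform like Cognite Data Fusion rather than a nested dict:

```python
def publish(uns, topic, value):
    # Walk (and create) the hierarchy for each path segment, then
    # set the leaf tag to the latest value.
    node = uns
    *path, leaf = topic.split("/")
    for part in path:
        node = node.setdefault(part, {})
    node[leaf] = value

uns = {}
publish(uns, "acme/pune/packaging/line1/temperature", 72.5)
publish(uns, "acme/pune/packaging/line1/speed", 120)

# All tags for one production line are grouped under its path.
print(uns["acme"]["pune"]["packaging"]["line1"])
# {'temperature': 72.5, 'speed': 120}
```

The point of the structure is discoverability: ontologies and data products can then be defined over stable paths instead of per-system tag names.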

Posted 6 days ago

Apply

13.0 - 17.0 years

0 Lacs

karnataka

On-site

As a Data Science Engineer at the VP level in Bangalore, India, you will be responsible for leading the development of intelligent, autonomous AI systems. Your role will involve designing and deploying AI solutions that leverage technologies such as Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. You will have the opportunity to design and develop agentic AI applications using frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Additionally, you will implement RAG pipelines by integrating LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. You will also be responsible for fine-tuning language models, training NER models, developing knowledge graphs, collaborating cross-functionally, and optimizing AI workflows. To be successful in this role, you should have at least 13 years of professional experience in AI/ML development; proficiency in Python, Python API frameworks, and SQL; and familiarity with frameworks such as TensorFlow or PyTorch. Experience deploying AI models on cloud platforms (e.g., GCP, AWS) and knowledge of LLMs, SLMs, semantic technologies, ontologies, and MLOps tools and practices is essential. Additionally, you should be skilled in building and querying knowledge graphs, have hands-on experience with vector databases and embedding techniques, and be familiar with RAG architectures and hybrid search methodologies. You will receive support through training and development opportunities, coaching from experts in your team, and a culture of continuous learning to aid in your progression. The company promotes a positive, fair, and inclusive work environment, welcoming applications from all individuals. Join us at Deutsche Bank Group, where we strive for excellence together every day.
For more information about our company and teams, please visit our website: https://www.db.com/company/company.htm
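The hybrid search this role calls for blends a lexical (keyword) score with a semantic (embedding) score and ranks documents by the weighted sum. A toy sketch under stated assumptions: the character-overlap "semantic" score is only a stand-in for real embedding similarity, the 0.7 weight is arbitrary, and the documents are invented:

```python
def lexical_score(query, doc):
    # Fraction of query terms present in the document (BM25 stand-in).
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def semantic_score(query, doc):
    # Placeholder for cosine similarity between embeddings; a crude
    # character-level Jaccard keeps the sketch self-contained.
    q, d = set(query.lower()), set(doc.lower())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid(query, docs, alpha=0.7):
    # Rank by the weighted combination of both signals.
    return max(
        docs,
        key=lambda d: alpha * lexical_score(query, d)
        + (1 - alpha) * semantic_score(query, d),
    )

docs = ["payment settlement latency report", "employee onboarding guide"]
print(hybrid("settlement latency", docs))
```

Production hybrid search (e.g., in a vector database with a keyword index) fuses the two rankings similarly, often with reciprocal rank fusion instead of a fixed weight.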

Posted 1 week ago

Apply

0 years

0 Lacs

gurugram, haryana, india

On-site

Position Summary: We are seeking a Factory Digital Twin Front End and AI Data Solutions Engineer to join our dynamic development team. In this role, you will drive the development and implementation of Factory Digital Twins, supporting the Digital Governance and Excellence Team. The ideal candidate should have a strong background in engineering or computer science, with experience in web application development, database management, and AI algorithms.

A Snapshot of Your Day
How You’ll Make an Impact (Responsibilities of Role)
- Development and maintenance of web applications that improve user interaction with the Factory Digital Twins Simulation (FDTS) application.
- Development of structures for data and knowledge, such as glossaries, taxonomies, ontologies, and knowledge graphs, as well as modelling of logical constraints and rules in combination with LLMs.
- Cooperation within the team responsible for the implementation of FDTS on the AWS cloud and rollout in the factories.
- Analysis of factory data related to the three domains Products, Processes, and Resources (PPR), and development of ETL (Extraction, Transformation, and Loading) pipelines to generate a standardized input for FDTS.
- Realization of simulation analytics, automated derivation of recommendations and measures, and optimization of brownfield or greenfield factories and the manufacturing network.
- Development of AI algorithms for analytics, explanation, generation of improvement scenarios, and optimization of factories and the manufacturing network toward defined situation and business goal functions.

What You Bring (Required Qualifications and Skill Sets)
- Successful university degree in Engineering, Automation, Computer Science, Mathematics, Physics, or similar.
- Extensive experience in development of applications (Web UI), databases, and graphs, especially in combination with LLMs.
- Working knowledge in development of ETL pipelines and development of dashboards for data analytics and visualisation.
- Understanding of and experience in multi-AI-agent systems for optimization purposes.
- Understanding of and experience in 3D deep learning is an advantage.
- Profound understanding of and experience in factory domains, production, product structures, and manufacturing processes and technologies, especially in factory planning.
- Proficiency in English and readiness to travel globally is preferable.
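The ETL pipelines described above standardize raw factory records into the Product/Process/Resource (PPR) shape the simulation consumes. A minimal extract-transform-load sketch; every field name and record here is hypothetical, and a real pipeline would read from factory systems and load into FDTS on the AWS cloud:

```python
def extract():
    # Stand-in for reading raw records from a factory source system.
    return [
        {"machine": "press-01", "op": "stamping", "part": "door-panel"},
        {"machine": "robot-07", "op": "welding", "part": "door-panel"},
    ]

def transform(rows):
    # Conform each raw record to the standardized PPR structure:
    # which product is made, by which process, on which resource.
    return [
        {"product": r["part"], "process": r["op"], "resource": r["machine"]}
        for r in rows
    ]

def load(rows, target):
    # Stand-in for writing the conformed records to the FDTS input store.
    target.extend(rows)
    return target

fdts_input = load(transform(extract()), [])
print(fdts_input[0])
# {'product': 'door-panel', 'process': 'stamping', 'resource': 'press-01'}
```

Keeping extract, transform, and load as separate steps is what lets the same standardized PPR input be generated from different factories' source systems.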

Posted 1 week ago

Apply

0 years

0 Lacs

pune, maharashtra, india

On-site

Position Summary: We are seeking a Factory Digital Twin Front End and AI Data Solutions Engineer to join our dynamic development team. In this role, you will drive the development and implementation of Factory Digital Twins, supporting the Digital Governance and Excellence Team. The ideal candidate should have a strong background in engineering or computer science, with experience in web application development, database management, and AI algorithms.

A Snapshot of Your Day
How You’ll Make an Impact (Responsibilities of Role)
- Development and maintenance of web applications that improve user interaction with the Factory Digital Twins Simulation (FDTS) application.
- Development of structures for data and knowledge, such as glossaries, taxonomies, ontologies, and knowledge graphs, as well as modelling of logical constraints and rules in combination with LLMs.
- Cooperation within the team responsible for the implementation of FDTS on the AWS cloud and rollout in the factories.
- Analysis of factory data related to the three domains Products, Processes, and Resources (PPR), and development of ETL (Extraction, Transformation, and Loading) pipelines to generate a standardized input for FDTS.
- Realization of simulation analytics, automated derivation of recommendations and measures, and optimization of brownfield or greenfield factories and the manufacturing network.
- Development of AI algorithms for analytics, explanation, generation of improvement scenarios, and optimization of factories and the manufacturing network toward defined situation and business goal functions.

What You Bring (Required Qualifications and Skill Sets)
- Successful university degree in Engineering, Automation, Computer Science, Mathematics, Physics, or similar.
- Extensive experience in development of applications (Web UI), databases, and graphs, especially in combination with LLMs.
- Working knowledge in development of ETL pipelines and development of dashboards for data analytics and visualisation.
Understanding and experience in Multi-AI-Agent-Systems for optimization purposes. Understanding and experience in 3D Deep Learning is an advantage. Profound understanding and experience in factory domains, production, product structures and manufacturing processes and technologies, especially in factory planning. Proficiency in English and readiness to travel globally is preferable.
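As a rough illustration of the kind of ETL step this posting describes, the sketch below normalizes raw machine records into a standardized PPR-style resource schema. All field names, values, and the schema itself are hypothetical; a real FDTS pipeline would be defined by the factory data model.

```python
# Hypothetical ETL "transform" step: normalize raw machine records from one
# factory into a standardized resource schema for the PPR (Products,
# Process, Resources) domains. Field names are illustrative only.

RAW_RECORDS = [
    {"machine": "CNC-01", "takt_sec": "42.5", "area": "milling"},
    {"machine": "CNC-02", "takt_sec": "38.0", "area": "milling"},
]

def transform(record):
    """Map one raw record onto the standardized resource schema."""
    return {
        "resource_id": record["machine"],
        "domain": "Resource",                  # PPR domain tag
        "cycle_time_s": float(record["takt_sec"]),  # coerce text to number
        "work_area": record["area"].upper(),
    }

def run_pipeline(records):
    # Extraction is assumed already done; transform each record and
    # "load" simply collects the results.
    return [transform(r) for r in records]

if __name__ == "__main__":
    for row in run_pipeline(RAW_RECORDS):
        print(row)
```

In practice the transform layer is where per-factory naming conventions get mapped onto the shared glossary or ontology, so that the simulation consumes one schema regardless of source site.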

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

hyderābād

On-site

Summary
Assist in the timely and professional ongoing management of Data Operations on use case/demand deliverables and of clinical data warehouse maintenance with respect to cost, quality, and timelines within the Clinical Pipeline team. Ensure high-quality data is available for secondary analysis use. Support content development and the upgrade of training modules into engaging, interactive applications. Follow data regulations and laws, data-handling procedures, and data mapping guidelines. Support quality deliverables within Clinical Data Operations (DO). Manage data load and transfer from the Novartis Clinical Data Lake and conform clinical trial data to SDTM/ADaM-compliant standards within the Clinical Data Warehouse. Support the delivery of quality data, processes, and documentation, contributing to ensuring that use cases/demands are executed efficiently with timely, high-quality deliverables.

About the Role
Major accountabilities:
Demonstrates potential for technical proficiency, scientific creativity, collaboration with others, and independent thought.
Under supervision, provides input into writing specifications for use cases/demands and the necessary reports to ensure high-quality, consistent data.
Involved in user acceptance testing (UAT) and managing data mapping activities to maintain the Clinical Data Warehouse.
Under supervision, participates in ongoing review of all data generated from different sources.
Supports the development of communications for initiatives.
Performs hands-on activities to conduct data quality assessments.
Under supervision, creates and learns relevant data dictionaries, ontologies, and vocabularies.
Reports technical complaints and special case scenarios related to Novartis data.
Collaborates with other data engineering teams to ensure consistent CDISC-based data standards are applied.
Is familiar with all clinical study documents from protocol to CSR, including Data Management and Biostatistics documents.
Key performance indicators:
Achieve a high level of quality, timeliness, cost efficiency, and customer satisfaction across Clinical Data Operations activities and deliverables.
No critical data findings due to Data Operations.
Adherence to Novartis policy, data standards, and guidelines.
Customer/partner/project feedback and satisfaction.

Minimum Requirements:
Work experience: 3-5 years of experience in clinical trials data reporting; collaborating across boundaries; knowledge of clinical data; availability of sufficient information to find and understand data; availability of data quality assessments. Experience with Agile ways of working would be a plus.
Skills: CDISC SDTM/ADaM mapping; clinical data management; experience working with different legacy, historical, and local data standards; basic SQL knowledge; Python skills would be a plus; able to work in a worldwide team; data privacy; data operations; data science; databases; detail oriented.
Languages: English.
Skills desired: Clinical Data Management, Databases, Data Entry, Data Management, Data Science, Detail-Oriented.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: Biomedical Research. Business Unit: Pharma Research. Location: India. Site: Hyderabad (Office). Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited. Functional Area: Research & Development. Job Type: Full time. Employment Type: Regular. Shift Work: No.

Accessibility and accommodation: Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to [email protected] and let us know the nature of your request and your contact information. Please include the job requisition number in your message. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
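To make the SDTM mapping responsibility concrete, here is a minimal sketch of conforming a raw demographics record to SDTM DM-style variables. The SDTM variable names follow the CDISC convention, but the raw source field names and the study values are assumptions, not Novartis data.

```python
# Illustrative sketch (not Novartis code) of mapping one raw demographics
# record onto SDTM DM-domain variables. Raw field names are hypothetical.

RAW = {"subj": "1001", "sex_code": "F", "birth_date": "1980-04-12", "study": "ABC123"}

def to_sdtm_dm(raw):
    return {
        "STUDYID": raw["study"],
        "DOMAIN": "DM",
        # USUBJID must be unique across the submission; a common pattern
        # is study ID plus subject ID.
        "USUBJID": f'{raw["study"]}-{raw["subj"]}',
        "SEX": raw["sex_code"],          # assumed already CDISC-coded (M/F/U)
        "BRTHDTC": raw["birth_date"],    # ISO 8601 date, as SDTM requires
    }

print(to_sdtm_dm(RAW))
```

A production mapping would be driven by a specification document and validated (e.g., UAT plus conformance checks) rather than hand-coded per study.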

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

hyderabad, telangana, india

On-site

Summary
Assist in the timely and professional ongoing management of Data Operations on use case/demand deliverables and of clinical data warehouse maintenance with respect to cost, quality, and timelines within the Clinical Pipeline team. Ensure high-quality data is available for secondary analysis use. Support content development and the upgrade of training modules into engaging, interactive applications. Follow data regulations and laws, data-handling procedures, and data mapping guidelines. Support quality deliverables within Clinical Data Operations (DO). Manage data load and transfer from the Novartis Clinical Data Lake and conform clinical trial data to SDTM/ADaM-compliant standards within the Clinical Data Warehouse. Support the delivery of quality data, processes, and documentation, contributing to ensuring that use cases/demands are executed efficiently with timely, high-quality deliverables.

About The Role
Major accountabilities:
Demonstrates potential for technical proficiency, scientific creativity, collaboration with others, and independent thought.
Under supervision, provides input into writing specifications for use cases/demands and the necessary reports to ensure high-quality, consistent data.
Involved in user acceptance testing (UAT) and managing data mapping activities to maintain the Clinical Data Warehouse.
Under supervision, participates in ongoing review of all data generated from different sources.
Supports the development of communications for initiatives.
Performs hands-on activities to conduct data quality assessments.
Under supervision, creates and learns relevant data dictionaries, ontologies, and vocabularies.
Reports technical complaints and special case scenarios related to Novartis data.
Collaborates with other data engineering teams to ensure consistent CDISC-based data standards are applied.
Is familiar with all clinical study documents from protocol to CSR, including Data Management and Biostatistics documents.

Key Performance Indicators
Achieve a high level of quality, timeliness, cost efficiency, and customer satisfaction across Clinical Data Operations activities and deliverables.
No critical data findings due to Data Operations.
Adherence to Novartis policy, data standards, and guidelines.
Customer/partner/project feedback and satisfaction.

Minimum Requirements
Work experience: 3-5 years of experience in clinical trials data reporting; collaborating across boundaries; knowledge of clinical data; availability of sufficient information to find and understand data; availability of data quality assessments. Experience with Agile ways of working would be a plus.
Skills: CDISC SDTM/ADaM mapping; clinical data management; experience working with different legacy, historical, and local data standards; basic SQL knowledge; Python skills would be a plus; able to work in a worldwide team; data privacy; data operations; data science; databases; detail oriented.
Languages: English.
Skills desired: Clinical Data Management, Databases, Data Entry, Data Management, Data Science, Detail-Oriented.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Posted 1 week ago

Apply

1.0 years

0 Lacs

gurugram, haryana, india

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, and specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Clinical Document Authoring
Skills: Protocol, Informed Consent Form (ICF), FDA, EMA, ICH-GCP
Job Locations: Bengaluru, Gurugram
Experience: 1-5 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Interview Mode: 2 Rounds of Technical Interview

Job Description:
Key Responsibilities:
Author and analyze clinical trial documents.
Work with key clinical documents: Protocol, Informed Consent Form, Clinical Study Report, Summary of Clinical Safety/Efficacy, Access Evidence Dossier, Statistical Analysis Plan, and more.
Create, validate, and refine prompts for AI-assisted document generation.
Apply knowledge of clinical trial phases, study design, and drug development.
Maintain compliance with global regulatory standards (FDA, EMA, ICH-GCP).
Utilize medical terminologies and ontologies for clarity and consistency.
Ensure quality control and timely delivery of assigned tasks.
Collaborate with cross-functional teams to improve document accuracy and prompt effectiveness.
Provide regular updates and flag risks to the project manager.

Interested candidates, please share your CV to vamsi.v@people-prime.com
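To illustrate what "create, validate, and refine prompts for AI-assisted document generation" might look like in practice, here is a toy reusable prompt template for one clinical document section. The section name, study identifiers, and wording are invented for illustration; this is not a validated regulatory template.

```python
# Hypothetical reusable prompt template for AI-assisted drafting of an
# Informed Consent Form (ICF) section. All placeholder values below are
# illustrative, not from any real study.

from string import Template

ICF_SECTION_PROMPT = Template(
    "Draft the '$section' section of an Informed Consent Form for study "
    "$study_id, a $phase trial of $drug. Use lay language at an 8th-grade "
    "reading level and follow ICH-GCP requirements for informed consent."
)

prompt = ICF_SECTION_PROMPT.substitute(
    section="Risks and Discomforts",
    study_id="XYZ-001",
    phase="Phase 2",
    drug="Compound A",
)
print(prompt)
```

Templating the prompt keeps the regulatory framing fixed while the study-specific fields vary, which makes the prompts easier to version, review, and refine across documents.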

Posted 1 week ago

Apply

1.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Title: Data Quality Analyst
Function: Data Harmonization
Location: Delhi/Bangalore

About the Role: We are seeking a detail-oriented Data Quality Analyst to join our delivery team. This role requires expertise in AI-assisted curation, API scraping, data modelling, and data schema creation and manipulation. A strong understanding of multi-omics data types, such as transcriptomics, proteomics, and the varied types of single-cell omics technologies, is important for this role. Familiarity with programming languages like Python and with clinical data standards such as OMOP, FHIR, and HL7 is a must. If you are eager to grow in the field of omics data and work on cutting-edge AI technology, this role is an excellent opportunity for you.

Key Responsibilities:
Identify, curate, and annotate biological datasets from various omics/clinical modalities, including transcriptomics, proteomics, H&E, clinical trials, and single-cell data, using APIs and AI tools.
Automate curation of large interaction datasets, decreasing curator workload.
Build fit-for-purpose conceptual and logical data models.
Apply working knowledge of data models and use-case-specific schemas, and standardize biological data using established ontologies and guidelines.
Ensure high-quality curation and consistency across multiple sources to deliver reliable results.
Inter- and intra-team communication: work with cross-functional teams to gather requirements and ensure alignment on biological data curation standards.
Provide clear and timely updates to stakeholders on the progress of data curation tasks, ensuring transparency and alignment with project goals.
Liaise between PMs and developers as a member of a cross-functional team to add new capabilities to our curation workflow.

Qualifications:
A degree in Life Sciences, Bioinformatics, or a related field (Master's preferred), with familiarity with bioinformatics tools and databases for data curation.
Experience with programming languages like Python is a must.

Skills: Git, basics of prompt engineering, API scraping, and LLMs for scientific curation. Familiarity with healthcare/clinical data exchange standards (OMOP, FHIR, HL7). Excellent interpersonal and communication skills, both written and verbal. A highly motivated, dynamic, and detail-oriented individual with good data curation and management skills.

Good to Have: 1 year of curation experience, preferably in the life sciences industry. Hands-on experience using generative AI platforms and tools (e.g., ChatGPT, Gemini, Claude, or similar).
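As a toy example of the "standardize biological data using established ontologies" responsibility, the sketch below maps free-text tissue labels to controlled terms. The lookup table is illustrative only; a real curation pipeline would resolve terms against the actual ontology services (e.g., UBERON, Cell Ontology) rather than a hard-coded dictionary.

```python
# Toy ontology-based standardization of sample metadata: free-text tissue
# labels are normalized and mapped to controlled terms. The mapping table
# is illustrative; term IDs should be verified against the live ontologies.

TISSUE_ONTOLOGY = {
    "liver": ("UBERON:0002107", "liver"),
    "hepatic tissue": ("UBERON:0002107", "liver"),        # synonym
    "pbmc": ("CL:2000001", "peripheral blood mononuclear cell"),
}

def standardize_tissue(raw_label):
    key = raw_label.strip().lower()
    if key not in TISSUE_ONTOLOGY:
        # Unmapped labels are flagged for manual curator review
        # rather than silently dropped.
        return {"raw": raw_label, "term_id": None, "needs_review": True}
    term_id, label = TISSUE_ONTOLOGY[key]
    return {"raw": raw_label, "term_id": term_id,
            "term_label": label, "needs_review": False}

for raw in ["Hepatic tissue", "PBMC", "unknown organ"]:
    print(standardize_tissue(raw))
```

The needs_review flag is the key design choice: automated mapping handles the bulk of records, and only unresolved labels consume curator time.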

Posted 1 week ago

Apply

1.0 years

0 Lacs

gurugram, haryana, india

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, and specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Clinical Document Authoring
Skills: Protocol, Informed Consent Form (ICF), FDA, EMA, ICH-GCP
Job Locations: Bengaluru, Gurugram
Experience: 1-5 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Interview Mode: 2 Rounds of Technical Interview

Job Description:
Key Responsibilities:
Author and analyze clinical trial documents.
Work with key clinical documents: Protocol, Informed Consent Form, Clinical Study Report, Summary of Clinical Safety/Efficacy, Access Evidence Dossier, Statistical Analysis Plan, and more.
Create, validate, and refine prompts for AI-assisted document generation.
Apply knowledge of clinical trial phases, study design, and drug development.
Maintain compliance with global regulatory standards (FDA, EMA, ICH-GCP).
Utilize medical terminologies and ontologies for clarity and consistency.
Ensure quality control and timely delivery of assigned tasks.
Collaborate with cross-functional teams to improve document accuracy and prompt effectiveness.
Provide regular updates and flag risks to the project manager.

Interested candidates, please share your CV to sushma.n@people-prime.com

Posted 1 week ago

Apply

10.0 years

0 Lacs

secunderābād, telangana, india

On-site

Overview
Certara accelerates medicines using proprietary biosimulation software, technology, and services to transform traditional drug discovery and development. Its clients include more than 2,000 biopharmaceutical companies, academic institutions, and regulatory agencies across 62 countries. The Associate Director, Data Sciences, leads the development, quality assurance, and delivery of clinical trial outcome databases across therapeutic areas. This role combines strategic planning, technical expertise, project management, and stakeholder engagement to support evidence synthesis, meta-analysis, and data-driven insights.

Responsibilities
Database & Quality Assurance: Oversee database development, curation, QC, and deployment to the platform (e.g., CODEX); ensure accurate data extraction from literature and plots; validate QC documentation and manage change impact.
Technical Expertise: Conduct systematic literature reviews using PICOS, ontologies, and data standards; refine search strategies; apply clinical trial knowledge; perform R-based quality checks and exploratory analyses.
Project Management: Lead planning and delivery of database products; manage resources, timelines, and change controls; ensure clear communication with clients and internal teams.
Team Leadership: Mentor teams; drive process improvement, agile practices, and learning initiatives; support hiring and onboarding.
Stakeholder Collaboration: Align with clients and internal teams on expectations; review project metrics and identify enhancements.
Continuous Learning: Stay current with advancements in statistics, R, SLR, and data science; contribute to innovation.

Qualifications
Master’s or PhD in Pharmacology, Pharmaceutical Sciences, or related fields.
10+ years of experience in SLR, clinical trial databases, or health economics (6+ years for PhD holders).
Strong knowledge of clinical research, trial design, PICOS, and pharma data standards.
Proficiency in R; knowledge of Python and statistical methods is a plus.

Key Competencies: Collaboration, Communication, Mentorship, Adaptability, Client Focus.

Certara bases all employment-related decisions on merit, taking into consideration qualifications, skills, achievements, and performance. We treat all applicants and employees without regard to personal characteristics such as race, color, ethnicity, religion, sex, sexual orientation, age, nationality, marital status, pregnancy, physical or mental condition, genetic information, military service, or other characteristics protected by law.
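The posting mentions quality checks on extracted trial-outcome data (R-based in the role itself; sketched here in Python for illustration). A minimal check flags implausible values before rows reach the database; the column names and plausibility bounds are assumptions.

```python
# Illustrative quality-control check on extracted trial-outcome rows,
# flagging values that cannot be correct (e.g., negative sample sizes,
# response percentages outside 0-100). Column names are hypothetical.

ROWS = [
    {"trial": "NCT0000001", "arm": "drug",    "n": 120, "response_pct": 43.0},
    {"trial": "NCT0000002", "arm": "placebo", "n": -5,  "response_pct": 112.0},
]

def qc_flags(row):
    """Return a list of human-readable QC findings for one row."""
    flags = []
    if row["n"] <= 0:
        flags.append("sample size must be positive")
    if not 0.0 <= row["response_pct"] <= 100.0:
        flags.append("response % outside 0-100")
    return flags

for row in ROWS:
    print(row["trial"], qc_flags(row) or "OK")
```

Range checks like these catch the most common extraction errors (digitized plots misread, transposed digits) cheaply, leaving curator review for the genuinely ambiguous cases.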

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana, india

On-site

Let’s do this. Let’s change the world. In this vital role you will support the Research Benchtop Team by designing, building, and maintaining software tools and applications that are critical to the operation of research lab environments. These applications support a wide range of capabilities, including inventory tracking, client-facing ticket submission portals, and workflow tools that manage the lifecycle of Windows-based benchtop systems used by researchers. This role requires close collaboration with cross-functional stakeholders, including researchers, IT infrastructure teams, and support personnel, to ensure that applications are reliable, user-friendly, and adaptable to the evolving needs of scientific users. The engineer will also be responsible for maintaining legacy codebases, modernizing existing tools, integrating with third-party services (such as ServiceNow), and ensuring the systems meet compliance and support standards. In addition to development work, the role includes supporting incident response, performing automated testing, participating in code reviews, and contributing to ongoing platform documentation and technical improvements.

Roles & Responsibilities:
Design, write, review, and maintain clean, efficient code.
Using strong rapid prototyping skills, quickly translate concepts into working code.
Design, build, and maintain versioned APIs and services.
Design, deploy, and manage labeling content solution infrastructure using Infrastructure-as-Code, including but not limited to cloud services like AWS RDS, DynamoDB, S3, Lambda, and ECS.
Design, develop, execute, and maintain unit tests, integration tests, and other testing strategies to ensure the quality of the software.
Assist with the ongoing maintenance of the system, including bug fixes, system upgrades, and enhancements.
Participate in technical discussions and code and design reviews to help drive the architecture and implementation of new features.
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations.
Stay updated with the latest trends and technologies in content authoring and management and related fields.
Work closely with the product team, business team, and other stakeholders.

What We Expect of You
We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Preferred Qualifications:
Must-Have Skills:
Strong technical background, including understanding of software development processes, databases, and connected systems that require ontologies and data dictionaries to interoperate.
Proficiency in NodeJS, Python, JavaScript, and SQL programming, with version control using Git, test automation, and CI/CD with Infrastructure-as-Code.
Strong understanding of software development methodologies, including Agile and Scrum.
Familiarity with Infrastructure-as-Code (IaC) tools and practices for reproducible environment management.

Good-to-Have Skills:
Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes).
Exposure to Managed Cloud Platforms (MCPs) and designing for cloud-native scalability.
Understanding of enterprise systems integration, including the ServiceNow API, SCCM, or similar automation platforms.
Familiarity with CI/CD workflows, DevOps principles, and version-controlled development pipelines.
Familiarity with communication protocols (MQTT, HTTP, Wi-Fi) for extracting data from hardware and software platforms.
Experience integrating data flows from hardware to data management systems (IoT).

Soft Skills:
Excellent analytical and troubleshooting skills.
Strong verbal and written communication skills.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong presentation and public speaking skills.

What You Can Expect of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
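One of the responsibilities above is building versioned APIs. A minimal sketch of the idea, with entirely hypothetical route names and payloads, shows how v1 and v2 handlers for the same resource can coexist so existing clients keep working while new capabilities ship:

```python
# Minimal sketch of versioned API routing. Route names, payloads, and the
# dispatch mechanism are hypothetical; a real service would use a web
# framework, but the versioning principle is the same.

def list_assets_v1():
    return {"assets": ["bench-01", "bench-02"]}

def list_assets_v2():
    # v2 adds a lifecycle status field without breaking v1 clients.
    return {"assets": [{"id": "bench-01", "status": "active"},
                       {"id": "bench-02", "status": "retired"}]}

ROUTES = {
    ("GET", "/v1/assets"): list_assets_v1,
    ("GET", "/v2/assets"): list_assets_v2,
}

def dispatch(method, path):
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, {"error": "not found"}
    return 200, handler()

print(dispatch("GET", "/v2/assets"))
```

Keeping the old handler in place until clients migrate is what makes the API "versioned" rather than merely changed.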

Posted 1 week ago

Apply

1.0 years

0 Lacs

bengaluru, karnataka, india

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, it has revenue of $1.8B and 35,000+ associates worldwide, and specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Clinical Document Authoring
Skills: Protocol, Informed Consent Form (ICF), FDA, EMA, ICH-GCP
Job Locations: Bengaluru, Gurugram
Experience: 1-5 Years
Education Qualification: Any Graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 Days
Interview Mode: 2 Rounds of Technical Interview

Job Description:
Key Responsibilities:
Author and analyze clinical trial documents.
Work with key clinical documents: Protocol, Informed Consent Form, Clinical Study Report, Summary of Clinical Safety/Efficacy, Access Evidence Dossier, Statistical Analysis Plan, and more.
Create, validate, and refine prompts for AI-assisted document generation.
Apply knowledge of clinical trial phases, study design, and drug development.
Maintain compliance with global regulatory standards (FDA, EMA, ICH-GCP).
Utilize medical terminologies and ontologies for clarity and consistency.
Ensure quality control and timely delivery of assigned tasks.
Collaborate with cross-functional teams to improve document accuracy and prompt effectiveness.
Provide regular updates and flag risks to the project manager.

Interested candidates, please share your CV to hajeera.s@people-prime.com

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies