0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description
NXP Semiconductors enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, we are driving innovation in the secure connected vehicle, end-to-end security & privacy, and smart connected solutions markets.

Organization Description
Do you feel challenged by being part of the IT department of NXP, the company with a mission of "Secure Connections for a Smarter World"? Do you perform best in a role representing IT in projects in a fast-moving, international environment? Within R&D IT Solutions, the Product Creation Applications (PCA) department is responsible for providing and supporting the global R&D design community with best-in-class applications and support. The applications are used by over 6,000 designers.

Job Summary
As a Graph Engineer, you will:
- Develop pipelines and code to support the ingress and egress of data to and from the knowledge graphs.
- Perform basic and advanced graph querying and data modeling on the knowledge graphs that lie at the heart of the organization's Product Creation ecosystem.
- Maintain the ETL pipelines, code and knowledge graph so they stay scalable, resilient and performant in line with customers' requirements.
- Work in an international and Agile DevOps environment.
This position offers the opportunity to work in a globally distributed team, with unique opportunities for personal development in a multicultural environment and a challenging setting in which to build expertise in technologies valued across the industry.

Primary Responsibilities
- Translate requirements of business functions into "graph thinking".
- Build and maintain graphs and related applications from data and information, using the latest graph technologies to leverage high-value use cases.
- Support and manage graph databases.
- Integrate graph data from various sources, internal and external.
- Extract data from various sources, including databases, APIs, and flat files.
- Load data into target systems, such as data warehouses and data lakes.
- Develop code to move data (ETL) from the enterprise platform applications into the enterprise knowledge graphs.
- Optimize ETL processes for performance and scalability.
- Collaborate with data engineers, data scientists and other stakeholders to model the graph environment to best represent the data coming from multiple enterprise systems.

Skills / Experience
- Semantic Web technologies: RDF, RDFS, OWL, SHACL; SPARQL; JSON-LD, N-Triples/N-Quads, Turtle, RDF/XML, TriX
- API-led architectures: REST, SOAP, microservices, API management
- Graph databases such as Dydra, Amazon Neptune, Neo4j; Oracle Spatial & Graph is a plus
- Experience with other NoSQL databases, such as key-value and document-based databases (e.g. XML databases), is a plus
- Experience with relational databases
- Programming experience, preferably Java, JavaScript, Python, PL/SQL
- Experience with web technologies: HTML, CSS, XML, XSLT, XPath
- Experience with modelling languages such as UML
- Understanding of CI/CD automation, version control, build automation, testing frameworks, static code analysis, IT service management, artifact management and container management, with experience in related tools and platforms
- Familiarity with cloud computing concepts (e.g. AWS and Azure)
Education & Personal Skillsets
- A master's or bachelor's degree in computer science, mathematics, electronics engineering or a related discipline, with at least 10 years of experience in a similar role
- Excellent problem-solving and analytical skills
- A growth mindset with a curiosity to learn and improve
- Team player with strong interpersonal, written, and verbal communication skills
- Business consulting and technical consulting skills
- An entrepreneurial spirit and the ability to foster a positive and energized culture
- Fluent communication skills in English (spoken and written)
- Experience working in Agile (Scrum knowledge appreciated) with a DevOps mindset
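For context on the ingress and egress work this listing describes, here is a minimal sketch in Python using rdflib: rows from a hypothetical extract become RDF triples, and a SPARQL query then reads them back. All names and URIs are illustrative assumptions, not NXP's actual schema.

```python
import csv
import io
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/design/")  # hypothetical namespace
rows = io.StringIO("part_id,name\nP1,Amplifier\nP2,Mixer\n")  # stand-in for a real extract

g = Graph()
g.bind("ex", EX)
for row in csv.DictReader(rows):
    part = EX[row["part_id"]]
    g.add((part, RDF.type, EX.Part))             # ingress: rows become triples
    g.add((part, EX.name, Literal(row["name"])))

# Egress: query the freshly built graph with SPARQL
query = """
PREFIX ex: <http://example.org/design/>
SELECT ?part ?name WHERE { ?part a ex:Part ; ex:name ?name . }
"""
for part, name in g.query(query):
    print(part, name)
```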
Posted 6 hours ago
3.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company: Indian / Global Engineering & Manufacturing Organization

Key Skills: ETL/ELT, RDF, OWL, SPARQL, Neo4j, AWS Neptune, ArangoDB, Python, SQL, Cypher, Semantic Modeling, Cloud Data Pipelines, Data Quality, Knowledge Graph, Graph Query Optimization, Semantic Search

Roles and Responsibilities:
- Design and build advanced data pipelines for integrating structured and unstructured data into graph models.
- Develop and maintain semantic models using RDF, OWL, and SPARQL.
- Implement and optimize data pipelines on cloud platforms such as AWS, Azure, or GCP.
- Model real-world relationships through ontologies and hierarchical graph data structures.
- Work with graph databases such as Neo4j, AWS Neptune, and ArangoDB for knowledge graph development.
- Collaborate with cross-functional teams, including AI/ML and business analysts, to support semantic search and analytics.
- Ensure data quality, security, and compliance throughout the pipeline lifecycle.
- Monitor, debug, and enhance the performance of graph queries and data transformation workflows.
- Create clear documentation and communicate technical concepts to non-technical stakeholders.
- Participate in global team meetings and knowledge-sharing sessions to align on data standards and architectural practices.

Experience Requirement:
- 3-8 years of hands-on experience in ETL/ELT engineering and data integration.
- Experience working with graph databases such as Neo4j, AWS Neptune, or ArangoDB.
- Proven experience implementing knowledge graphs, including semantic modeling using RDF, OWL, and SPARQL.
- Strong Python and SQL programming skills, with proficiency in Cypher or other graph query languages.
- Experience designing and deploying pipelines on cloud platforms (AWS preferred).
- Track record of resolving complex data quality issues and optimizing pipeline performance.
- Previous collaboration with data scientists and product teams to implement graph-based analytics or semantic search features.

Education: Any Graduation.
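As a sketch of the Cypher side of this role, the snippet below loads and queries a tiny graph with the official Neo4j Python driver. The connection details and the Part/Supplier model are illustrative stand-ins, not the organization's actual data.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_part(tx, part_id, supplier):
    # MERGE keeps repeated pipeline runs idempotent
    tx.run(
        "MERGE (p:Part {id: $part_id}) "
        "MERGE (s:Supplier {name: $supplier}) "
        "MERGE (p)-[:SUPPLIED_BY]->(s)",
        part_id=part_id, supplier=supplier,
    )

with driver.session() as session:
    session.execute_write(load_part, "P-100", "Acme Metals")
    result = session.run(
        "MATCH (p:Part)-[:SUPPLIED_BY]->(s:Supplier) "
        "RETURN p.id AS part, s.name AS supplier"
    )
    for record in result:
        print(record["part"], record["supplier"])

driver.close()
```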
Posted 1 day ago
0 years
0 Lacs
India
Remote
Employment type: Freelance, Project Based

What this is about: At e2f, we offer an array of remote opportunities to work on compelling projects aimed at enhancing AI capabilities. As a significant team member, you will help shape the future of AI-driven solutions. We value your skills and domain expertise, offering competitive compensation and flexible working arrangements.

Job Description: We are looking for an experienced Data Analyst with a strong background in SPARQL for a project-based position. The ideal candidate will be responsible for writing, reviewing, and optimizing queries to extract valuable insights from our knowledge base.

Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field
- Proven experience with SPARQL
- Familiarity with the Cypher query language
- Expertise in knowledge graphs
- Strong analytical and problem-solving skills
- Excellent communication and collaboration skills
- Ability to prioritize and manage workload efficiently
- Understanding of and adherence to project guidelines and policies

Responsibilities:
- Commit a minimum of 4 hours per day, on a flexible schedule (you can split your hours as you prefer).
- Participate in a training meeting.
- Adhere to deadlines and guideline standards.

What We Offer:
- Engage in exciting generative AI development from the convenience of your home.
- Enjoy flexible work hours and availability.

If you're interested: Apply to our job advertisement. We'll review your profile and, if it aligns with our search, we will contact you as soon as possible to share rates and further details.

About Us: e2f is dedicated to facilitating natural communication between people and machines across languages and cultures. With expertise in data science, we provide top-tier linguistic datasets for AI and NLP projects. Learn more at www.e2f.com
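As a taste of the query-writing work described here, a minimal Python sketch using the SPARQLWrapper library against Wikidata's public endpoint, which stands in for the project's own knowledge base:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Wikidata asks clients to identify themselves with a user agent
sparql = SPARQLWrapper("https://query.wikidata.org/sparql", agent="sparql-demo/0.1")
sparql.setReturnFormat(JSON)
# Ten entities that are instances of "programming language" (Q9143), with English labels
sparql.setQuery("""
SELECT ?lang ?langLabel WHERE {
  ?lang wdt:P31 wd:Q9143 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["langLabel"]["value"])
```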
Posted 4 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Who We Are
Euromonitor International is the leading independent market research company, investigating and understanding what consumers want and need, and helping businesses create products and services that cater to their preferences and trends. We are an organisation that champions flexibility, with opportunity to grow and be supported with continuous learning and development.

What You Will Be Doing
The successful candidate will use and explore various innovative economic, econometric, and statistical modelling tools that best help us address our clients' strategic objectives. A typical modelling and analytics project starts with the identification and conceptualization of our clients' needs and objectives. This is followed by an evaluation of all possible solutions for their feasibility given the available data and the client's objectives. The final stage often involves the identification of the optimal economic or econometric solution. As the data analyst gains experience in building client solutions, we will look to involve him/her in project opportunity screening and proposal build, developing modelling updates, and driving client presentations and/or meetings.

Key drivers:
- Research and analyze economic, demographic and industrial data from around the world.
- Use econometric modelling techniques as well as analytical judgment to come up with custom solutions for our clients.
- Participate in internal peer review meetings, contribute to the search for the best solution, and comment on preliminary solutions already created.
- Present and visualize model results to clients in an intuitive manner.
- Draw conclusions based on the analysis of our results.
- Monitor the academic press for the latest developments in economics and statistics to make sure we use cutting-edge analytical techniques.
- Properly document each project and share best modelling practices.
- Commission and organize research, standardization and modelling by freelance associates in Lithuania and around the world.
- Liaise with the sales and marketing department to evaluate client inquiries.

What You'll Need:
- Excellent communication skills and English fluency (both oral and written)
- Understanding of and interest in international economic, demographic and industry trends
- A good working knowledge of R (mandatory)
- Experience in SQL, SPARQL, JavaScript, HTML or similar would be an advantage
- Excellent analytics skills
- Excellent organizational skills and creativity
- The confidence and ability to take the post forward with a minimum of supervision
- Strong knowledge of mathematics, statistics or econometrics
- A genuine interest in Artificial Intelligence and related fields like data science and machine learning would be an advantage
- Knowledge of economics would be an advantage
- M.A./M.Sc. degree in economics, statistics, econometrics, mathematics, physics, operations research or a similar field; highly skilled candidates with a B.A./B.Sc. will also be considered
- Candidates with B.Tech/M.Tech or MBA degrees will not be considered

What You'll Get:
- Professional Development: Grow your career with opportunities within a consultative and professional environment
- Flexible Work Schedule: Achieve a healthy work-life balance with our flexible work schedule options, including remote work opportunities and flexible hours
- Positive Work Environment: Join a collaborative and inclusive workplace culture where your ideas are valued, diversity is celebrated, and teamwork is encouraged
- Community Involvement: Make a positive impact in the community through our volunteer programs, charitable initiatives, and corporate social responsibility efforts (and more!)

Our Values
- We act with integrity
- We are curious about the world
- We are stronger together
- We seek to empower
- We find strength in diversity
Posted 5 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Summary
The person in this role will be the technical team lead and the point of contact between the PM, Architect and People Leader. They will work closely with the Product Owner to break features down into detailed technical work chunks to be implemented by team members, will oversee the detailed technical designs of the individual features, and will need to fully understand the Modeling ecosystem and where it fits in the GridOS context.

Job Description

Roles and Responsibilities
- Serve as technical lead for the Modeling Development team: single point of contact on technical development aspects for the Architect, PO, Scrum Master and Team Manager; owns onboarding and ramp-up processes for team members; owns efficiency and quality of the development process.
- Responsible for the quality of development in terms of software performance, code quality, test automation, code coverage, CI/CD and documentation.
- Oversee the detailed technical designs of the individual features.
- Provide high-level estimates of the different features of the products.
- Own technical deliverables during the entire lifecycle of the products.
- Keep the product development lifecycle on track in terms of budget, time and quality.
- Keep track of developments happening within the GridOS ecosystem and build bridges with other engineering and services teams.
- Interact with services teams and partner integrator teams to provide processes that ensure the best use of GridOS Modeling products and services.
- Effectively communicate both verbally and in writing with peers and team members as an inclusive team member.
- Serve as a technical leader or mentor on complex, integrated implementations within the GridOS Modeling product teams.
- Work in a self-directed fashion to proactively identify system problems, failures, and areas for improvement.
- Track issue resolution, document implemented solutions, and create troubleshooting guides.
- Peer-review pull requests.

Education Qualification
- For roles outside USA: Bachelor's Degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math) with significant experience.
- For roles in USA: Bachelor's Degree in Computer Science or "STEM" majors (Science, Technology, Engineering and Math).
- Years of experience: 8+ years.

Desired Characteristics

Technical Expertise
- Strong understanding of OOP concepts
- Strong experience with Kubernetes and microservices architectures
- Container technology
- Strong expertise in Java and Python, Maven and the Spring Boot framework
- REST API (OpenAPI) and event design
- GraphQL schema and service design
- Graph technologies and frameworks: Apache Jena / Neo4j / GraphDB
- Experience with RDF and SPARQL
- Unit and integration test design
- CI/CD pipeline design
- JSON & YAML schemas
- Event-driven architecture
- Data streaming technologies such as Apache Kafka
- Microservice observability and metrics
- Integration skills
- Autonomous and able to work asynchronously (due to time zone differences)
- Software & API documentation

Good to have
- Data engineering and data architecture expertise
- Apache Camel & Apache Arrow
- Experience in the grid or energy software business (AEMS / ADMS / Energy Markets / SCADA / GIS)

Business Acumen
- Adept at navigating the organizational matrix; understands people's roles, can foresee obstacles, identify workarounds, leverage resources and rally teammates.
- Understand how internal and/or external business models work and facilitate active customer engagement
- Able to articulate the value of what is most important to the business/customer to achieve outcomes
- Able to produce functional area information in sufficient detail for cross-functional teams to utilize, using presentation and storytelling concepts
- Possess extensive knowledge of the full solution catalog within a business unit and proficiency in discussing each area at an advanced level
- Six Sigma Green Belt Certification or equivalent quality certification

Leadership
- Demonstrated working knowledge of the internal organization
- Foresee obstacles, identify workarounds, leverage resources, rally teammates
- Demonstrated ability to work with and/or lead blended teams, including 3rd-party partners and customer personnel
- Demonstrated change management/acceleration capabilities
- Strong interpersonal skills, including creativity and curiosity, with the ability to effectively communicate and influence across all organizational levels
- Proven analytical and problem resolution skills
- Ability to influence and build consensus with other Information Technology (IT) teams and leadership

Note
To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years' experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.

This Job Description is intended to provide a high-level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.

Additional Information
Relocation Assistance Provided: No
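To picture the event-driven piece of the stack named under Desired Characteristics, here is a minimal kafka-python producer publishing a model-change event. The topic name and payload are invented for illustration and are not GridOS interfaces.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # events as JSON
)
# Hypothetical event: a feeder's topology changed in the network model
producer.send("grid-model-changes", {"feeder": "F-12", "change": "topology-update"})
producer.flush()  # block until the event is actually delivered
```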
Posted 1 week ago
5.0 years
4 - 5 Lacs
Hyderābād
On-site
Lead Knowledge Engineer
Hyderabad, India
Data Management
311636

Job Description
About The Role: Grade Level (for internal use): 11

The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.

The Team: The Knowledge Engineering team within Data Strategy and Governance helps to lead fundamental organizational and operational change driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights and analytics products that are powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor in creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities:
- Develop, implement, and continue to enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery
- Design and prototype data / software engineering solutions that enable scalable construction, maintenance and consumption of semantic artefacts and the interconnected data layer for various application contexts
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met
- Influence the strategic semantic vision, roadmap, and next-generation architecture
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives to effectively persuade and influence
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL) and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environment(s); skilled in all phases from conceptualization to optimization
- Programming skills in a mainstream programming language (Python, Java, JavaScript); experience in utilizing cloud services (AWS, Google Cloud, Azure) is a great bonus
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management)

S&P Global Enterprise Data Organization is a unified, cross-divisional team focused on transforming S&P Global's data assets. We streamline processes and enhance collaboration by integrating diverse datasets with advanced technologies, ensuring efficient data governance and management.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body
- Flexible Downtime: Generous time off helps keep you energized for your time on
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

10 - Officials or Managers (EEO-2 Job Categories-United States of America), DTMGOP103.2 - Middle Management Tier II (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)

Job ID: 311636
Posted On: 2025-05-14
Location: Hyderabad, Telangana, India
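A minimal sketch of what a linked metadata scheme like those described above can look like in code, built with rdflib from RDFS/OWL terms. All URIs are illustrative assumptions, not S&P Global vocabularies.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EDM = Namespace("http://example.org/edm/")        # hypothetical enterprise scheme
LEGACY = Namespace("http://example.org/legacy/")  # hypothetical older scheme

g = Graph()
g.bind("edm", EDM)

g.add((EDM.Dataset, RDF.type, OWL.Class))
g.add((EDM.CommodityDataset, RDF.type, OWL.Class))
g.add((EDM.CommodityDataset, RDFS.subClassOf, EDM.Dataset))
g.add((EDM.CommodityDataset, RDFS.label, Literal("Commodity dataset", lang="en")))
# Harmonizing semantics across systems: declare equivalence to the legacy term
g.add((EDM.Dataset, OWL.equivalentClass, LEGACY.DataSet))

print(g.serialize(format="turtle"))
```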
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
We are seeking a skilled and creative RShiny developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny.
- Connect to and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL).
- Design and maintain backend data workflows and APIs.
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions.
- Optimize the performance and usability of RShiny applications.

Functional Skills:

Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting.
- Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics).
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic.
- Strong understanding of data visualization principles and UI/UX best practices.
- Experience with data integration and wrangling.

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus.
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS).
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications.
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.

Professional Certifications (preferred):
- SAFe methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); linked data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across projects
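For orientation, a minimal Python sketch of querying MarkLogic's triples through its REST API SPARQL service (/v1/graphs/sparql). Host, credentials, and the query vocabulary are illustrative; in the role itself this kind of call would typically be issued from R to feed an RShiny dashboard.

```python
import requests
from requests.auth import HTTPDigestAuth  # MarkLogic commonly uses digest auth

query = """
PREFIX ex: <http://example.org/>
SELECT ?s ?o WHERE { ?s ex:relatesTo ?o } LIMIT 10
"""
resp = requests.post(
    "http://localhost:8000/v1/graphs/sparql",  # illustrative host and port
    data=query,
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    auth=HTTPDigestAuth("user", "password"),
)
resp.raise_for_status()
for row in resp.json()["results"]["bindings"]:
    print(row["s"]["value"], row["o"]["value"])
```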
Posted 1 week ago
3.0 - 5.0 years
37 - 45 Lacs
Bengaluru
Work from Office
Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India

Role Description
We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 and above

Your key responsibilities
- Design & develop agentic AI applications: utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution.
- Implement RAG pipelines: integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems.
- Fine-tune language models: customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications.
- NER models: train OCR- and NLP-leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP).
- Develop knowledge graphs: construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning.
- Collaborate cross-functionally: work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements.
- Optimize AI workflows: employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience
- 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems.
- Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch.
- Experience in deploying AI models on cloud platforms (e.g., GCP, AWS).
- Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering.
- Understanding of semantic technologies, ontologies, and RDF/SPARQL.
- Familiarity with MLOps tools and practices for continuous integration and deployment.
- Skilled in building and querying knowledge graphs using tools like Neo4j.
- Hands-on experience with vector databases and embedding techniques.
- Familiarity with RAG architectures and hybrid search methodologies.
- Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
- Strong problem-solving abilities and analytical thinking.
- Excellent communication skills for cross-functional collaboration.
- Ability to work independently and manage multiple projects simultaneously.

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
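A minimal sketch of the retrieval step in a RAG pipeline using FAISS, which this listing names alongside Milvus. Random vectors stand in for a real embedding model, and the document texts are invented.

```python
import faiss
import numpy as np

dim = 384                       # a typical sentence-embedding width (illustrative)
docs = ["custody FAQ", "trade finance terms", "settlement workflow"]
doc_vecs = np.random.rand(len(docs), dim).astype("float32")  # stand-in embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; swap for IVF/HNSW at scale
index.add(doc_vecs)

query_vec = np.random.rand(1, dim).astype("float32")         # stand-in query embedding
distances, ids = index.search(query_vec, 2)                  # two nearest chunks
for i in ids[0]:
    print(docs[i])              # these chunks would be passed to the LLM as context
```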
Posted 1 week ago
3.0 - 7.0 years
5 - 8 Lacs
Hyderabad
Work from Office
The Role: The Knowledge Engineering team is seeking a Lead Knowledge Engineer to support our strategic transformation from a traditional data organization into a next-generation interconnected data intelligence organization.

The Team: The Knowledge Engineering team within Data Strategy and Governance helps to lead fundamental organizational and operational change driving our linked data, open data, and data governance strategy, both internally and externally. The team partners closely with data and software engineering to envision and build the next generation of data architecture and tooling with modern technologies.

The Impact: Knowledge Engineering efforts occur within the broader context of major strategic initiatives to extend market leadership and build next-generation data, insights and analytics products that are powered by our world-class datasets.

What's in it for you: The Lead Knowledge Engineer role is an opportunity to work as an individual contributor in creatively solving complex challenges alongside visionary leadership and colleagues. It's a role with highly visible initiatives and outsized impact. The wider division has a great culture of innovation, collaboration, and flexibility with a focus on delivery. Every person is respected and encouraged to be their authentic self.

Responsibilities:
- Develop, implement, and continue to enhance ontologies, taxonomies, knowledge graphs, and related semantic artefacts for interconnected data, as well as topical/indexed query, search, and asset discovery
- Design and prototype data/software engineering solutions that enable scalable construction, maintenance and consumption of semantic artefacts and the interconnected data layer for various application contexts
- Provide thought leadership for strategic projects, ensuring timelines are feasible, work is effectively prioritized, and deliverables are met
- Influence the strategic semantic vision, roadmap, and next-generation architecture
- Execute on the interconnected data vision by creating linked metadata schemes to harmonize semantics across systems and domains
- Analyze and implement knowledge organization strategies using tools capable of metadata management, ontology management, and semantic enrichment
- Influence and participate in governance bodies to advocate for the use of established semantics and knowledge-based tools

Qualifications:
- Able to communicate complex technical strategies and concepts in a relatable way to both technical and non-technical stakeholders and executives to effectively persuade and influence
- 5+ years of experience with ontology development, semantic web technologies (RDF, RDFS, OWL, SPARQL) and open-source or commercial semantic tools (e.g., VocBench, TopQuadrant, PoolParty, RDFLib, triple stores); advanced studies in computer science, knowledge engineering, information sciences, or a related discipline preferred
- 3+ years of experience in advanced data integration with semantic and knowledge graph technologies in complex, enterprise-class, multi-system environment(s); skilled in all phases from conceptualization to optimization
- Programming skills in a mainstream programming language (Python, Java, JavaScript); experience in utilizing cloud services (AWS, Google Cloud, Azure) is a great bonus
- Understanding of the agile development life cycle and the broader data management discipline (data governance, data quality, metadata management, reference and master data management)
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Essential Duties And Responsibilities
- Design, implement, and deploy solutions that are reliable, scalable, and perform at a high level to meet the needs of our global clients.
- Follow Agile practices and participate in planning games, code reviews & sprint demos.
- Actively participate in architectural discussions and ensure that designs follow the approved architectural patterns.
- Continually learn about new technologies, generate new ideas and improve the use of technology in the product.
- Support production issues with related products.

Job Qualifications

Education: Bachelor's degree in Computer Science, Information Technology, MIS, or a related field.

Tasks and responsibilities:
- Development of new software and adaptation of existing software.
- Integrate local systems into the international environment.
- Take a proactive role during backlog refinement (grooming) sessions on solutions for the requested requirements.
- Solve incidents.
- Analyze and improve (backend) performance.
- Deliver software and documentation.
- Plan and report on progress in accordance with Agile.

Experience:
- Minimum of 3-5 years of experience in the software industry
- Experience working in agile teams using modern technologies like Java 8, Spring, REST web services and different kinds of datastores
- Able to adopt new technologies and concepts quickly, with an ongoing interest in upcoming tools and languages

Preferred experience:
- Frameworks like Spring, CXF, Hibernate
- Test frameworks/tools (JUnit, EasyMock, approval testing, SoapUI)
- Atlassian stack (Bitbucket, Confluence, JIRA, Bamboo)
- Docker, Amazon ECS
- RDF triple stores / graph databases, SPARQL
- Agile: Scrum, Kanban and DevOps

Competencies:
- Strong collaboration and listening skills
- Excellent communication skills in English, both written and verbal
- Ability to work in a distributed, international, multicultural environment
- Responsive and flexible in handling critical support issues whenever they occur
- Strong analytical skills

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 2 weeks ago
4.0 - 6.0 years
15 - 25 Lacs
Pune
Work from Office
Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyse data from graph databases.
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines that use Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs in order to represent and store data in relational databases.

Qualifications:
- Strong proficiency in SPARQL and the RDF query language, Python, and REST APIs.
- Experience with SQL and SPARQL database technologies.

Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Experience with version control systems like GitHub.
- Understanding of environments, deployment processes and cloud infrastructure.
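A minimal sketch of the pipeline pattern this listing describes: pull records from a REST API with requests and land them as RDF triples that SPARQL can then analyse. The API URL, JSON shape, and vocabulary are assumptions made for illustration.

```python
import requests
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()

resp = requests.get("https://api.example.org/v1/devices", timeout=30)
resp.raise_for_status()
for item in resp.json():                        # assumes a JSON array of objects
    node = EX[f"device/{item['id']}"]
    g.add((node, RDF.type, EX.Device))
    g.add((node, EX.status, Literal(item["status"])))

# Downstream consumers analyse the landed data with SPARQL
query = "SELECT (COUNT(?d) AS ?n) WHERE { ?d a <http://example.org/Device> }"
for (n,) in g.query(query):
    print(f"{n} devices ingested")
```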
Posted 2 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
We help the world run better
At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

The SAP HANA Database and Analytics Core engine team is looking for an intermediate or senior developer to contribute to our Knowledge Graph Database System engine development. In this role, you will be designing, developing features, and maintaining our Knowledge Graph engine, which runs inside the SAP HANA in-memory database. At SAP, all members of the engineering team, including management, are hands-on and close to the code. If you think you can thrive in such an environment, and you have the necessary skills and experience, please do not hesitate to apply.

What You'll Do
As a developer, you will have the opportunity to:
- Contribute to hands-on coding, design, and architecture best suited to our team size and performance targets.
- Collaborate in a team environment that extends to colleagues in remote locations and from various lines of business within the company.
- Communicate with and guide other teams to construct the best possible queries for their needs.
- Assess new technology, tools, and infrastructure to keep up with the rapid pace of change.
- Embrace lean and agile software development principles.
- Debug, troubleshoot, and communicate with customers about issues with their data models and queries.
- Continually enhance existing skills and seek new areas for personal development.

What You Bring
- Bachelor's degree or equivalent university education in computer science or engineering, with 3-5 years of experience in developing enterprise-class software.
- Experience in development with modern C++.
- Knowledge of database internals such as query optimizer/planner, query executor, system management, transaction management, and/or persistence.
- Knowledge of SQL and graph technologies like RDF/SPARQL.
- Knowledge of the full SDLC and development of tests using Python or other tools.
- Experience designing and developing well-encapsulated, object-oriented code.
- Solution-oriented and open-minded.
- Able to manage collaboration with sister teams and partner resources in remote locations.
- High service and customer orientation.
- Skilled in process optimization and a driver of permanent change.
- Strong analytical thinking and problem-solving skills.
- Interpersonal skills: team player, proactive networking, results- and execution-oriented, motivated to work in an international and intercultural environment.
- Excellent oral and written communication and presentation skills.

Meet Your Team
The team is responsible for developing HANA Knowledge Graph, a high-performance graph analytics database system made available to SAP customers, partners, and various internal groups as part of the HANA Multi-Model Database System. It is specifically designed for processing large-scale graph data and executing complex graph queries with high efficiency. HANA Knowledge Graph enables organizations to gain insights from their graph datasets, discover patterns, perform advanced graph analytics, and unlock the value of interconnected data.

HANA Knowledge Graph utilizes a massively parallel processing (MPP) architecture to leverage the power of distributed computing. It is built on the W3C web standards for graph data and query language: RDF and SPARQL. The components of the HANA Knowledge Graph system include storage, data load, query parsing, query planning and optimization, query execution, transaction management, memory management, network communications, system management, data persistence, backup & restore, and performance tuning. At SAP, HANA Knowledge Graph is set to play a critical role in the development of several AI products.

Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone, regardless of background, feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability
Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.

Requisition ID: 396628 | Work Area: Software-Design and Development | Expected Travel: 0-10% | Career Status: Professional | Employment Type: Regular Full Time
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: R&D Data Modeling Manager Associate
Location: Hyderabad

Sanofi is a global life sciences company committed to improving access to healthcare and supporting the people we serve throughout the continuum of care. From prevention to treatment, Sanofi transforms scientific innovation into healthcare solutions, in human vaccines, rare diseases, multiple sclerosis, oncology, immunology, infectious diseases, diabetes and cardiovascular solutions and consumer healthcare. More than 110,000 people in over 100 countries at Sanofi are dedicated to making a difference in patients' daily lives, wherever they live, and enabling them to enjoy a healthier life.

As a company with a global vision of drug development and a highly regarded corporate culture, Sanofi is recognized as one of the best pharmaceutical companies in the world and is pioneering the application of Artificial Intelligence (AI) with a strong commitment to developing advanced data standards to increase reusability & interoperability and thus accelerate impact on global health.

The R&D Data Office serves as a cornerstone of this effort. Our team is responsible for cross-R&D data strategy, governance, and management. We partner with Business and Digital and drive data needs across priority and transformative initiatives across R&D. Team members serve as advisors, leaders, and educators to colleagues and data professionals across the R&D value chain. As an integral team member, you will be responsible for defining how R&D's structured, semi-structured and unstructured data will be stored, consumed, integrated/shared and reported by different end users such as scientists, clinicians, and more. You will also be pivotal in developing sustainable mechanisms for ensuring data are FAIR (findable, accessible, interoperable, and reusable).

Position Summary
The primary responsibility of this position is to support semantic integration and data harmonization across pharmaceutical R&D functions. In this role, you will design and implement ontologies and controlled vocabularies that enable interoperability of scientific, clinical, and operational data. Your work will be critical in accelerating discovery, improving data reuse, and enhancing insights across the drug development lifecycle.

Main Responsibilities
- Develop, maintain, and govern ontologies and semantic models for key pharmaceutical domains, including preclinical, clinical, regulatory, and translational research
- Design and implement controlled vocabularies and taxonomies to standardize terminology across experimental data, clinical trials, biomarkers, compounds, and regulatory documentation
- Collaborate with cross-functional teams including chemists, biologists, pharmacologists, data scientists, and IT architects to align semantic models with scientific workflows and data standards
- Map internal data sources to public ontologies and standards to ensure FAIR (Findable, Accessible, Interoperable, Reusable) data principles
- Leverage semantic web technologies and ontology tools to build knowledge representation frameworks
- Participate in ontology alignment, reasoning, and validation processes to ensure quality and logical consistency
- Document semantic assets, relationships, and governance policies to support internal education and external compliance

Deliverables
- Domain-specific ontologies representing concepts such as drug discovery (e.g., compounds, targets, assays), preclinical and clinical studies, biomarkers, adverse events, pharmacokinetics/dynamics, mechanisms of action, and disease models, built using OWL/RDF and aligned with public standards
- Controlled vocabularies & taxonomies for experimental conditions, cell lines, compound classes, endpoints, clinical trial protocols, etc.
- Semantic data models supporting the integration of heterogeneous data sources (e.g., lab systems, clinical trial data, external databases)
- Knowledge graphs or knowledge maps for semantic integration of structured data from internal R&D systems
- Mappings to public ontologies, standards, and external knowledge bases such as CDISC, MedDRA, LOINC, UMLS, SNOMED CT, RxNorm, UniProt, DrugBank, PubChem, and NCBI
- Ontology documentation & governance artifacts, including ontology scope, design rationale, versioning documentation, and usage guidelines for internal stakeholders
- Validation reports and consistency checks, including outputs from reasoners or SHACL validation to ensure logical coherence, and change impact assessments when modifying existing ontologies
- Training and stakeholder support materials: slide decks, workshops, and tutorials on using ontologies in data annotation, integration, and search
- Support for application developers embedding semantic layers

About You

Experience:
- 5+ years of experience in ontology engineering, data management, data analysis, data architecture, or another related field
- Proven experience in ontology development within the biomedical or pharmaceutical domain
- Experience working with biomedical ontologies and standards (e.g., GO, BAO, EFO, ChEBI, NCBI Taxonomy, NCI Thesaurus, etc.)
- Familiarity with controlled vocabulary curation and knowledge graph construction
- Demonstrated ability to understand end-to-end data use and business needs
- Knowledge and/or experience of Pharma R&D or life sciences data and data domains
- Understanding of FAIR data principles, data governance, and metadata management
- Strong analytical problem-solving skills, with strong attention to detail, quality, time management and customer focus
- Excellent written and oral communication skills; strong networking, influencing, and negotiating skills and superior problem-solving skills
- Demonstrated willingness to make decisions and to take responsibility for them
- Excellent interpersonal skills (team player)

Technical skills:
- Knowledge and experience in ontology engineering and maintenance are required
- Knowledge and experience with OWL, RDF, SKOS, and SPARQL
- Familiarity with ontology engineering tools (e.g., Protégé, CENtree, TopBraid Composer, PoolParty)
- Familiarity with ontology engineering methodologies (e.g., NeOn, METHONTOLOGY, Uschold and King, Grüninger and Fox, etc.)
- Knowledge and experience in data modeling are highly desired
- Experience with pharma R&D platforms, requirements gathering, system design, and validation/quality/compliance requirements
- Experience with hierarchical data models from conceptualization to implementation

Education: Bachelor's degree in Computer Science, Information Science, Knowledge Engineering, or a related field; Master's or higher preferred

Languages: English
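A minimal sketch of the SHACL-based consistency checking named in the deliverables, using the pySHACL library. The compound data and the shape are invented for illustration.

```python
from pyshacl import validate
from rdflib import Graph

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:aspirin a ex:Compound .        # deliberately missing the required ex:name
""", format="turtle")

shapes = Graph().parse(data="""
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
ex:CompoundShape a sh:NodeShape ;
    sh:targetClass ex:Compound ;
    sh:property [ sh:path ex:name ; sh:minCount 1 ] .
""", format="turtle")

conforms, _, report_text = validate(data, shacl_graph=shapes)
print(conforms)       # False: the compound has no ex:name
print(report_text)    # human-readable validation report
```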
Posted 2 weeks ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
About The Role
Role Description: We are seeking a Reference Data Management Senior Analyst who, as a member of the Reference Data Product team within the Enterprise Data Management organization, will be responsible for managing and promoting the use of reference data, partnering with business Subject Matter Experts on the creation of vocabularies, taxonomies and ontologies, and developing analytic solutions using semantic technologies.
Roles & Responsibilities:
Work with the Reference Data Product Owner, external resources and other engineers as part of the product team
Develop and maintain semantically appropriate concepts
Identify and address conceptual gaps in both content and taxonomy
Maintain ontology source vocabularies for new or edited codes
Support product teams to help them leverage taxonomic solutions
Analyze data from public and internal datasets
Develop a data model/schema for each taxonomy
Create taxonomies in the Semaphore Ontology Editor
Bulk-import data templates into Semaphore to add or update terms in taxonomies
Prepare SPARQL queries to generate ad hoc reports (an illustrative query sketch follows this listing)
Perform gap analysis on current and updated data
Maintain taxonomies in Semaphore through the change management process
Develop and optimize automated data ingestion pipelines in Python/PySpark where APIs are available
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Identify and resolve complex data-related challenges
Participate in sprint planning meetings and provide estimations on technical implementation
Basic Qualifications and Experience:
Master’s degree with 6 years of experience in Business, Engineering, IT or a related field OR Bachelor’s degree with 8 years of experience OR Diploma with 9+ years of experience
Functional Skills:
Must-Have Skills:
Knowledge of controlled vocabularies, classification, ontology and taxonomy
Experience in ontology development using Semaphore or a similar tool
Hands-on experience writing SPARQL queries on graph data
Excellent problem-solving skills and the ability to work with large, complex datasets
Understanding of data modeling, data warehousing, and data integration concepts
Good-to-Have Skills:
Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
Experience using cloud services such as AWS, Azure or GCP
Experience working in a product-team environment
Knowledge of Python/R, Databricks, and cloud data platforms
Knowledge of NLP (Natural Language Processing) and AI (Artificial Intelligence) for extracting and standardizing controlled vocabularies
Strong understanding of data governance frameworks, tools, and best practices
Professional Certifications:
Databricks certificate preferred
SAFe® Practitioner certificate preferred
Any data analysis certification (SQL, Python)
Any cloud certification (AWS or Azure)
Soft Skills:
Strong analytical abilities to assess and improve master data processes and solutions
Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
Effective problem-solving skills to address data-related issues and implement scalable solutions
Ability to work effectively with global, virtual teams
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
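The SPARQL reporting duty above is the kind of task that can be scripted outside an ontology editor. Below is a minimal, hedged sketch using Python’s rdflib against a SKOS-style taxonomy export; the file name and the report itself are illustrative assumptions, not Semaphore or Amgen specifics.

```python
# Minimal ad hoc taxonomy report with rdflib (file name and data are assumed).
from rdflib import Graph

g = Graph()
g.parse("taxonomy_export.ttl", format="turtle")  # hypothetical SKOS export

# Count narrower terms per top-level concept - a typical ad hoc report.
query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?top (COUNT(?narrower) AS ?termCount)
WHERE {
  ?top a skos:Concept .
  FILTER NOT EXISTS { ?top skos:broader ?parent }
  ?narrower skos:broader+ ?top .
}
GROUP BY ?top
ORDER BY DESC(?termCount)
"""
for row in g.query(query):
    print(f"{row.top}\t{row.termCount}")
```

The same query string could equally be sent to a live SPARQL endpoint (e.g. via SPARQLWrapper) rather than a file export; the query logic is unchanged.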
Posted 2 weeks ago
40.0 years
5 - 8 Lacs
Hyderābād
On-site
India - Hyderabad JOB ID: R-216718 LOCATION: India - Hyderabad WORK LOCATION TYPE: On Site DATE POSTED: May 30, 2025 CATEGORY: Information Systems
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.
ABOUT THE ROLE
Role Description: We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.
Roles & Responsibilities:
Develop interactive dashboards and web applications using RShiny
Connect to and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL); an illustrative endpoint sketch follows this listing
Design and maintain backend data workflows and APIs
Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions
Optimize performance and usability of RShiny applications
Functional Skills:
Must-Have Skills:
Proven experience with R and RShiny in a production or research setting
Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics)
Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic
Strong understanding of data visualization principles and UI/UX best practices
Experience with data integration and wrangling
Good-to-Have Skills:
Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus
Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS)
Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications
Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies
Professional Certifications (preferred):
SAFe methodology
Courses in R, RShiny, and data visualization from reputable institutions (e.g., Johns Hopkins’ “Data Science Specialization” on Coursera)
Other graph certifications (optional but beneficial):
Neo4j Certified Professional (to demonstrate transferable graph database skills)
Linked Data and Semantic Web training (via organizations like W3C or O’Reilly)
Soft Skills:
Excellent written and verbal communication skills (English), translating technology content into business language at various levels
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong problem-solving and analytical skills
Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across projects
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
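MarkLogic exposes its semantic features over a REST API, so a SPARQL query can be issued from Python (or, inside an RShiny app, via the equivalent httr call). The sketch below is a hedged assumption of a typical setup: the host, port, credentials, and triple pattern are placeholders, and the /v1/graphs/sparql path follows MarkLogic’s documented REST conventions rather than any configuration from this posting.

```python
# Hedged sketch: querying a MarkLogic SPARQL endpoint from Python.
# Host, port, credentials, and the data model are illustrative assumptions.
import requests
from requests.auth import HTTPDigestAuth  # MarkLogic commonly uses digest auth

MARKLOGIC_URL = "http://localhost:8000/v1/graphs/sparql"  # assumed app-server port

query = """
PREFIX ex: <http://example.org/>
SELECT ?drug ?target
WHERE { ?drug ex:inhibits ?target }
LIMIT 10
"""

resp = requests.post(
    MARKLOGIC_URL,
    data=query,
    auth=HTTPDigestAuth("user", "password"),  # placeholder credentials
    headers={
        "Content-Type": "application/sparql-query",
        "Accept": "application/sparql-results+json",
    },
    timeout=30,
)
resp.raise_for_status()

# SPARQL JSON results: one binding dict per row.
for binding in resp.json()["results"]["bindings"]:
    print(binding["drug"]["value"], "->", binding["target"]["value"])
```

An RShiny dashboard would typically wrap this call in a reactive expression and feed the parsed bindings into its plots or tables.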
Posted 2 weeks ago
6.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
What you will do
Let’s do this. Let’s change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. The role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and building visualizations to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Lead, and be hands-on in, the technical design, development, testing, implementation, and support of data pipelines that load the data domains in the Enterprise Data Fabric and associated data services
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a PySpark sketch follows this listing)
Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Translate data models (ontology, relational) into physical designs that are performant, maintainable, and easy to use
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation
Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible
What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications.
Basic Qualifications:
Master’s degree and 4 to 6 years of Computer Science, IT or related field experience OR Bachelor’s degree and 6 to 8 years OR Diploma and 10 to 12 years
Preferred Qualifications:
Functional Skills:
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning on big data processing
Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and graph data stores (e.g. MarkLogic, AllegroGraph, Stardog, RDF triplestores)
Experience with ETL tools such as Apache Spark and Prophecy, and with Python packages for data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Able to take user requirements and develop data models for data analytics use cases
Good-to-Have Skills:
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Experience using graph databases such as Stardog, MarkLogic, Neo4j or AllegroGraph, and writing SPARQL queries
Experience working with agile development methodologies such as Scaled Agile
Professional Certifications:
AWS Certified Data Engineer preferred
Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.
Equal opportunity statement
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
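The ETL responsibilities above typically reduce to read-transform-write jobs. Below is a minimal, hedged PySpark sketch; the paths, column names, and quality rule are assumptions for illustration, not details from this posting.

```python
# Minimal PySpark ETL sketch (paths, schema, and quality rule are assumed).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records from a landing zone (hypothetical path).
raw = spark.read.option("header", True).csv("s3://landing/products/")

# Transform: standardize columns and enforce a simple quality rule.
clean = (
    raw.withColumn("product_id", F.trim(F.col("product_id")))
       .withColumn("updated_at", F.to_timestamp("updated_at"))
       .dropDuplicates(["product_id"])
       .filter(F.col("product_id").isNotNull())
)

# Load: write partitioned Parquet to the curated zone (hypothetical path).
clean.write.mode("overwrite").partitionBy("category").parquet("s3://curated/products/")
```

In a Databricks setting the same job would usually target a Delta table instead of raw Parquet, but the extract-transform-load shape is identical.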
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Employment type: Freelance, Project-Based
What this is about: At e2f, we offer an array of remote opportunities to work on compelling projects aimed at enhancing AI capabilities. As a significant team member, you will help shape the future of AI-driven solutions. We value your skills and domain expertise, offering competitive compensation and flexible working arrangements.
Job Description: We are looking for an experienced Data Analyst with a strong background in SQL or SPARQL for a project-based position. The ideal candidate will be responsible for writing, reviewing, and optimizing queries to extract valuable insights from our knowledge base.
Qualifications:
Bachelor’s degree in Computer Science, Data Science, or a related field
At least 3 years of proven experience with SQL
Familiarity with SPARQL and Cypher is a huge plus
Experience with knowledge graphs
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
Ability to prioritize and manage workload efficiently
Understanding of and adherence to project guidelines and policies
Responsibilities:
Commit a minimum of 4 hours per day, on a flexible schedule (you can split your hours as you prefer)
Participate in a training meeting
Adhere to deadlines and guideline standards
What We Offer:
Engage in exciting generative AI development from the convenience of your home
Enjoy flexible work hours and availability
If you’re interested: Apply to our job advertisement. We’ll review your profile and, if it aligns with our search, contact you as soon as possible to share rates and further details.
About Us: e2f is dedicated to facilitating natural communication between people and machines across languages and cultures. With expertise in data science, we provide top-tier linguistic datasets for AI and NLP projects. Learn more at www.e2f.com
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Description
The role is based in Munich, Germany (this is not a remote opportunity). We offer immigration and relocation support.
The vision of the Ontology Product Knowledge Team is to provide a standardized, semantically rich, easily discoverable, extensible, and universally applicable body of product knowledge that can be consistently utilized across customer shopping experiences, selling partner listing experiences, and internal enrichment of product data. We aim to make product knowledge compelling, easy to use, and feature rich. Our work to build comprehensive product knowledge allows us to semantically understand a customer’s intent, whether that is a shopping mission or a seller offering products. We strive to make these experiences more intuitive for all customers.
As an Ontologist, you work on a global team of knowledge builders to deliver world-class, intuitive, and comprehensive taxonomy and ontology models to optimize product discovery for Amazon web and mobile experiences. You collaborate with business partners and engineering teams to deliver knowledge-based solutions to enable product discoverability for customers. In this role, you will directly impact the customer experience as well as the company’s product knowledge foundation.
Tasks and Responsibilities
Develop logical, semantically rich, and extensible data models for Amazon’s extensive product catalog (a schema-modeling sketch follows this listing)
Ensure our ontologies provide comprehensive domain coverage and are available for both human and machine ingestion and inference
Create new schema using generative artificial intelligence (generative AI) models
Analyze website metrics and product discovery behaviors to make data-driven decisions on optimizing our knowledge graph data models globally
Expand and refine data retrieval techniques to utilize our extensive knowledge graph
Contribute to team goal setting and future state vision
Drive and coordinate cross-functional projects with a broad range of merchandisers, engineers, designers, and other groups, which may include architecting new data solutions
Develop team operational excellence programs, data quality initiatives and process simplifications
Evangelize ontology and semantic technologies within and across teams at Amazon
Develop and refine data governance and processes used by global Ontologists
Mentor and influence peers
Inclusive Team Culture: Our team has a global presence: we celebrate diverse cultures and backgrounds within our team and our customer base. We are committed to furthering our culture of inclusion, offering continuous access to internal affinity groups as well as highlighting diversity programs.
Work/Life Harmony: Our team believes that striking the right balance between work and your outside life is key. Our work is not removed from everyday life, but instead is influenced by it. We offer flexibility in working hours and will work with you to facilitate your own balance between your work and personal life.
Career Growth: Our team cares about your career growth, from your initial company introduction and training sessions to continuous support throughout your entire career at Amazon. We recognize each team member as an individual, and we will build on your skills to help you grow. We have a broad mix of experience levels and tenures, and we are building an environment that celebrates knowledge sharing.
Perks: You will have the opportunity to support CX used by millions of customers daily and to work with data at a scale very few companies can offer.
We have offices around the globe, and you may be considered for global placement. You’ll receive on-the-job training and group development opportunities.
Basic Qualifications
Degree in Library Science, Information Systems, Linguistics or equivalent professional experience
5+ years of relevant work experience in ontology and/or taxonomy roles
Proven skills in data retrieval and data research techniques
Ability to quickly understand complex processes and communicate them in simple language
Experience creating and communicating technical requirements to engineering teams
Ability to communicate with senior leadership (Director and VP levels)
Experience with generative AI (e.g. creating prompts)
Knowledge of Semantic Web technologies (RDF(S), OWL), query languages (SPARQL) and validation/reasoning standards (SHACL, SPIN)
Knowledge of open-source and commercial ontology engineering editors (e.g. Protege, TopQuadrant products, PoolParty)
Detail-oriented problem solver who is able to work in a fast-changing environment and manage ambiguity
Proven track record of strong communication and interpersonal skills
Proficient English language skills
Preferred Qualifications
Master’s degree in Library Science, Information Systems, Linguistics or other relevant fields
Experience building ontologies in the e-commerce and semantic search spaces
Experience working with schema-level constructs (e.g. higher-level classes, punning, property inheritance)
Proficiency in SQL, SPARQL
Familiarity with the software engineering life cycle
Familiarity with ontology manipulation programming libraries
Exposure to data science and/or machine learning, including graph embedding
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - Amazon Dev Center India - Hyderabad - A85
Job ID: A2837060
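The schema-level constructs this posting names (higher-level classes, property inheritance) are easy to make concrete with an ontology manipulation library. A minimal hedged sketch in rdflib follows; every class, property, and IRI is invented for illustration and has nothing to do with Amazon’s actual product ontology.

```python
# Hedged sketch: a tiny product ontology with class and property inheritance.
# All IRIs and names are invented for illustration (rdflib).
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/product/")
g = Graph()
g.bind("ex", EX)

# A higher-level class with two levels of subclasses.
g.add((EX.Product, RDF.type, OWL.Class))
g.add((EX.Electronics, RDFS.subClassOf, EX.Product))
g.add((EX.Headphones, RDFS.subClassOf, EX.Electronics))

# A property declared on the top class; subclasses inherit it via the domain.
g.add((EX.hasBrand, RDF.type, OWL.ObjectProperty))
g.add((EX.hasBrand, RDFS.domain, EX.Product))

# An instance typed at the most specific level; an RDFS reasoner would also
# infer it to be an Electronics and a Product.
g.add((EX.item42, RDF.type, EX.Headphones))
g.add((EX.item42, RDFS.label, Literal("Example over-ear headphones")))

print(g.serialize(format="turtle"))
```

Keeping properties on the highest sensible class is what makes this kind of model extensible: new leaf categories inherit the shared vocabulary instead of redefining it.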
Posted 3 weeks ago
0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
Role Overview: We are seeking a motivated Junior AI Testing Engineer to join our team. In this role, you will support the testing of AI models and pipelines, with a special focus on data ingestion into knowledge graphs and knowledge graph administration. You will collaborate with data scientists, engineers, and product teams to ensure the quality, reliability, and performance of our AI-driven solutions.
Key Responsibilities:
AI Model & Pipeline Testing: Design and execute test cases for AI models and data pipelines, ensuring accuracy, stability, and fairness
Knowledge Graph Ingestion: Support the development and testing of Python scripts for data extraction, transformation, and loading (ETL) into enterprise knowledge graphs
Knowledge Graph Administration: Assist in maintaining, monitoring, and troubleshooting knowledge graph environments (e.g., Neo4j, RDF stores), including user access and data integrity
Test Automation: Develop and maintain basic automation scripts (preferably in Python) to streamline testing processes for AI functionalities (a test sketch follows this listing)
Data Quality Assurance: Evaluate and validate the quality of input and output data for AI models, reporting and documenting issues as needed
Bug Reporting & Documentation: Identify, document, and communicate bugs or issues discovered during testing; maintain clear testing documentation and reports
Collaboration: Work closely with knowledge graph engineers, data scientists, and product managers to understand requirements and deliver robust solutions
Requirements:
Education: Bachelor’s degree in Computer Science, Information Technology, or a related field
Experience: ideally experience in software/AI testing, data engineering, or a similar technical role
Technical Skills:
Proficient in Python (must have)
Experience with test case design, execution, and bug reporting
Exposure to knowledge graph technologies (e.g., Neo4j, RDF, SPARQL) and data ingestion/ETL processes
Analytical & Problem-Solving Skills: Strong attention to detail, ability to analyze data and systems, and troubleshoot issues
Communication: Clear verbal and written communication skills for documentation and collaboration
Preferred Qualifications:
Experience with graph query languages (e.g., Cypher, SPARQL)
Exposure to cloud platforms (AWS, Azure, GCP) and CI/CD workflows
Familiarity with data quality and governance practices
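Testing knowledge-graph ingestion often comes down to small, automated invariants: did the pipeline create nodes, and do they carry the required properties? A hedged pytest sketch against Neo4j follows; the connection details, node labels, and invariants are assumptions for illustration, not details from this posting.

```python
# Hedged pytest sketch for a knowledge-graph ingestion smoke test.
# URI, credentials, labels, and invariants are illustrative assumptions.
import pytest
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"
AUTH = ("neo4j", "password")  # placeholder credentials


@pytest.fixture(scope="module")
def driver():
    drv = GraphDatabase.driver(URI, auth=AUTH)
    yield drv
    drv.close()


def test_ingestion_loaded_documents(driver):
    # The pipeline under test is expected to have created Document nodes.
    with driver.session() as session:
        count = session.run("MATCH (d:Document) RETURN count(d) AS n").single()["n"]
    assert count > 0, "ingestion produced no Document nodes"


def test_documents_have_required_properties(driver):
    # Every Document node should carry a non-null source_id after ingestion.
    with driver.session() as session:
        missing = session.run(
            "MATCH (d:Document) WHERE d.source_id IS NULL RETURN count(d) AS n"
        ).single()["n"]
    assert missing == 0, f"{missing} Document nodes are missing source_id"
```

Run with `pytest` after the ingestion job completes; in a CI/CD workflow these checks gate promotion of the pipeline to the next environment.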
Posted 4 weeks ago
5 - 9 years
7 - 11 Lacs
Kochi, Coimbatore, Thiruvananthapuram
Work from Office
Job Title: Senior Data Engineer (Graph DB Specialist) - Global Song
Management Level: 9, Specialist
Location: Kochi, Coimbatore
Must-have skills: Data Modeling Techniques and Methodologies
Good-to-have skills: Proficiency in Python and PySpark programming
Job Summary: We are seeking a highly skilled Data Engineer with expertise in graph databases to join our dynamic team. The ideal candidate will have a strong background in data engineering, graph querying languages, and data modeling, with a keen interest in leveraging cutting-edge technologies like vector databases and LLMs to drive functional objectives.
Your responsibilities will include:
Design, implement, and maintain ETL pipelines to prepare data for graph-based structures
Develop and optimize graph database solutions using querying languages such as Cypher, SPARQL, or GQL; Neo4j experience is preferred
Build and maintain ontologies and knowledge graphs, ensuring efficient and scalable data modeling
Integrate vector databases and implement similarity search techniques, with a focus on Retrieval-Augmented Generation (RAG) methodologies and GraphRAG (a retrieval sketch follows this listing)
Collaborate with data scientists and engineers to operationalize machine learning models and integrate them with graph databases
Work with Large Language Models (LLMs) to achieve functional and business objectives
Ensure data quality, integrity, and security while delivering robust and scalable solutions
Communicate effectively with stakeholders to understand business requirements and deliver solutions that meet objectives
Roles & Responsibilities:
Experience: at least 5 years of hands-on experience in data engineering, including 2 years working with graph databases
Querying: advanced knowledge of Cypher, SPARQL, or GQL
ETL processes: expertise in designing and optimizing ETL processes for graph structures
Data modeling: strong skills in creating ontologies and knowledge graphs, and in presenting data for GraphRAG-based solutions
Vector databases: understanding of similarity search techniques and RAG implementations
LLMs: experience working with Large Language Models for functional objectives
Communication: excellent verbal and written communication skills
Cloud platforms: experience with Azure analytics platforms, including Function Apps, Logic Apps, and Azure Data Lake Storage (ADLS)
Graph analytics: familiarity with graph algorithms and analytics
Agile methodology: hands-on experience working in Agile teams and processes
Machine learning: understanding of machine learning models and their implementation
Qualifications:
Experience: minimum 5-10 years of experience required
Educational qualification: any graduation / BE / B.Tech
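GraphRAG combines the two retrieval modes this listing names: vector similarity to find entry points, then graph traversal to gather structured context for the LLM. The hedged sketch below shows that shape; the embedding function is a stand-in for a real embedding model, and the node schema, labels, and connection details are invented for illustration.

```python
# Hedged GraphRAG-style retrieval sketch: vector similarity selects entry nodes,
# then Cypher expands their neighborhood into LLM context.
# The embedding function, node schema, and connection details are assumptions.
import numpy as np
from neo4j import GraphDatabase


def embed(text: str) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)


def top_k_nodes(question: str, node_embeddings: dict, k: int = 3):
    """Rank candidate nodes by cosine similarity to the question embedding."""
    q = embed(question)
    scores = {
        node_id: float(vec @ q / (np.linalg.norm(vec) * np.linalg.norm(q)))
        for node_id, vec in node_embeddings.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]


def expand_context(driver, node_ids):
    """Pull one-hop neighborhoods of the entry nodes as context for the LLM."""
    cypher = (
        "MATCH (n:Entity)-[r]-(m:Entity) "
        "WHERE n.id IN $ids "
        "RETURN n.id AS source, type(r) AS rel, m.id AS target"
    )
    with driver.session() as session:
        return [dict(rec) for rec in session.run(cypher, ids=list(node_ids))]
```

The returned triples would then be serialized into the prompt, giving the LLM grounded, structured context rather than raw text chunks alone.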
Posted 1 month ago
5 - 7 years
8 - 14 Lacs
Hyderabad
Work from Office
Department: Platform Engineering
Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.
Responsibilities:
Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.
Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.
Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.
Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.
Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.
Qualifications:
Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO, CCO, or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (a validation sketch follows this listing).
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.
Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
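SHACL, named in the qualifications above, is the usual way to enforce the data quality duties this role describes: shapes declare what valid graph data looks like, and a validator reports violations. A minimal hedged sketch with the pyshacl library follows; the shape and data are invented for illustration.

```python
# Hedged sketch: validating a data graph against a SHACL shape with pyshacl.
# The shape, namespaces, and data are invented for illustration.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:ProductShape a sh:NodeShape ;
    sh:targetClass ex:Product ;
    sh:property [
        sh:path ex:hasName ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
    ] .
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:widget1 a ex:Product .   # missing ex:hasName, so validation should fail
"""

shapes = Graph().parse(data=shapes_ttl, format="turtle")
data = Graph().parse(data=data_ttl, format="turtle")

conforms, _, results_text = validate(data, shacl_graph=shapes, inference="rdfs")
print(conforms)       # False: the minCount constraint is violated
print(results_text)   # human-readable validation report
```

Wired into an ingestion pipeline, a non-conforming report like this would block the offending batch before it reaches the production knowledge graph.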
Posted 1 month ago
2 - 6 years
5 - 9 Lacs
Bengaluru
Work from Office
Hello Visionary! We know that the only way a business thrives is if our people are growing. That’s why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?
We are looking for a Semantic Web ETL Developer: a highly experienced, hands-on candidate with expertise in web development using the Django framework, well versed in algorithmic and data structure concepts, and experienced in ETL projects. The candidate should be well versed in semantic data development using libraries like rdflib or pySHACL; a data query language such as SPARQL is an add-on. AWS knowledge is not a must, but experience with some cloud technology is required.
You’ll make a difference by
International experience with global projects and collaboration with intercultural teams is preferred
5-10 years’ experience developing software solutions with the Python language
Experience in research and development processes (software-based solutions and products), in commercial topics, and in implementation of strategies and POCs
Manage end-to-end development of web applications and knowledge graph projects, ensuring best practices and high code quality
Provide technical guidance and mentorship to junior developers, fostering their growth and development
Design scalable and efficient architectures for web applications, knowledge graphs, and database models
Enforce code standards and perform code reviews, ensuring alignment with best practices like PEP 8, DRY, and SOLID principles
Collaborate with frontend developers, DevOps teams, and database administrators to deliver cohesive solutions
Expert-level proficiency in Python web frameworks (Django, Flask (optional), FastAPI) and knowledge graph libraries (an API sketch follows this listing)
Experience in designing and developing complex RESTful APIs and microservices architectures
Strong understanding of security best practices in web applications (e.g., authentication, authorization, and data protection)
Extensive experience in building and querying knowledge graphs using Python libraries like RDFLib, Py2neo, or similar
Proficiency in SPARQL for advanced graph data querying
Experience with graph databases like Neo4j, GraphDB, Blazegraph, or AWS Neptune
Experience in expert functions like software development/architecture and software testing (unit testing, integration testing)
Excellent in DevOps practices, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes)
Excellent in cloud technologies and architecture; should have exposure to S3, EKS, ECR, and AWS Neptune
Exposure to and working experience in a relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities) is required
You’ll win us over by
Leadership qualities:
Visionary leadership: ability to lead the team towards long-term technical goals while managing immediate priorities
Strong communication: excellent interpersonal skills to work effectively with both technical and non-technical stakeholders
Mentorship & coaching: foster a culture of continuous learning, skill development, and collaboration within the team
Conflict resolution: ability to manage team conflicts and provide constructive feedback to improve team dynamics
We are looking forward to receiving your online application. Please ensure you complete all areas of the application form to the best of your ability, as we will use the data to review your suitability for the role.
Create a better #TomorrowWithUs! This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you’ll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. Find out more about Siemens careers at www.siemens.com/careers
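Since this role pairs Python web frameworks with knowledge-graph libraries, a small hedged sketch of the combination may help: a FastAPI endpoint answering a label lookup from an rdflib graph. The graph file, query, and route are invented for illustration and are not from this posting.

```python
# Hedged sketch: serving knowledge-graph lookups over a FastAPI endpoint.
# The graph file, query, and route are invented for illustration.
from fastapi import FastAPI
from rdflib import Graph, Literal

app = FastAPI()
graph = Graph()
graph.parse("knowledge_graph.ttl", format="turtle")  # hypothetical graph export

LABEL_QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?s ?label
WHERE {
  ?s rdfs:label ?label .
  FILTER(CONTAINS(LCASE(STR(?label)), LCASE(STR(?term))))
}
LIMIT 25
"""


@app.get("/labels/{term}")
def find_by_label(term: str):
    """Return subjects whose rdfs:label contains the search term."""
    rows = graph.query(LABEL_QUERY, initBindings={"term": Literal(term)})
    return [{"subject": str(s), "label": str(label)} for s, label in rows]
```

Run it with `uvicorn module:app`; a Django implementation would follow the same pattern with a view function in place of the route decorator.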
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Gurugram
Work from Office
Position Summary: A Data Engineer designs and maintains scalable data pipelines and storage systems, with a focus on integrating and processing knowledge graph data for semantic insights. They enable efficient data flow, ensure data quality, and support analytics and machine learning by leveraging advanced graph-based technologies.
How You’ll Make an Impact (responsibilities of role)
Build and optimize ETL/ELT pipelines for knowledge graphs and other data sources
Design and manage graph databases (e.g., Neo4j, AWS Neptune, ArangoDB)
Develop semantic data models using RDF, OWL, and SPARQL
Integrate structured, semi-structured, and unstructured data into knowledge graphs (a mapping sketch follows this listing)
Ensure data quality, security, and compliance with governance standards
Collaborate with data scientists and architects to support graph-based analytics
What You Bring (required qualifications and skills)
Bachelor’s/Master’s in Computer Science, Data Science, or related fields
Experience: 3+ years of experience in data engineering, with knowledge graph expertise
Proficiency in Python, SQL, and graph query languages (SPARQL, Cypher)
Experience with graph databases and frameworks (Neo4j, GraphQL, RDF)
Knowledge of cloud platforms (AWS, Azure)
Strong problem-solving and data modeling skills
Excellent communication skills, with the ability to convey complex concepts to non-technical stakeholders
The ability to work collaboratively in a dynamic team environment across the globe
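Integrating structured records into a knowledge graph is usually a mapping exercise: each row becomes a node IRI plus a handful of triples. A hedged rdflib sketch follows; the rows, namespace, and predicates are invented for illustration.

```python
# Hedged sketch: mapping relational-style rows into RDF triples with rdflib.
# The rows, namespace, and predicates are invented for illustration.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Stand-in for rows fetched from a relational source (e.g., via SQL).
rows = [
    {"id": "1", "name": "Pump A", "site": "Gurugram"},
    {"id": "2", "name": "Pump B", "site": "Kochi"},
]

for row in rows:
    subject = EX[f"asset/{row['id']}"]  # mint a stable IRI per row
    g.add((subject, RDF.type, EX.Asset))
    g.add((subject, EX.name, Literal(row["name"])))
    g.add((subject, EX.locatedAt, EX[row["site"]]))

print(g.serialize(format="turtle"))
```

The key design choice is the IRI-minting rule: deriving the subject IRI deterministically from the source primary key keeps repeated ETL runs idempotent instead of duplicating nodes.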
Posted 1 month ago