10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
OpenText - The Information Company

OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

ENABLING THE DIGITAL WORLD

OpenText enables the digital world as the global leader in Enterprise Information Management, both on premises and in the cloud. We are committed to being the best place to work for more than 15,000 employees in over 120 locations. What we do, we do well. What we create, we do purposefully to impact the world. If you believe in this and are passionate about directing people towards a better way to work, then let OpenText enable your digital world career.

About The Role

As a Principal Software Architect on the Commercial Architecture team at OpenText, you will lead the design and governance of modern application and cloud architectures across a diverse product landscape. This is a strategic and hands-on role focused on driving architectural consistency, scalability, and modernization across OpenText's commercial software portfolio. You will work cross-functionally with product engineering, operations, product management, and security to guide and influence the technical direction of cloud-native services, APIs, identity, and next-generation application platforms. This role is ideal for someone who combines deep technical expertise with strategic thinking and a passion for designing large-scale, modern, secure systems.
Strong data engineering and AI experience is required.

Key Responsibilities
- Develop unified data strategies that eliminate fragmented application landscapes and silos and integrate structured and unstructured data
- Define data product specifications and standards that treat information as engineered assets with standardized service contracts and governance controls
- Design AI-driven knowledge graph construction from vast repositories of unstructured content
- Architect AI-native data pipelines integrating with OpenText's Aviator AI platform for contextual search and agentic workflows
- Design accuracy enhancement frameworks to reduce hallucinations and provide verifiable results
- Drive strategic customer pilots in content-heavy industries (legal, healthcare, financial services) with clear ROI demonstration
- Provide architectural governance and consultation across a portfolio of products, ensuring alignment with enterprise standards
- Shape technical vision, translate architectural concepts into business value, and present to C-level executives

Required Qualifications
- 10+ years of enterprise data architecture experience designing large-scale platforms for Fortune 500 companies
- Deep expertise in modern data architecture patterns (data pipelines, data marts, and data fabric) and data processing frameworks (Spark, Kafka)
- Advanced knowledge of knowledge graphs and semantic technologies (RDF, SPARQL, Neo4j)
- Strong AI/ML background, particularly LLMs and automated knowledge extraction from unstructured content
- Data governance, compliance frameworks (GDPR, HIPAA, SOX), and enterprise security architectures
- Extensive multi-cloud architecture experience (AWS, Azure, GCP) and hybrid cloud strategies
- Experience working in or with large-scale SaaS platforms, including deployment, scale, and operational considerations

Leadership and Communication
- Proven ability to define and drive a compelling technical vision, leading large-scale architectural transformation initiatives
- Demonstrated success in leading cross-functional teams within complex B2B environments
- Exceptional communication and interpersonal skills, with a strong ability to influence and align diverse technical and business stakeholders
- Advanced documentation and visualization capabilities, including authoring architecture blueprints, reusable patterns, and reference artifacts that promote consistency and scalability across teams
- Extensive experience in customer-facing engagements, providing expert technical guidance, solutioning, and support in high-stakes scenarios

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
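The posting above centers on knowledge graph construction and semantic technologies (RDF, SPARQL, Neo4j). As a minimal sketch of the underlying idea, the toy triple store below shows the subject-predicate-object data model and wildcard pattern matching, loosely analogous to a SPARQL basic graph pattern. The contract identifiers and predicates are invented for illustration; a real deployment would use an engine such as Neo4j, Stardog, or Amazon Neptune.

```python
# Toy in-memory triple store illustrating the subject-predicate-object
# model behind RDF knowledge graphs. Purely illustrative: real systems
# use dedicated graph engines with indexes, reasoning, and persistence.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Match triples against a pattern; None acts as a wildcard,
        much like a variable in a SPARQL basic graph pattern."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

store = TripleStore()
store.add("contract-42", "hasType", "NDA")          # invented example data
store.add("contract-42", "governedBy", "jurisdiction:US-NY")
store.add("contract-17", "hasType", "NDA")

# All NDAs in the graph -- the pattern (?s, hasType, NDA)
ndas = sorted(t[0] for t in store.query(predicate="hasType", obj="NDA"))
```

A content-heavy pilot (legal, healthcare) would populate such a graph from extracted entities rather than hand-written triples, but the query shape stays the same.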
Posted 1 day ago
9.0 years
0 Lacs
Hyderabad
On-site
What We're Looking For:

Must have 9+ years of application development experience in the following:
- Java 17
- Spring, Spring Boot
- REST APIs, multi-threading
- SQL Server database
- Experience using AI coding assistants (e.g., GitHub Copilot, Amazon CodeWhisperer) to accelerate development while maintaining high code quality
- Proficiency in developing and implementing MCP servers to enable seamless integration between AI assistants and external systems, tools, and data sources
- Strong familiarity with agentic AI principles, including autonomous decision-making systems, adaptive learning mechanisms, and intelligent agent architectures that can operate independently while learning from interactions
- Advanced skills in prompt engineering techniques and model fine-tuning to optimize AI performance for specific use cases and domain requirements
- Hands-on experience with leading agentic frameworks such as LangChain, Semantic Kernel, CrewAI, and similar platforms for building sophisticated AI agent systems and workflows
- Understanding of RDF (Resource Description Framework), the SPARQL query language, and SHACL (Shapes Constraint Language) for data validation and modeling
- Experience with semantic graph databases, particularly Stardog, and domain-specific ontologies including the FIBO/CDM frameworks
- Strong knowledge of Java distributed computing technologies, Spring, REST, and modern Java web technologies

The right candidate will also demonstrate solid OO programming, including object-oriented design patterns, and have strong opinions on best programming practices. Well versed with continuous integration and continuous delivery tools and techniques. Improve and maintain continuous deployment methodologies, including working with SQA teams to enforce unit, regression, and integration testing. Work closely with analysts to gather business requirements, and develop and deliver highly scalable and numerate financial applications. Validate developed solutions to ensure that requirements are met and the results meet the business needs.

Personal competencies
- Strong analytical, investigative, and problem-solving skills
- Commitment to producing quality work in a timely manner
- Confident, articulate, and a fast learner
- Willing to progress in an exciting, fast-paced environment
- Self-starter with a natural curiosity to learn and develop capabilities
- Strong oral and written communication skills
- Interpersonal skills and ability to work cooperatively with cross-functional teams
- Strong team player who is comfortable working on a variety of projects using diverse technologies
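The role above calls for SHACL-style data validation. Real SHACL operates on RDF graphs with a rich constraint vocabulary; as a rough sketch of the idea only, the toy validator below checks a record against a declared "shape" of required properties and types. The Trade shape and its fields are invented for illustration (FIBO-style domains model such entities formally).

```python
# Schematic illustration of shape-based validation in the spirit of SHACL:
# a "shape" declares which properties a node must carry and a type check
# for each. NOT real SHACL -- just the constraint-driven validation idea.

def validate(node: dict, shape: dict) -> list:
    """Return a list of violation messages; an empty list means conforming."""
    violations = []
    for prop, expected_type in shape.items():
        if prop not in node:
            violations.append(f"missing required property: {prop}")
        elif not isinstance(node[prop], expected_type):
            violations.append(f"{prop}: expected {expected_type.__name__}")
    return violations

# Hypothetical "Trade" shape for a financial record (fields invented).
trade_shape = {"isin": str, "notional": float, "counterparty": str}

good = {"isin": "US0378331005", "notional": 1_000_000.0, "counterparty": "ACME"}
bad = {"isin": "US0378331005", "notional": "1M"}  # wrong type, missing field

ok_report = validate(good, trade_shape)
errors = validate(bad, trade_shape)
```

In a production pipeline the same pattern runs as a gate: records failing their shape are quarantined rather than loaded.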
Posted 3 days ago
6.0 - 9.0 years
0 Lacs
Hyderabad
On-site
Must have 6-9 years of application development experience in the following:
- Java 17
- Spring, Spring Boot
- REST APIs, multi-threading
- SQL Server database
- Experience using AI coding assistants (e.g., GitHub Copilot, Amazon CodeWhisperer) to accelerate development while maintaining high code quality
- Proficiency in developing and implementing MCP servers to enable seamless integration between AI assistants and external systems, tools, and data sources
- Strong familiarity with agentic AI principles, including autonomous decision-making systems, adaptive learning mechanisms, and intelligent agent architectures that can operate independently while learning from interactions
- Advanced skills in prompt engineering techniques and model fine-tuning to optimize AI performance for specific use cases and domain requirements
- Understanding of RDF (Resource Description Framework), the SPARQL query language, and SHACL (Shapes Constraint Language) for data validation and modeling
- Experience with semantic graph databases, particularly Stardog, and domain-specific ontologies including the FIBO/CDM frameworks
- Strong knowledge of Java distributed computing technologies, Spring, REST, and modern Java web technologies

The right candidate will also demonstrate solid OO programming, including object-oriented design patterns, and have strong opinions on best programming practices. Well versed with continuous integration and continuous delivery tools and techniques. Improve and maintain continuous deployment methodologies, including working with SQA teams to enforce unit, regression, and integration testing. Work closely with analysts to gather business requirements, and develop and deliver highly scalable and numerate financial applications. Validate developed solutions to ensure that requirements are met and the results meet the business needs.

Personal competencies
- Strong analytical, investigative, and problem-solving skills
- Commitment to producing quality work in a timely manner
- Confident, articulate, and a fast learner
- Willing to progress in an exciting, fast-paced environment
- Self-starter with a natural curiosity to learn and develop capabilities
- Strong oral and written communication skills
- Interpersonal skills and ability to work cooperatively with cross-functional teams
- Strong team player who is comfortable working on a variety of projects using diverse technologies
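This posting asks for experience wiring AI assistants to external tools via MCP servers. The sketch below shows only the general tool-registry dispatch pattern such integrations rely on; it is NOT the actual Model Context Protocol, which defines a JSON-RPC wire format and capability negotiation. The tool name, its arguments, and the price data are all invented.

```python
# Minimal tool-registry dispatch: register named callables, then route an
# incoming tool-call request (JSON) to the matching implementation. A real
# MCP server would expose these tools over the protocol's JSON-RPC layer.

import json

TOOLS = {}

def tool(name):
    """Decorator registering a callable under a name an assistant can invoke."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_quote")
def get_quote(symbol: str) -> dict:
    # Stand-in for a real market-data lookup.
    prices = {"ACME": 101.5, "GLOBEX": 47.2}
    return {"symbol": symbol, "price": prices.get(symbol)}

def dispatch(request_json: str) -> dict:
    """Route a tool-call request to the registered implementation."""
    req = json.loads(request_json)
    fn = TOOLS[req["tool"]]
    return fn(**req["arguments"])

result = dispatch('{"tool": "get_quote", "arguments": {"symbol": "ACME"}}')
```

The decorator-based registry keeps tool discovery declarative, which is why most agent frameworks (LangChain, Semantic Kernel) expose a similar registration surface.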
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data. Knowledge of Medallion Architecture will be an added advantage.
- Build and manage graph database solutions (e.g., Neo4j, Stardog, Amazon Neptune) to support knowledge graphs, relationship modeling, and inference use cases.
- Leverage SPARQL, Cypher, or Gremlin to query and analyze data within graph ecosystems.
- Implement and maintain data ontologies to support semantic interoperability and consistent data classification.
- Collaborate with architects to integrate ontology models with metadata repositories and business glossaries.
- Support data governance and metadata management through integration of lineage, quality rules, and ontology mapping.
- Contribute to data cataloging and knowledge graph implementations using RDF, OWL, or similar technologies.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Apply data engineering best practices including CI/CD, version control, and code modularity.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications: Master's/Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills:
- Bachelor's or master's degree in computer science, Data Science, or a related field.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); Python for workflow orchestration; performance tuning of big data processing.
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Strong programming skills in Python, PySpark, and SQL.
- Solid experience designing and querying graph databases (e.g., AllegroGraph, MarkLogic).
- Proficiency with ontology languages and tools (e.g., TopBraid, RDF, OWL, Protégé, SHACL).
- Familiarity with SPARQL and/or Cypher for querying semantic and property graphs.
- Experience working with cloud data services (Azure, AWS, or GCP).
- Strong understanding of data modeling, entity relationships, and semantic interoperability.
Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
- Graph-DB-related certifications.

Professional Certifications: AWS Certified Data Engineer preferred; Databricks certification preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
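The Amgen role mentions Medallion Architecture, the bronze/silver/gold layering typically built on Databricks and Delta Lake. As a plain-Python sketch only (a real pipeline would use Spark DataFrames and Delta tables), the example below shows the three refinement stages: raw ingested records, deduplicated and quality-checked records, and a business-level aggregate. The patient records are invented sample data.

```python
# Medallion-style refinement in miniature:
#   bronze = data exactly as ingested (duplicates, nulls included)
#   silver = deduplicated, type-cast, quality-checked
#   gold   = business-level aggregate ready for reporting

raw_events = [  # bronze layer
    {"patient_id": "p1", "reading": "98.6"},
    {"patient_id": "p1", "reading": "98.6"},   # duplicate row
    {"patient_id": "p2", "reading": None},     # fails quality check
    {"patient_id": "p2", "reading": "101.2"},
]

def to_silver(bronze):
    """Silver: drop nulls and duplicates, cast readings to float."""
    seen, silver = set(), []
    for row in bronze:
        key = (row["patient_id"], row["reading"])
        if row["reading"] is not None and key not in seen:
            seen.add(key)
            silver.append({**row, "reading": float(row["reading"])})
    return silver

def to_gold(silver):
    """Gold: mean reading per patient."""
    totals = {}
    for row in silver:
        totals.setdefault(row["patient_id"], []).append(row["reading"])
    return {pid: sum(v) / len(v) for pid, v in totals.items()}

gold = to_gold(to_silver(raw_events))
```

Keeping each layer as a pure function of the previous one is what makes the pattern easy to re-run and audit, which is the main operational argument for medallion layering.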
Posted 5 days ago
4.0 - 8.0 years
8 - 17 Lacs
Hyderabad
Work from Office
Job Title: Python Backend Developer
Location: Hyderabad
Employment Type: Full-time

Job Summary
We are seeking an experienced Python Backend Developer to design, develop, and maintain scalable backend services. The role involves building high-performance data pipelines, deploying applications on cloud infrastructure (AWS/EKS), integrating AI/Generative AI models, and working with GraphDB for advanced data querying and analytics.

Key Responsibilities
* Backend Development: Design, develop, and maintain Python-based backend services for real-time and batch data processing.
* Cloud Infrastructure: Deploy and manage containerized applications on AWS (primarily EKS), ensuring scalability, high availability, and fault tolerance.
* Data Processing: Build efficient and scalable workflows for large datasets, covering both batch and real-time processing.
* AI/Generative AI Integration: Collaborate with data scientists and AI engineers to integrate AI/Generative AI models into backend pipelines.
* GraphDB Expertise: Implement and query complex graph databases (RDF/SPARQL) to support semantic querying and advanced analytics.
* Performance Optimization: Monitor, profile, and optimize backend systems for low-latency, high-throughput performance.
* Code Quality & Documentation: Write clean, maintainable, and well-documented code; participate in code reviews and unit testing.
* Collaboration: Work closely with front-end developers, data scientists, and cross-functional teams to deliver end-to-end solutions.

Required Qualifications & Experience
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 4+ years of experience in backend development with Python.
* Strong hands-on experience with AWS (EKS, containerized apps).
* Proven expertise in building and optimizing data pipelines (batch + real-time).
* Knowledge of AI/Generative AI model integration.
* Experience with GraphDB (RDF/SPARQL) preferred.
* Strong problem-solving, debugging, and optimization skills.

Preferred Skills & Attributes
* Familiarity with microservices architecture and CI/CD pipelines.
* Excellent communication and collaboration skills.
* Ability to thrive in a fast-paced, innovative environment.
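The responsibilities above distinguish batch from real-time processing. One idiomatic Python way to express the streaming half is a generator pipeline: records flow through transformations one at a time instead of being materialized as a full batch. The line format and field names below are invented for the sketch; both styles apply the same filter logic.

```python
# Batch vs streaming with the same transformation logic. Generators give
# constant-memory, record-at-a-time processing; the list comprehension
# materializes everything first.

def parse(lines):
    """Turn 'timestamp,value' lines into records, lazily."""
    for line in lines:
        ts, value = line.split(",")
        yield {"ts": ts, "value": float(value)}

def threshold(records, limit):
    """Pass through only records whose value exceeds limit."""
    for rec in records:
        if rec["value"] > limit:
            yield rec

lines = ["t1,0.4", "t2,2.5", "t3,9.1"]  # invented sample input

# Batch style: materialize, then filter.
batch_result = [r for r in parse(lines) if r["value"] > 1.0]

# Streaming style: identical logic, evaluated lazily record by record.
stream_result = list(threshold(parse(iter(lines)), 1.0))
```

In a real service the `lines` iterable would be a Kafka consumer or file stream; the generator stages are unchanged, which is the practical payoff of writing pipelines this way.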
Posted 5 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderabad
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do

Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

- Design, develop, and optimize data pipelines/workflows using Databricks (Spark, Delta Lake) for ingestion, transformation, and processing of large-scale data. Knowledge of Medallion Architecture will be an added advantage.
- Build and manage graph database solutions (e.g., Neo4j, Stardog, Amazon Neptune) to support knowledge graphs, relationship modeling, and inference use cases.
- Leverage SPARQL, Cypher, or Gremlin to query and analyze data within graph ecosystems.
- Implement and maintain data ontologies to support semantic interoperability and consistent data classification.
- Collaborate with architects to integrate ontology models with metadata repositories and business glossaries.
- Support data governance and metadata management through integration of lineage, quality rules, and ontology mapping.
- Contribute to data cataloging and knowledge graph implementations using RDF, OWL, or similar technologies.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Apply data engineering best practices including CI/CD, version control, and code modularity.
- Participate in sprint planning meetings and provide estimations on technical implementation.

Basic Qualifications: Master's/Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Must-Have Skills:
- Bachelor's or master's degree in computer science, Data Science, or a related field.
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, SparkSQL); Python for workflow orchestration; performance tuning of big data processing.
- Proficiency in data analysis tools (e.g., SQL); proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores.
- Strong programming skills in Python, PySpark, and SQL.
- Solid experience designing and querying graph databases (e.g., AllegroGraph, MarkLogic).
- Proficiency with ontology languages and tools (e.g., TopBraid, RDF, OWL, Protégé, SHACL).
- Familiarity with SPARQL and/or Cypher for querying semantic and property graphs.
- Experience working with cloud data services (Azure, AWS, or GCP).
- Strong understanding of data modeling, entity relationships, and semantic interoperability.
Preferred Qualifications:
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing.
- Knowledge of Python/R, Databricks, and cloud data platforms.
- Strong understanding of data governance frameworks, tools, and best practices.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).
- Graph-DB-related certifications.

Professional Certifications: AWS Certified Data Engineer preferred; Databricks certification preferred.

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
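This posting highlights relationship modeling and inference in graph databases: "which entities are reachable from X via a given edge type?" is the bread-and-butter query a Cypher or Gremlin traversal answers. As a stand-in sketch only, the breadth-first traversal below runs over a plain adjacency map; the drug/gene/disease entities and relation names are invented.

```python
# Minimal relationship traversal over an adjacency map -- a toy analogue
# of a variable-length path query in Neo4j (Cypher) or Neptune (Gremlin).

from collections import deque

edges = {  # entity -> list of (relation, target); invented sample graph
    "drug:A": [("targets", "gene:X")],
    "gene:X": [("interacts_with", "gene:Y")],
    "gene:Y": [("associated_with", "disease:D")],
}

def reachable(start, relation_filter=None):
    """Entities reachable from start, optionally restricted to one
    relation type, in breadth-first discovery order."""
    seen, queue, found = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for rel, target in edges.get(node, []):
            if relation_filter is not None and rel != relation_filter:
                continue
            if target not in seen:
                seen.add(target)
                found.append(target)
                queue.append(target)
    return found

hits = reachable("drug:A")
```

A graph database keeps exactly this kind of traversal fast at scale by indexing adjacency directly, instead of recomputing joins the way a relational store would.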
Posted 5 days ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are seeking a Snowflake Analytics Support Engineer (AnzoGraph, Matillion) for our esteemed client in Noida. This is a full-time role requiring 8 to 12 years of experience.

As a Data & Analytics Support Engineer, you will play a crucial role in supporting and optimizing data pipelines, graph analytics, and cloud-based data platforms. Your responsibilities will include providing technical support for AnzoGraph-based solutions, collaborating with data engineering teams on designing and troubleshooting data pipelines, monitoring and optimizing the performance of graph queries, ensuring seamless integration of data platforms, and supporting data workflows across AWS services. Additionally, you will be involved in incident management, documentation, and knowledge sharing.

To excel in this role, you must possess proven experience with AnzoGraph DB or similar graph database technologies, proficiency in Snowflake data warehousing and Matillion ETL, hands-on experience with AWS services, and proficiency in SQL, SPARQL, and graph query languages. Strong problem-solving skills, communication abilities, and the capacity to work both independently and within a team are essential. Preferred qualifications include experience in data support roles within enterprise-scale environments and certifications in AWS, Snowflake, or related technologies.

If you are excited about this opportunity and possess the required skills and qualifications, please contact Swapnil Thakur at swapnil.t@ipeopleinfosystems.com to express your interest.

Thank you for considering this opportunity.

Best regards,
Swapnil Thakur
Recruitment & Delivery Lead
iPeople Infosystems LLC
Posted 6 days ago
6.0 - 8.0 years
7 - 12 Lacs
Bengaluru
Work from Office
We are looking for a Senior Python Developer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchanging ideas in discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore.

YOU'LL MAKE A DIFFERENCE BY:
- Implementing innovative products and solution development processes and tools by applying your expertise in the field of responsibility.

JOB REQUIREMENTS/SKILLS:
- International experience with global projects and collaboration with intercultural teams is preferred.
- 6-8 years of experience developing software solutions with the Python language.
- Experience in research and development processes (software-based solutions and products), in commercial topics, and in the implementation of strategies and POCs.
- Manage end-to-end development of web applications and knowledge graph projects, ensuring best practices and high code quality.
- Provide technical guidance and mentorship to junior developers, fostering their growth and development.
- Design scalable and efficient architectures for web applications, knowledge graphs, and database models.
- Carry out code standards and perform code reviews, ensuring alignment with standard methodologies like PEP8, DRY, and SOLID principles.
- Collaborate with frontend developers, DevOps teams, and database administrators to deliver cohesive solutions.
- Expert-level proficiency in Python web frameworks (Django, Flask, FastAPI) and knowledge graph libraries.
- Experience in designing and developing complex RESTful APIs and microservices architectures.
- Strong understanding of security standard processes in web applications (e.g., authentication, authorization, and data protection).
- Extensive experience in building and querying knowledge graphs using Python libraries like RDFLib, Py2neo, or similar.
- Proficiency in SPARQL for advanced graph data querying.
- Experience with graph databases like Neo4j, GraphDB, Blazegraph, or AWS Neptune.
- Experience in expert functions like software development/architecture and software testing (unit testing, integration testing).
- Excellent in DevOps practices, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Excellent in cloud technologies and architecture; should have exposure to S3, EKS, ECR, and AWS Neptune.
- Exposure to and working experience in the relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities) required.

LEADERSHIP QUALITIES
- Visionary Leadership: Ability to lead the team towards long-term technical goals while managing immediate priorities.
- Strong Communication: Good interpersonal skills to work effectively with both technical and non-technical stakeholders.
- Mentorship & Coaching: Foster a culture of continuous learning, skill development, and collaboration within the team.
- Conflict Resolution: Ability to manage team conflicts and provide constructive feedback to improve team dynamics.

This role is in Bangalore, where you'll get the chance to work with teams impacting entire cities, countries, and the craft of things to come. We're Siemens. A collection of over 312,000 minds building the future, one day at a time in over 200 countries. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us craft tomorrow. At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow's reality. Find out more about the digital world of Siemens here: www.siemens.com/careers/digitalminds (http://www.siemens.com/careers/digitalminds)
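The requirements above name RDFLib, which would normally handle triple storage and serialization. To show what the Turtle output it produces looks like, here is a hand-rolled formatter over raw triples; it is not RDFLib and implements only prefix compaction. The `ex:` namespace and the triples are invented for the example.

```python
# Hand-rolled Turtle-style serialization of a few triples, illustrating
# prefix compaction of full IRIs. RDFLib's graph.serialize(format="turtle")
# does this properly; this sketch only conveys the output shape.

PREFIXES = {"http://example.org/": "ex:"}

def compact(iri: str) -> str:
    """Replace a known namespace with its prefix, else wrap in <...>."""
    for base, prefix in PREFIXES.items():
        if iri.startswith(base):
            return prefix + iri[len(base):]
    return f"<{iri}>"

def to_turtle(triples):
    lines = [f"@prefix {p} <{b}> ." for b, p in PREFIXES.items()]
    lines += [f"{compact(s)} {compact(p)} {compact(o)} ." for s, p, o in triples]
    return "\n".join(lines)

doc = to_turtle([
    ("http://example.org/plant1", "http://example.org/locatedIn",
     "http://example.org/Bangalore"),
])
```

Prefix tables are what keep real Turtle files readable; a knowledge graph with thousands of IRIs would be unmanageable written out in full.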
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
You will be responsible for designing, developing, and maintaining scalable data pipelines using Python to ensure efficient data integration, transformation, and performance optimization. Utilizing graph technologies, you will model complex data relationships, manage graph databases, and implement SPARQL queries for data retrieval. Your role will involve implementing and managing automated data workflows through orchestration tools like Apache Airflow and Prefect to guarantee reliability and fault tolerance. Ensuring data accuracy and consistency through quality checks will be crucial, along with enforcing data governance policies for data integrity. Collaboration with data scientists, analysts, and stakeholders to deliver data solutions is essential, requiring clear communication of technical concepts to non-technical audiences. Additionally, you will leverage the Prophecy platform to design and manage data workflows, ensuring smooth integration and optimization for scalability.

Key Skills Required:
- Proficiency in graph databases and technologies such as Neo4j and SPARQL.
- Demonstrated experience in Python and data pipeline development.
- Hands-on familiarity with data orchestration tools like Apache Airflow and Prefect.
- Strong grasp of data quality, governance, and integrity best practices.
- Excellent communication and collaboration abilities.
- Experience with the Prophecy platform is advantageous.
- A commitment to continuous learning to keep pace with evolving data engineering trends and tools.
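Orchestrators like Apache Airflow and Prefect, named above, run tasks as a dependency DAG. As a toy scheduler only (neither tool's API), the sketch below executes a three-task extract/transform/load graph in topological order using `graphlib` from the Python standard library; the task bodies are invented placeholders.

```python
# Minimal DAG execution: declare tasks and their upstream dependencies,
# then run them in a valid topological order. Real orchestrators add
# scheduling, retries, and distributed workers on top of this core idea.

from graphlib import TopologicalSorter

results = {}

def extract():   results["raw"] = [3, 1, 2]            # pretend ingestion
def transform(): results["clean"] = sorted(results["raw"])
def load():      results["loaded"] = len(results["clean"])

tasks = {"extract": extract, "transform": transform, "load": load}
deps = {"transform": {"extract"}, "load": {"transform"}}  # task -> upstream

order = list(TopologicalSorter(deps).static_order())
for name in order:
    tasks[name]()
```

The dependency dict mirrors Airflow's `upstream >> downstream` wiring and Prefect's implicit data-flow edges; fault tolerance in those tools amounts to re-running failed nodes of this same graph.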
Posted 1 week ago
2.0 - 5.0 years
9 - 13 Lacs
bengaluru
Work from Office
About The Role Project Role : Data Platform Engineer Project Role Description : Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives. Key Responsibilities: - Knowledge modeling and ontology design - Knowledge of any of the domain-specific standards - Develop ontologies and taxonomies using any of the open-source knowledge graph tools, and build sophisticated knowledge graphs for structured data representation that align with domain standards - Design scalable architectures for linked data and semantic models, integrating data from multiple sources - Design metadata schemas and a common data vocabulary, leveraging RDF/OWL to enhance data accessibility - Use tools like Protégé, TopBraid Composer, or other modelling tools to define and manage ontology structures - Develop data models that support semantic search, data extraction, and AI-driven recommendations.
Technical Experience: Must Have Skills: - Minimum of 2 years in ontology development, knowledge modeling, and graph database management - Proficiency in RDF, OWL, SKOS, or SHACL - Proficiency in Protégé, TopBraid Composer, or another modelling tool - Familiarity with SPARQL or other graph query languages - Knowledge of any of the knowledge graph platforms like Neo4j, Dgraph, Stardog, TopBraid, ArangoDB, and Blazegraph - Ability to incorporate domain knowledge into semantic models for actionable business insights. Good to Have Skills: - Collaborate with AI/ML teams to implement natural language processing and contextual data retrieval using knowledge graphs - Enhance data discovery and search capabilities through graph-based search relevancy and knowledge representation - Experience with knowledge graph visualization tools like Graphistry and Gephi - Programming skills in Python or Java to implement custom graph applications and integrations - Integrate graph database solutions for efficient data querying and management - Familiarity with open-source graph databases and their applications in real-time analytics. Professional Experience: - Good communication skills for conveying complex concepts to technical and non-technical stakeholders - Ability to work both independently and within a team setting, showing leadership in best practices - Proactive, innovative, and detail-oriented, with a strong focus on emerging technologies. Educational Qualification: - Bachelor's or Master's degree in Information Science, Data Science, Knowledge Management, or a related field - Certifications in ontology management or data standards are highly valued. Qualification 15 years full time education
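The taxonomy and ontology skills above boil down to reasoning over hierarchical relations. A minimal sketch of SKOS-style transitive "broader" reasoning, using hypothetical concept names; in practice this structure would live in RDF/OWL and be maintained with Protégé or TopBraid Composer:

```python
# Tiny taxonomy sketch: transitive 'broader' reasoning of the kind a
# SKOS vocabulary or OWL class hierarchy supports.
BROADER = {
    "CentrifugalPump": "Pump",
    "Pump": "RotatingEquipment",
    "RotatingEquipment": "Asset",
}

def ancestors(concept):
    """All broader concepts, following the hierarchy transitively."""
    out = []
    while concept in BROADER:
        concept = BROADER[concept]
        out.append(concept)
    return out

def is_a(concept, candidate):
    """True if candidate is the concept itself or any broader concept."""
    return candidate == concept or candidate in ancestors(concept)

print(ancestors("CentrifugalPump"))  # ['Pump', 'RotatingEquipment', 'Asset']
print(is_a("CentrifugalPump", "Asset"))  # True
```

This subsumption check is what lets semantic search return a query for "Asset" documents that only mention "CentrifugalPump".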
Posted 1 week ago
8.0 - 13.0 years
12 - 16 Lacs
hyderabad
Work from Office
Job Description Summary The person in this role will be the technical team lead and the point of contact between the PM, Architect, and People Leader. This person will work closely with the Product Owner to break down features into detailed technical work chunks that will be implemented by the team members. This person will oversee the detailed technical designs of the individual features and will need to fully understand the Modeling ecosystem and where it fits in the GridOS context. Job Description Roles and Responsibilities: Serve as technical lead for the Modeling Development team: single point of contact on technical development aspects for the Architect, PO, Scrum Master, and Team Manager; owns onboarding and ramp-up processes for the team members; owns efficiency and quality of the development process. Responsible for the quality of the development in terms of software performance, code quality, test automation, code coverage, CI/CD, and documentation. Oversee the detailed technical designs of the individual features. Provide high-level estimates of the different features of the products. Own technical deliverables during the entire lifecycle of the products. Keep the products' development lifecycle on track in terms of budget, time, and quality. Keep track of developments happening within the GridOS ecosystem and build bridges with other engineering and services teams. Interact with Services teams, and partner integrator teams, to provide processes that ensure best use of GridOS Modeling products and services. Effectively communicate both verbally and in writing with peers and team members as an inclusive team member. Serve as a technical leader or mentor on complex, integrated implementations within the GridOS Modeling product teams. Work in a self-directed fashion to proactively identify system problems, failures, and areas for improvement. Track issue resolution, document solutions implemented, and create troubleshooting guides.
Peer review of Pull Requests. Education Qualification For roles outside USA: Bachelor's Degree in Computer Science or STEM Majors (Science, Technology, Engineering and Math) with significant experience. For roles in USA: Bachelor's Degree in Computer Science or STEM Majors (Science, Technology, Engineering and Math) Desired Characteristics Technical Expertise: Strong understanding of OOP concepts Strong experience with Kubernetes and microservices architectures Container technology Strong expertise in Java and Python, Maven, and the Spring Boot framework REST API (OpenAPI) and event design GraphQL schemas & services design Graph technologies and frameworks: Apache Jena, Neo4j, GraphDB Experience with RDF and SPARQL Unit and integration test design CI/CD pipeline design JSON & YAML schemas Event-driven architecture Data streaming technologies such as Apache Kafka Microservice observability and metrics Integration skills Autonomous and able to work asynchronously (due to time zone differences) Software & API documentation Good to have: Data engineering and data architecture expertise Apache Camel & Apache Arrow Experience in Grid or Energy software business (AEMS, ADMS, Energy Markets, SCADA, GIS) Business Acumen: Adept at navigating the organizational matrix; understanding people's roles, can foresee obstacles, identify workarounds, leverage resources and rally teammates. Understand how internal and/or external business models work and facilitate active customer engagement Able to articulate the value of what is most important to the business/customer to achieve outcomes Able to produce functional area information in sufficient detail for cross-functional teams to utilize, using presentation and storytelling concepts. Possess extensive knowledge of the full solution catalog within a business unit and proficiency in discussing each area at an advanced level. Six Sigma Green Belt Certification or equivalent quality certification.
Leadership: Demonstrated working knowledge of the internal organization Foresee obstacles, identify workarounds, leverage resources, rally teammates. Demonstrated ability to work with and/or lead blended teams, including 3rd party partners and customer personnel. Demonstrated change management and acceleration capabilities Strong interpersonal skills, including creativity and curiosity, with the ability to effectively communicate and influence across all organizational levels Proven analytical and problem resolution skills Ability to influence and build consensus with other Information Technology (IT) teams and leadership.
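The event-driven architecture items in the technical list above rest on the publish/subscribe pattern. A minimal in-process sketch (topics and payloads are illustrative; a production system like the one described would use a broker such as Apache Kafka, with partitioning and durable offsets):

```python
from collections import defaultdict

# Sketch of the publish/subscribe idea behind event-driven
# architectures: producers publish to a topic without knowing who,
# if anyone, consumes the events.
class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive events on a topic."""
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        """Deliver an event to every subscriber of the topic."""
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
seen = []
bus.subscribe("model.updated", seen.append)
bus.publish("model.updated", {"model_id": "feeder-12"})
print(seen)  # [{'model_id': 'feeder-12'}]
```

The decoupling shown here (publishers never reference subscribers) is what lets new downstream services be added without changing upstream code.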
Posted 1 week ago
2.0 - 5.0 years
9 - 13 Lacs
bengaluru
Work from Office
Role Description We are seeking a Data Science Engineer to contribute to the development of intelligent, autonomous AI systems. The ideal candidate will have a strong background in agentic AI, LLMs, SLMs, vector DBs, and knowledge graphs. This role involves deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. Your key responsibilities Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications. NER Models: Train OCR- and NLP-leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Your skills and experience 4+ years of professional experience in AI/ML development, with a focus on agentic AI systems. Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering.
Understanding of semantic technologies, ontologies, and RDF/SPARQL. Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce.
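The retrieval step of the RAG pipelines described above can be sketched in a few lines. The 3-dimensional vectors here are toy stand-ins for real embeddings, and the document names are hypothetical; a production system would store millions of high-dimensional vectors in Milvus or FAISS and combine this dense ranking with keyword search for hybrid retrieval:

```python
import math

# Toy retrieval step of a RAG pipeline: rank documents by cosine
# similarity of their embeddings to a query embedding, then pass the
# top hits to the LLM as context.
DOCS = {
    "doc_refunds": [0.9, 0.1, 0.0],
    "doc_shipping": [0.1, 0.9, 0.1],
    "doc_returns": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(top_k([1.0, 0.0, 0.0]))  # ['doc_refunds', 'doc_returns']
```

A refund-like query vector retrieves the refund and returns documents rather than the shipping one, which is exactly the grounding behaviour RAG relies on.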
Posted 1 week ago
7.0 - 12.0 years
32 - 37 Lacs
bengaluru
Work from Office
Role Description We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector DBs, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications. Your key responsibilities Design & Develop Agentic AI Applications: Utilise frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG Pipelines: Integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-Tune Language Models: Customise LLMs (e.g., Gemini, ChatGPT, Llama) and SLMs (e.g., spaCy, NLTK) using domain-specific data to improve performance and relevance in specialised applications. NER Models: Train OCR- and NLP-leveraged models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). Develop Knowledge Graphs: Construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. Collaborate Cross-Functionally: Work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimise AI Workflows: Employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring. Your skills and experience 13+ years of professional experience in AI/ML development, with a focus on agentic AI systems. Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs (spaCy), and prompt engineering.
Understanding of semantic technologies, ontologies, and RDF/SPARQL. Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Familiarity with RAG architectures and hybrid search methodologies. Experience in developing AI solutions for specific industries such as healthcare, finance, or e-commerce. Strong problem-solving abilities and analytical thinking. Excellent communication skills for cross-functional collaboration. Ability to work independently and manage multiple projects simultaneously.
Posted 1 week ago
4.0 - 5.0 years
5 - 6 Lacs
gurgaon
On-site
About Our Team Our global team supports education products: electronic health records that introduce students to digital charting and prepare them to document care in today's modern clinical environment. We have a very stable product that we have worked hard to reach and strive to maintain. Our team values trust, respect, collaboration, agility, and quality. The Consumption Domain is a newly established domain, offering an exciting opportunity to play a crucial role in structuring and shaping its foundation. Our team is responsible for ensuring seamless data processing, validation, and operational efficiency, while continuously improving workflow optimization and incident management. We work closely with various stakeholders to drive accuracy, speed, and reliability in delivering high-quality data. With a problem-solving mindset and a data-driven approach, we aim to build scalable solutions that enhance business processes and improve overall user experience. About the Role Elsevier is looking for a Senior Analyst to join the Consumption Domain team, where you will play a crucial role in analyzing and interpreting user engagement and content consumption trends. The ideal candidate will possess strong technical expertise in data analytics, Databricks, ETL processes, and cloud storage, coupled with a passion for using data to drive meaningful business decisions. Responsibilities Analyze and interpret large datasets to provide actionable business insights. Leverage Databricks, working with RDDs, DataFrames, and Datasets to optimize data workflows. Design and implement ETL processes, job automation, and data optimization strategies. Work with structured, unstructured, and semi-structured data types, including JSON, XML, and RDF. Manage various file formats (Parquet, Delta files, ZIP files) and handle data storage within DBFS, FileStore, and cloud storage solutions (AWS S3, Google Cloud Storage, etc.).
Write efficient SQL, Python, or Scala scripts to extract and manipulate data. Develop insightful dashboards using Tableau or Power BI to visualize key trends and performance metrics. Collaborate with cross-functional teams to drive data-backed decision-making. Maintain best practices in data governance, utilizing platforms such as Snowflake and Collibra. Participate in Agile development methodologies, using tools like Jira, Confluence. Ensure proper version control using GitHub. Requirements Bachelor’s degree in Computer Science, Data Science, or Statistics. Minimum of 4-5 years of experience in data analytics, preferably in a publishing environment. Proven expertise in Databricks, including knowledge of RDDs, DataFrames, and Datasets. Strong understanding of ETL processes, data optimization, job automation, and Delta Lake. Proficiency in handling structured, unstructured, and semi-structured data. Experience with various file types, including Parquet, Delta, and ZIP files. Familiarity with DBFS, FileStore, and cloud storage solutions such as AWS S3, Google Cloud Storage. Strong programming skills in SQL, Python, or Scala. Experience creating dashboards using Tableau or Power BI is a plus. Knowledge of Snowflake, Collibra, JSON-LD, SHACL, SPARQL is an advantage. Familiarity with Agile development methodologies, including Jira and Confluence. Experience with GitLab for version control is beneficial. Skills and Competencies Ability to handle large datasets efficiently. Strong analytical and problem-solving skills. Passion for data-driven decision-making and solving business challenges. Eagerness to learn new technologies and continuously improve processes. Effective communication and data storytelling abilities. Experience collaborating with cross-functional teams. Project management experience in software systems is a plus. Work in a way that works for you We promote a healthy work/life balance across the organization. 
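The data-quality responsibilities listed above usually take the form of a validation gate applied before records are loaded downstream. A minimal sketch with hypothetical field names; a Databricks pipeline would express the same checks as DataFrame filters or Delta Lake constraints:

```python
# Minimal data-quality gate of the kind an ETL pipeline might apply
# before loading records downstream.
REQUIRED = {"user_id", "event", "timestamp"}

def validate(record):
    """Return a list of quality issues; an empty list means the record passes."""
    issues = []
    missing = REQUIRED - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if "timestamp" in record and not str(record["timestamp"]).isdigit():
        issues.append("timestamp is not numeric")
    return issues

good = {"user_id": "u1", "event": "view", "timestamp": "1700000000"}
bad = {"user_id": "u2", "event": "view", "timestamp": "yesterday"}
print(validate(good))  # []
print(validate(bad))   # ['timestamp is not numeric']
```

Returning a list of issues rather than a boolean lets the pipeline quarantine failing records with a reason attached, which supports the incident-management and governance goals the role describes.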
We offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working for You We understand that your well-being and happiness are essential to a successful career. Here are some benefits we offer: Comprehensive Health Insurance: Covers you, your immediate family, and parents. Enhanced Health Insurance Options: Competitive rates negotiated by the company. Group Life Insurance: Ensuring financial security for your loved ones. Group Accident Insurance: Extra protection for accidental death and permanent disablement. Flexible Working Arrangement: Achieve a harmonious work-life balance. Employee Assistance Program: Access support for personal and work-related challenges. Medical Screening: Your well-being is a top priority. Modern Family Benefits: Maternity, paternity, and adoption support. Long-Service Awards: Recognizing dedication and commitment. New Baby Gift: Celebrating the joy of parenthood. Subsidized Meals in Chennai: Enjoy delicious meals at discounted rates. Various Paid Time Off: Take time off with Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. Free Transport: pick-up and drop between home and office (applies in Chennai). About the Business A global leader in information and analytics, we help researchers and healthcare professionals advance science and improve health outcomes for the benefit of society. Building on our publishing heritage, we combine quality information and vast data sets with analytics to support visionary science and research, health education and interactive learning, as well as exceptional healthcare and clinical practice. What you do every day will help advance science and healthcare to advance human progress. We are committed to providing a fair and accessible hiring process.
If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. USA Job Seekers: EEO Know Your Rights.
Posted 1 week ago
4.0 - 5.0 years
2 - 6 Lacs
gurgaon
On-site
About Our Team Our global team supports education products: electronic health records that introduce students to digital charting and prepare them to document care in today's modern clinical environment. We have a very stable product that we have worked hard to reach and strive to maintain. Our team values trust, respect, collaboration, agility, and quality. The Consumption Domain is a newly established domain, offering an exciting opportunity to play a crucial role in structuring and shaping its foundation. Our team is responsible for ensuring seamless data processing, validation, and operational efficiency, while continuously improving workflow optimization and incident management. We work closely with various stakeholders to drive accuracy, speed, and reliability in delivering high-quality data. With a problem-solving mindset and a data-driven approach, we aim to build scalable solutions that enhance business processes and improve overall user experience. About the Role Elsevier is looking for a Senior Analyst to join the Consumption Domain team, where you will play a crucial role in analyzing and interpreting user engagement and content consumption trends. The ideal candidate will possess strong technical expertise in data analytics, Databricks, ETL processes, and cloud storage, coupled with a passion for using data to drive meaningful business decisions. Responsibilities Analyze and interpret large datasets to provide actionable business insights. Leverage Databricks, working with RDDs, DataFrames, and Datasets to optimize data workflows. Design and implement ETL processes, job automation, and data optimization strategies. Work with structured, unstructured, and semi-structured data types, including JSON, XML, and RDF. Manage various file formats (Parquet, Delta files, ZIP files) and handle data storage within DBFS, FileStore, and cloud storage solutions (AWS S3, Google Cloud Storage, etc.).
Write efficient SQL, Python, or Scala scripts to extract and manipulate data. Develop insightful dashboards using Tableau or Power BI to visualize key trends and performance metrics. Collaborate with cross-functional teams to drive data-backed decision-making. Maintain best practices in data governance, utilizing platforms such as Snowflake and Collibra. Participate in Agile development methodologies, using tools like Jira, Confluence. Ensure proper version control using GitHub. Requirements Bachelor’s degree in Computer Science, Data Science, or Statistics. Minimum of 4-5 years of experience in data analytics, preferably in a publishing environment. Proven expertise in Databricks, including knowledge of RDDs, DataFrames, and Datasets. Strong understanding of ETL processes, data optimization, job automation, and Delta Lake. Proficiency in handling structured, unstructured, and semi-structured data. Experience with various file types, including Parquet, Delta, and ZIP files. Familiarity with DBFS, FileStore, and cloud storage solutions such as AWS S3, Google Cloud Storage. Strong programming skills in SQL, Python, or Scala. Experience creating dashboards using Tableau or Power BI is a plus. Knowledge of Snowflake, Collibra, JSON-LD, SHACL, SPARQL is an advantage. Familiarity with Agile development methodologies, including Jira and Confluence. Experience with GitLab for version control is beneficial. Skills and Competencies Ability to handle large datasets efficiently. Strong analytical and problem-solving skills. Passion for data-driven decision-making and solving business challenges. Eagerness to learn new technologies and continuously improve processes. Effective communication and data storytelling abilities. Experience collaborating with cross-functional teams. Project management experience in software systems is a plus. Work in a way that works for you We promote a healthy work/life balance across the organization. 
We offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working for You We understand that your well-being and happiness are essential to a successful career. Here are some benefits we offer: Comprehensive Health Insurance: Covers you, your immediate family, and parents. Enhanced Health Insurance Options: Competitive rates negotiated by the company. Group Life Insurance: Ensuring financial security for your loved ones. Group Accident Insurance: Extra protection for accidental death and permanent disablement. Flexible Working Arrangement: Achieve a harmonious work-life balance. Employee Assistance Program: Access support for personal and work-related challenges. Medical Screening: Your well-being is a top priority. Modern Family Benefits: Maternity, paternity, and adoption support. Long-Service Awards: Recognizing dedication and commitment. New Baby Gift: Celebrating the joy of parenthood. Subsidized Meals in Chennai: Enjoy delicious meals at discounted rates. Various Paid Time Off: Take time off with Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. Free Transport: pick-up and drop between home and office (applies in Chennai). About the Business A global leader in information and analytics, we help researchers and healthcare professionals advance science and improve health outcomes for the benefit of society. Building on our publishing heritage, we combine quality information and vast data sets with analytics to support visionary science and research, health education and interactive learning, as well as exceptional healthcare and clinical practice. What you do every day will help advance science and healthcare to advance human progress. We are committed to providing a fair and accessible hiring process.
If you have a disability or other need that requires accommodation or adjustment, please let us know by completing our Applicant Request Support Form or contact 1-855-833-5120. Criminals may pose as recruiters asking for money or personal information. We never request money or banking details from job applicants. Learn more about spotting and avoiding scams here. Please read our Candidate Privacy Policy. We are an equal opportunity employer: qualified applicants are considered for and treated during employment without regard to race, color, creed, religion, sex, national origin, citizenship status, disability status, protected veteran status, age, marital status, sexual orientation, gender identity, genetic information, or any other characteristic protected by law. USA Job Seekers: EEO Know Your Rights.
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
gurugram, haryana, india
On-site
About Our Team Our global team supports education products: electronic health records that introduce students to digital charting and prepare them to document care in today's modern clinical environment. We have a very stable product that we have worked hard to reach and strive to maintain. Our team values trust, respect, collaboration, agility, and quality. The Consumption Domain is a newly established domain, offering an exciting opportunity to play a crucial role in structuring and shaping its foundation. Our team is responsible for ensuring seamless data processing, validation, and operational efficiency, while continuously improving workflow optimization and incident management. We work closely with various stakeholders to drive accuracy, speed, and reliability in delivering high-quality data. With a problem-solving mindset and a data-driven approach, we aim to build scalable solutions that enhance business processes and improve overall user experience. About The Role Elsevier is looking for a Senior Analyst to join the Consumption Domain team, where you will play a crucial role in analyzing and interpreting user engagement and content consumption trends. The ideal candidate will possess strong technical expertise in data analytics, Databricks, ETL processes, and cloud storage, coupled with a passion for using data to drive meaningful business decisions. Responsibilities Analyze and interpret large datasets to provide actionable business insights. Leverage Databricks, working with RDDs, DataFrames, and Datasets to optimize data workflows. Design and implement ETL processes, job automation, and data optimization strategies. Work with structured, unstructured, and semi-structured data types, including JSON, XML, and RDF. Manage various file formats (Parquet, Delta files, ZIP files) and handle data storage within DBFS, FileStore, and cloud storage solutions (AWS S3, Google Cloud Storage, etc.).
Write efficient SQL, Python, or Scala scripts to extract and manipulate data. Develop insightful dashboards using Tableau or Power BI to visualize key trends and performance metrics. Collaborate with cross-functional teams to drive data-backed decision-making. Maintain best practices in data governance, utilizing platforms such as Snowflake and Collibra. Participate in Agile development methodologies, using tools like Jira, Confluence. Ensure proper version control using GitHub. Requirements Bachelor’s degree in Computer Science, Data Science, or Statistics. Minimum of 4-5 years of experience in data analytics, preferably in a publishing environment. Proven expertise in Databricks, including knowledge of RDDs, DataFrames, and Datasets. Strong understanding of ETL processes, data optimization, job automation, and Delta Lake. Proficiency in handling structured, unstructured, and semi-structured data. Experience with various file types, including Parquet, Delta, and ZIP files. Familiarity with DBFS, FileStore, and cloud storage solutions such as AWS S3, Google Cloud Storage. Strong programming skills in SQL, Python, or Scala. Experience creating dashboards using Tableau or Power BI is a plus. Knowledge of Snowflake, Collibra, JSON-LD, SHACL, SPARQL is an advantage. Familiarity with Agile development methodologies, including Jira and Confluence. Experience with GitLab for version control is beneficial. Skills And Competencies Ability to handle large datasets efficiently. Strong analytical and problem-solving skills. Passion for data-driven decision-making and solving business challenges. Eagerness to learn new technologies and continuously improve processes. Effective communication and data storytelling abilities. Experience collaborating with cross-functional teams. Project management experience in software systems is a plus. Work in a way that works for you We promote a healthy work/life balance across the organization. 
We offer an appealing working prospect for our people. With numerous wellbeing initiatives, shared parental leave, study assistance, and sabbaticals, we will help you meet your immediate responsibilities and your long-term goals. Working for You We understand that your well-being and happiness are essential to a successful career. Here are some benefits we offer: Comprehensive Health Insurance: Covers you, your immediate family, and parents. Enhanced Health Insurance Options: Competitive rates negotiated by the company. Group Life Insurance: Ensuring financial security for your loved ones. Group Accident Insurance: Extra protection for accidental death and permanent disablement. Flexible Working Arrangement: Achieve a harmonious work-life balance. Employee Assistance Program: Access support for personal and work-related challenges. Medical Screening: Your well-being is a top priority. Modern Family Benefits: Maternity, paternity, and adoption support. Long-Service Awards: Recognizing dedication and commitment. New Baby Gift: Celebrating the joy of parenthood. Subsidized Meals in Chennai: Enjoy delicious meals at discounted rates. Various Paid Time Off: Take time off with Casual Leave, Sick Leave, Privilege Leave, Compassionate Leave, Special Sick Leave, and Gazetted Public Holidays. Free Transport: pick-up and drop between home and office (applies in Chennai). About The Business A global leader in information and analytics, we help researchers and healthcare professionals advance science and improve health outcomes for the benefit of society. Building on our publishing heritage, we combine quality information and vast data sets with analytics to support visionary science and research, health education and interactive learning, as well as exceptional healthcare and clinical practice. What you do every day will help advance science and healthcare to advance human progress.
Posted 1 week ago
3.0 years
0 Lacs
pune, maharashtra, india
Remote
Requisition ID: 396629 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Career Level: T3 | Hiring Manager: Babu Tammisetti | Recruiter Name: Devaraj Malkhedkar | Additional Locations:

The SAP HANA Database and Analytics Core engine team is looking for an intermediate or senior developer to contribute to our Knowledge Graph Database System engine development. In this role, you will design and develop features for, and maintain, our Knowledge Graph engine, which runs inside the SAP HANA in-memory database. At SAP, all members of the engineering team, including management, are hands-on and close to the code. If you think you can thrive in such an environment and have the necessary skills and experience, please do not hesitate to apply.

What You'll Do
As a developer, you will have the opportunity to:
- Contribute hands-on coding, design, and architecture best suited to our team size and performance targets.
- Collaborate in a team environment that extends to colleagues in remote locations and various lines of business within the company.
- Communicate with and guide other teams to construct the best possible queries for their needs.
- Assess new technologies, tools, and infrastructure to keep up with the rapid pace of change.
- Embrace lean and agile software development principles.
- Debug, troubleshoot, and communicate with customers about issues with their data models and queries.
- Continually enhance existing skills and seek new areas for personal development.

What You Bring
- Bachelor's degree or equivalent university education in computer science or engineering, with 3-5 years of experience developing enterprise-class software.
- Experience in development with modern C++.
- Knowledge of database internals such as the query optimizer/planner, query executor, system management, transaction management, and/or persistence.
- Knowledge of SQL and graph technologies such as RDF/SPARQL.
- Knowledge of the full SDLC and development of tests using Python or other tools.
- Experience designing and developing well-encapsulated, object-oriented code.
- Solution-oriented and open-minded.
- Able to manage collaboration with sister teams and partner resources in remote locations.
- High service and customer orientation.
- Skilled in process optimization, with a drive for lasting change.
- Strong analytical thinking and problem-solving skills.
- Interpersonal skills: team player, proactive networking, results- and execution-oriented, motivated to work in an international and intercultural environment.
- Excellent oral and written communication and presentation skills.

Meet Your Team
The team is responsible for developing HANA Knowledge Graph, a high-performance graph analytics database system made available to SAP customers, partners, and various internal groups as part of the HANA Multi Model Database System. It is specifically designed for processing large-scale graph data and executing complex graph queries with high efficiency. HANA Knowledge Graph enables organizations to gain insights from their graph datasets, discover patterns, perform advanced graph analytics, and unlock the value of interconnected data. It uses a massively parallel processing (MPP) architecture to leverage the power of distributed computing, and it is built on the W3C web standards for graph data and query language, RDF and SPARQL. The components of the HANA Knowledge Graph system include storage, data load, query parsing, query planning and optimization, query execution, transaction management, memory management, network communications, system management, data persistence, backup and restore, and performance tuning. At SAP, HANA Knowledge Graph is set to play a critical role in the development of several AI products.
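The query-execution work described above ultimately comes down to matching basic graph patterns against stored triples, which is the core of SPARQL evaluation. A pure-Python toy with made-up data can illustrate the idea; this is not HANA Knowledge Graph internals, just a sketch of the technique.

```python
# Toy basic-graph-pattern matcher over plain tuples (illustrative only;
# a real engine plans, optimizes, and parallelizes this matching).
triples = {
    ("alice", "worksAt", "sap"),
    ("bob", "worksAt", "sap"),
    ("alice", "knows", "bob"),
}

def match(pattern, bindings=None):
    """Yield variable bindings for one triple pattern; '?x' marks a variable."""
    bindings = bindings or {}
    for triple in triples:
        new = dict(bindings)
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if new.setdefault(term, value) != value:
                    ok = False  # variable already bound to a different value
                    break
            elif term != value:
                ok = False  # constant term does not match
                break
        if ok:
            yield new

# Conjunctive query (a two-pattern BGP): who does alice know that works at sap?
results = [
    b2
    for b1 in match(("alice", "knows", "?p"))
    for b2 in match(("?p", "worksAt", "sap"), b1)
]
print(results)  # [{'?p': 'bob'}]
```

Joining patterns by threading bindings through, as the nested loop does here, is exactly what a query planner reorders and optimizes at scale.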
Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion
SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone, regardless of background, feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities.
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 396629 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
Posted 1 week ago
0 years
2 - 4 Lacs
gurgaon
On-site
About the Role:
Grade Level (for internal use): 11

The Team: The Data Catalog - Metadata Management team, within the Enterprise Data Organization, is dedicated to enhancing the usability and accessibility of data assets. We focus on organizing and categorizing data, ensuring the accuracy of metadata, and promoting data discoverability and machine readability. Our collaborative environment values innovation and continuous learning, providing team members with opportunities to contribute to impactful data management solutions and to grow their skills in a global setting.

Responsibilities and Impact:
- Assist in the organization and categorization of data assets within the data catalog.
- Assist in the design, development, and implementation of the data catalog, defining functionalities and ensuring alignment with overall data governance goals.
- Ensure the accuracy and consistency of metadata across all data entries.
- Promote data discoverability and machine readability to enhance data usability.
- Collaborate with data scientists, business stewards, and technical data stewards to ensure requirements are met for data catalog integration.
- Collaborate with cross-functional teams to maintain and improve data catalog standards.
- Support the development and implementation of data catalog policies and procedures.
- Participate in training sessions to educate stakeholders on data catalog functionalities and best practices.

What We're Looking For:
Basic Required Qualifications:
- Bachelor's degree in Information Science, Data Management, Knowledge Engineering, or a related field.
- Strong analytical and problem-solving skills.
- Familiarity with data management principles and metadata standards.
- Excellent attention to detail and accuracy.
- Ability to work collaboratively in a team environment.

Key Soft Skills:
- Effective communication skills, both written and verbal.
- Strong organizational and time management skills.
- Proactive and eager to learn new technologies and methodologies.
- Ability to adapt to changing priorities and work in a fast-paced environment.

Additional Preferred Qualifications:
- Experience working with data catalog tools and platforms.
- Knowledge of graph databases and understanding of SQL and SPARQL.
- Familiarity with data governance and data stewardship practices.
- Understanding of machine learning concepts and their application in metadata management.
- Experience with data visualization tools and techniques.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it.
We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

-----------------------------------------------------------

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 103 - Middle Management (EEO Job Group) (inactive), 10 - Officials or Managers (EEO-2 Job Categories-United States of America), SLSGRP103.2 - Middle Management Tier II (EEO Job Group) Job ID: 319118 Posted On: 2025-09-02 Location: Hyderabad, Telangana, India
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You will be joining SAP, a company that is focused on enabling you to bring out your best and help the world run better. The company culture emphasizes collaboration and a shared passion for creating a workplace that values differences, embraces flexibility, and is aligned with purpose-driven and future-focused work. At SAP, you will experience a highly collaborative and caring team environment that prioritizes learning and development, acknowledges individual contributions, and provides a variety of benefit options for you to choose from. As a Knowledge Engineer (f/m/d) for Enterprise Knowledge Graphs at SAP, you will have the opportunity to contribute to the development of Knowledge Graphs as a source of explicit knowledge across multiple SAP domains. Your role will involve supporting the integration of Knowledge Graphs in various tasks such as Generative AI applications, designing and building large-scale Knowledge Graphs using business data, and collaborating with Knowledge and Data engineering teams and stakeholders to meet requirements. To excel in this role, you should have a Bachelors or Masters degree in computer science, artificial intelligence, physics, mathematics, or related disciplines. Professional experience in Knowledge Graphs and their application in a business context would be advantageous. Knowledge of RDF Knowledge Graph technology stack, semantic/knowledge modeling, and experience with Knowledge Graph databases are desirable. Additionally, familiarity with latest trends in Knowledge Graphs, data science knowledge, Python proficiency, and strong communication and collaboration skills are essential for this role. The AI organization at SAP is dedicated to seamlessly infusing AI into all enterprise applications, allowing customers, partners, and developers to enhance business processes and generate significant business value. 
By joining the international AI team at SAP, you will be part of an innovative environment with ample opportunities for personal development and global collaboration. SAP emphasizes inclusivity, health, well-being, and flexible working models so that every individual, regardless of background, feels included and can perform at their best. The company values diversity and unique capabilities, investing in employees to inspire confidence and help them realize their full potential. SAP is an equal opportunity workplace and an affirmative action employer committed to creating a better and more equitable world.

If you are interested in applying for employment at SAP and require accommodation or special assistance, please reach out to the Recruiting Operations Team at Careers@sap.com. SAP employees can also explore roles eligible for the SAP Employee Referral Program under the conditions outlined in the SAP Referral Policy. Background verification with an external vendor may be required for successful candidates.

Join SAP, where you can bring out your best and contribute to innovations that help customers worldwide work more efficiently and effectively, ensuring challenges receive the solutions they deserve.
Posted 2 weeks ago
2.0 - 6.0 years
2 - 3 Lacs
pune
On-site
Hello Visionary! We know that the only way a business thrives is if our people are growing. That's why we always put our people first. Our global, diverse team would be happy to support you and challenge you to grow in new ways. Who knows where our shared journey will take you?

We are looking for a Java Full Stack Developer.

You'll make a difference by having:
- Strong proficiency in Java and Python.
- Proven experience working with RDF graphs, writing and optimizing SPARQL queries, and developing ontologies using OWL and SHACL.
- Solid understanding and practical experience with RDF reasoning, including rule-based inference, consistency checks, and the use of OWL reasoners.
- Demonstrated experience in designing and implementing robust RESTful APIs and interfaces.
- A strong foundation in software engineering best practices, including Git version control, clean code principles, unit testing, and active participation in code reviews.
- Proficiency in data modeling, particularly with UML class diagrams, and a strong eagerness to learn and apply OWL for ontology modeling.
- Excellent abstract thinking skills, with the ability to translate complex requirements into effective data models and semantic solutions.
- The ability to acquire and apply domain expertise, particularly in modeling templates for systems and equipment.

You'll win us over by:
- Holding a BE / B.Tech / MCA / M.Tech / M.Sc degree with a good academic record.
- 2 - 6 years of demonstrable experience in Java and Python development.
- Familiarity with semantic web frameworks and libraries such as Apache Jena and rdflib.
- Hands-on experience with graph databases, specifically GraphDB.
- Knowledge of SHACL rules, performance tuning of shapes, and advanced reasoning techniques.
- Experience with Linked Data principles and formats, including JSON-LD creation and parsing.

Create a better #TomorrowWithUs! This role, based in Pune, is an individual contributor position.
You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers
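The rule-based RDF inference this role describes can be sketched as naive forward chaining over two RDFS-style entailment rules. A production system would rely on Apache Jena or an OWL reasoner; the equipment class names below are illustrative assumptions.

```python
# Hedged sketch of rule-based RDF inference. Two RDFS-style rules are
# applied by naive forward chaining until a fixpoint:
#   rdfs11: (A subClassOf B) & (B subClassOf C) => (A subClassOf C)
#   rdfs9:  (x type A)       & (A subClassOf B) => (x type B)

SUB, TYPE = "subClassOf", "type"

facts = {
    ("Pump", SUB, "Equipment"),
    ("Equipment", SUB, "Asset"),
    ("pump42", TYPE, "Pump"),
}

def forward_chain(facts):
    """Derive new triples until nothing changes (the deductive closure)."""
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(inferred):
            for (c, p2, d) in list(inferred):
                new = None
                if p1 == SUB and p2 == SUB and b == c:
                    new = (a, SUB, d)      # rdfs11: subclass transitivity
                elif p1 == TYPE and p2 == SUB and b == c:
                    new = (a, TYPE, d)     # rdfs9: type inheritance
                if new and new not in inferred:
                    inferred.add(new)
                    changed = True
    return inferred

closure = forward_chain(facts)
print(("pump42", TYPE, "Asset") in closure)  # True: derived, not asserted
```

A consistency check in the same spirit would scan the closure for contradictions, e.g. an individual typed as two classes declared disjoint.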
Posted 2 weeks ago
0.0 - 4.0 years
0 Lacs
hyderabad, telangana
On-site
The job involves:
- Designing architectures for meta-learning, self-reflective agents, and recursive optimization loops.
- Building simulation frameworks grounded in Bayesian dynamics, attractor theory, and teleo-dynamics.
- Developing systems that integrate graph rewriting, knowledge representation, and neurosymbolic reasoning.
- Researching fractal intelligence structures, swarm-based agent coordination, and autopoietic systems.
- Advancing Mobius's knowledge graph with ontologies supporting logic, agency, and emergent semantics.
- Integrating logic into distributed decision graphs aligned with business and ethical constraints.
- Publishing cutting-edge results and mentoring contributors in reflective system design and emergent AI theory.
- Building scalable simulations of multi-agent ecosystems within the Mobius runtime.

You should have a Ph.D. or M.Tech in Artificial Intelligence, Cognitive Science, Complex Systems, Applied Mathematics, or equivalent experience; proven expertise in meta-learning, recursive architectures, and AI safety; strong knowledge of distributed systems, multi-agent environments, and decentralized coordination; proficiency in formal and theoretical foundations such as Bayesian modeling, graph theory, and logical inference; and strong implementation skills in Python (additional proficiency in C++ or in functional and symbolic languages is a plus). A publication record in areas intersecting AI research, complexity science, and/or emergent systems is required.

Preferred qualifications include experience with neurosymbolic architectures and hybrid AI systems; fractal modeling, attractor theory, and complex adaptive dynamics; topos theory, category theory, and logic-based semantics; knowledge ontologies, OWL/RDF, and semantic reasoners; autopoiesis, teleo-dynamics, and biologically inspired system design; swarm intelligence, self-organizing behavior, and emergent coordination; and distributed learning systems such as Ray, Spark, MPI, or agent-based simulators.
Technical proficiency is required in Python, with C++, Haskell, Lisp, or Prolog preferred for symbolic reasoning. Familiarity is expected with frameworks like PyTorch and TensorFlow; distributed systems like Ray, Apache Spark, Dask, and Kubernetes; knowledge technologies including Neo4j, RDF, OWL, and SPARQL; experiment management tools such as MLflow and Weights & Biases; GPU and HPC systems like CUDA, NCCL, and Slurm; and formal modeling tools like Z3, TLA+, Coq, and Isabelle. Core research domains include recursive self-improvement and introspective AI; graph theory, graph rewriting, and knowledge graphs; neurosymbolic systems and ontological reasoning; fractal intelligence and dynamic attractor-based learning; Bayesian reasoning and cognitive dynamics; swarm intelligence and decentralized consensus modeling; topos theory and autopoietic system architectures; and teleo-dynamics and goal-driven adaptation in complex systems.
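A toy illustration of the graph rewriting named among the research domains: a rewrite system repeatedly matches a subgraph pattern and replaces it until a normal form is reached. Here the single rule contracts a pass-through node. This is a minimal sketch of the general technique, not Mobius's actual machinery.

```python
# Toy graph rewriting on a directed graph stored as a set of edges.
# Rule: if a -> b -> c and b has exactly one in-edge and one out-edge,
# contract the pass-through node b into a direct edge a -> c.

def contract_passthrough(edges):
    """Apply the contraction rule once; return (new_edges, changed)."""
    for (a, b) in edges:
        ins = [e for e in edges if e[1] == b]
        outs = [e for e in edges if e[0] == b]
        if len(ins) == 1 and len(outs) == 1:
            c = outs[0][1]
            if a != b and b != c:  # avoid self-loop matches
                return (edges - {(a, b), (b, c)}) | {(a, c)}, True
    return edges, False

def normalize(edges):
    """Rewrite to a normal form: apply the rule until no match remains."""
    changed = True
    while changed:
        edges, changed = contract_passthrough(edges)
    return edges

g = {("a", "b"), ("b", "c"), ("c", "d")}
print(normalize(g))  # the whole chain collapses to one edge
```

Real graph-rewriting systems generalize this pattern: rules carry labeled left- and right-hand-side graphs, and confluence and termination of the rule set are analyzed formally.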
Posted 2 weeks ago
16.0 - 20.0 years
0 Lacs
karnataka
On-site
Are you ready to help shape the future of healthcare? Join GSK, a global biopharma company with a special purpose: to unite science, technology, and talent to get ahead of disease together. GSK aims to positively impact the health of billions of people and deliver stronger, more sustainable shareholder returns. As an organization where people can thrive, GSK is committed to preventing and treating diseases on a global scale. By joining GSK at this exciting moment, you can contribute to the mission of getting Ahead Together.

As Principal Data Engineer at GSK, you will play a crucial role in transforming the commercial manufacturing and supply chain organization. Your responsibilities will include increasing the capacity and speed of transferring new products from the R&D organization. Data and AI are essential to achieving these goals, ultimately helping to launch medicines more quickly and have a positive impact on patients.

The primary purpose of your role is to take technical accountability for the CMC Knowledge Graph. You will drive forward its design and implementation by providing technical direction and oversight to the development team. Additionally, you will collaborate with Product Management, business representatives, and other Tech & Data experts to ensure that the CMC Knowledge Graph meets business requirements. You will support the CMC Knowledge System Director, product managers, business leaders, and stakeholders in identifying opportunities where Knowledge Graph and other Data & AI capabilities can transform GSK's CMC and New Product Introduction processes, and you will provide technical leadership for other Data & AI products in the CMC/NPI portfolio. Your immediate priority will be to lead the technical work required to transition an existing proof-of-concept CMC Knowledge Graph and its associated analytics use-cases into a full-fledged, sustainable Data & AI product.
This will involve leading the technical design, development, testing, and release of the CMC Knowledge Graph and other Data & AI solutions in the CMC/NPI portfolio. To succeed in this role, you should have a proven track record of delivering complex data engineering projects in a cloud environment, preferably Azure, with 16+ years of total experience. Strong technical expertise in designing, developing, and supporting Knowledge Graphs is essential, along with proficiency in graph technologies such as RDF, OWL, SPARQL, and Cypher. Experience in leading and managing technical teams, data modeling and ontologies, data integration and transformation techniques, programming skills, and familiarity with DevOps principles and CI/CD practices are also required. An understanding of pharmaceutical industry data, domain knowledge within CMC, and knowledge of GxP compliance requirements would be a plus.

By joining GSK, you will be part of a global biopharma company dedicated to uniting science, technology, and talent to get ahead of disease together. GSK focuses on preventing and treating diseases with vaccines, specialty and general medicines, and invests in core therapeutic areas such as infectious diseases, HIV, respiratory/immunology, and oncology. If you are looking for a place where you can be inspired, encouraged, and challenged to be the best you can be, join GSK on this journey to get Ahead Together.
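As a hedged sketch of the kind of query a CMC knowledge graph answers, consider tracing which products sit downstream of a raw material. A breadth-first traversal over labeled edges stands in for a SPARQL property path or a Cypher variable-length match; all node and edge names below are invented for illustration.

```python
# Toy supply-chain knowledge graph as labeled (source, relation, target)
# edges. Names are hypothetical; a real system would query a graph
# database via SPARQL or Cypher instead of traversing in application code.
from collections import deque

edges = {
    ("api_x", "ingredient_of", "tablet_a"),
    ("tablet_a", "packaged_as", "product_a"),
    ("api_x", "ingredient_of", "capsule_b"),
    ("capsule_b", "packaged_as", "product_b"),
}

def reachable(start, edges):
    """Everything downstream of `start`, ignoring edge labels.
    Analogous to Cypher's MATCH (s)-[*]->(t) variable-length pattern."""
    out = {}
    for s, _, t in edges:
        out.setdefault(s, []).append(t)
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in out.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable("api_x", edges)))
```

The same traversal, run in reverse, supports impact analysis: given a quality issue with one material, list every affected product.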
Posted 2 weeks ago