5.0 - 10.0 years
0 Lacs
Karnataka
On-site
We are seeking a talented Knowledge Neo4j Developer to join as a key member of our data engineering team. Your primary responsibility will be the design, implementation, and optimization of graph databases aimed at efficiently storing and retrieving high-dimensional data. You will have the opportunity to work with cutting-edge technologies in locations such as Hyderabad, Pune, Gurugram, and Bangalore. The major skills required for this role are expertise in Neo4j, Cypher, Python, and Big Data tools such as Hadoop, Hive, and Spark.

As a Knowledge Neo4j Developer, your responsibilities will include designing, building, and enhancing the client's online platform. You will leverage Neo4j to create and manage knowledge graphs, ensuring optimal performance and scalability. You will also research, propose, and implement new technology solutions following best practices and standards. The role involves developing and maintaining knowledge graphs using Neo4j, integrating graph databases with existing infrastructure, and providing support for query optimization and data modeling.

To excel in this position, you should have 5-10 years of experience in data engineering, proficiency in query languages like Cypher or Gremlin, and a strong foundation in graph theory. Experience with Big Data tools is essential, along with excellent written and verbal communication skills, superior analytical and problem-solving abilities, and comfort working in dual-shore engagement setups.

If you are interested in this opportunity, please share your updated resume with us at francis@lorventech.com.
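To give a concrete flavor of the knowledge-graph work described above, here is a minimal sketch using the official neo4j Python driver; the connection details, labels, and property names are illustrative assumptions, not part of the posting.

```python
from neo4j import GraphDatabase

# Illustrative connection details; replace with your deployment's URI and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_employment(tx, person, company):
    # MERGE is idempotent: nodes and the relationship are created only if missing.
    tx.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (c:Company {name: $company}) "
        "MERGE (p)-[:WORKS_AT]->(c)",
        person=person, company=company,
    )

def colleagues_of(tx, person):
    # A basic traversal: people who share an employer with the given person.
    result = tx.run(
        "MATCH (:Person {name: $person})-[:WORKS_AT]->()<-[:WORKS_AT]-(other:Person) "
        "RETURN DISTINCT other.name AS name",
        person=person,
    )
    return [record["name"] for record in result]

with driver.session() as session:
    session.execute_write(add_employment, "Asha", "ExampleCorp")
    session.execute_write(add_employment, "Ravi", "ExampleCorp")
    print(session.execute_read(colleagues_of, "Asha"))  # ['Ravi']

driver.close()
```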
Posted 20 hours ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
Spherex is seeking an Artificial Intelligence (AI) / Machine Learning (ML) Engineer to contribute to the development, enhancement, and expansion of our product platform for the Media and Entertainment sector. As the AI/ML Engineer, your duties will involve the creation of machine learning models and the retraining of systems. The position is based in Navi Mumbai, India.

The ideal candidate should hold a degree in computer science or software development. Proficiency in .NET, Azure, project management, and team and client management is essential. Additionally, familiarity with Python, TensorFlow, PyTorch, MySQL, Artificial Intelligence, and Machine Learning is desired.

Key requirements for this role include expertise in Python with OOP concepts and a solid foundation in Natural Language Understanding, Machine Learning, and Artificial Intelligence. Knowledge of ML/DL libraries such as NumPy, pandas, TensorFlow, PyTorch, Keras, scikit-learn, Jupyter, and spaCy/NLTK is crucial. Hands-on experience with MySQL and NoSQL databases, along with proficiency in scraping tools like BeautifulSoup and Scrapy, is also required.

The successful candidate should have experience with web development frameworks like Django and Flask, including building RESTful APIs with Django. Familiarity with end-to-end data science pipelines, strong unit testing and debugging abilities, and applied statistical skills are necessary. Proficiency in Git, Linux, and ML architectures and approaches including object detection, semantic segmentation, classification, regression, RNNs, and data fusion is expected. Knowledge of OpenCV, OCR, YOLO, Docker, Kubernetes, and ETL tools such as Pentaho is considered a plus.

Candidates must possess at least 4 years of experience in advanced AI/ML projects within commercial environments. Experience in applying AI/ML to video and audio content analysis is advantageous. Education-wise, a college degree in computer science or software development is required, along with excellent documentation skills and effective communication in both technical and non-technical contexts.
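As a rough illustration of the "end-to-end data science pipeline" skills this posting lists, a minimal text-classification pipeline in scikit-learn could look like the following; the corpus and labels are invented for the example.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy corpus standing in for real media-metadata text; labels are invented.
texts = [
    "great family comedy",
    "graphic violence throughout",
    "lighthearted musical numbers",
    "intense war scenes",
]
labels = ["all-ages", "mature", "all-ages", "mature"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels)

# TF-IDF features feeding a linear classifier, chained as one estimator
# so the same object can be fit, evaluated, pickled, and retrained.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```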
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Data and Solution Architect at our company, you will play a crucial role in requirements definition and analysis and in designing logical and physical data models across styles such as dimensional, NoSQL, and graph data models. You will lead data discovery discussions with the business in Joint Application Design (JAD) sessions and translate business requirements into logical and physical data modeling solutions. It will be your responsibility to conduct data model reviews with project team members and capture technical metadata using data modeling tools.

Your expertise will be essential in ensuring that database designs efficiently support Business Intelligence (BI) and end-user requirements. You will collaborate closely with ETL/data engineering teams to create data process pipelines for data ingestion and transformation. Additionally, you will work with data architects on data model management, documentation, and version control. Staying current with industry trends and standards will be crucial in driving continual improvement and enhancement of existing systems.

To excel in this role, you must possess strong data analysis and data profiling skills. Experience in conceptual, logical, and physical data modeling for Very Large Database (VLDB) data warehouses and graph databases is highly valuable. Hands-on experience with modeling tools like ERWIN or other industry-standard tools is required, as is proficiency in both normalized and dimensional modeling disciplines and techniques. A minimum of three years' experience with Oracle Database, along with hands-on experience in Oracle SQL, PL/SQL, or Cypher, is expected. Exposure to tools such as Databricks Spark, Delta technologies, Informatica ETL, and other industry-leading tools will be beneficial. Good knowledge of or experience with AWS Redshift and graph database design and management is desired, and working knowledge of AWS Cloud services, particularly VPC, EC2, S3, DMS, and Glue, will be advantageous.

You should hold a Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or have equivalent experience). Excellent verbal and written communication skills are necessary, including the ability to describe complex technical concepts in relatable terms. The ability to manage and prioritize multiple workstreams confidently and make decisions about prioritization is crucial. A data-driven mentality, self-motivation, responsibility, conscientiousness, and attention to detail are highly valued.

In terms of education and experience, a Bachelor's degree in Computer Science, Engineering, or a relevant field is required, along with 3+ years of experience as a Data and Solution Architect supporting enterprise data and integration applications, or in a similar role for large-scale enterprise solutions. You should have at least three years of experience in big data infrastructure and tuning within a lakehouse data ecosystem, including data lakes, data warehouses, and graph databases. AWS Solutions Architect Professional certification is advantageous. Extensive experience in data analysis on critical enterprise systems like SAP, E1, mainframe ERP, SFDC, Adobe Platform, and eCommerce systems is preferred.

If you thrive in a dynamic environment and enjoy collaborating with enthusiastic individuals, this role is perfect for you. Join our team and be a part of our journey toward innovation and excellence!
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As a Senior Backend Development Engineer in the DevX (Developer Experience) group at Cisco, you will play a crucial role in overseeing the development, optimization, and maintenance of a highly scalable Resource Services application built on a microservices architecture. In this role, you will have a direct impact on creating intent-based lab test beds on demand. Your responsibilities will include hands-on coding, technical leadership through mentoring junior engineers, and applying your expertise to solve challenging programming and design problems. You will drive the design and implementation of reliable, scalable backend software solutions that address critical customer issues in collaboration with various other services. Ensuring high-quality backend code, conducting code reviews, and writing unit and integration tests will be integral parts of your role. Additionally, you will own the design and architecture of product components, upholding high standards and best practices for architecture, design, coding, and CI/CD processes.

To excel in this role, you should hold a BS or MS in Computer Science or a relevant field with a minimum of 10 years of experience, including 5+ years in cloud-native, microservices-based architectures and distributed systems development. Proficiency in writing production-quality code and test cases in Golang and Python is essential. Your expertise in distributed systems, including an understanding of challenges related to message handling and memory management, will be highly valuable. Autonomy, feature ownership, and the ability to work collaboratively with the team to drive features to completion are key aspects of this role. Practical knowledge of Kubernetes, experience with brokers or pub/sub technologies, and familiarity with network services such as NSO and network monitoring are desirable. Knowledge of SQL/Cypher database queries, REST and gRPC APIs, and a "can do" attitude toward problem-solving are also important qualities for this role. Your ability to contribute to code reviews, identify potential issues, and improve workflows through scripts and tools will benefit the team.

At Cisco, we value diversity, connection, and inclusivity. We celebrate the unique skills and perspectives each individual brings, fostering a culture of learning, development, and collaboration. Our employees are encouraged to explore multiple career paths within the organization, supported by cutting-edge technology and tools that enable hybrid work. As part of our commitment to social responsibility, we offer dedicated paid time off for volunteering and actively engage in employee resource organizations that foster belonging, allyship, and community impact.

Join us at Cisco to be part of a purpose-driven organization that leads the way in technology innovation, powering an inclusive future for all. Embrace the opportunity to reimagine applications, secure enterprises, transform infrastructure, and drive sustainability goals while contributing to a more inclusive and connected world. Take the next step in your career journey with us and unleash your potential at Cisco!
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
At ITIDATA, an EXL company, you will work with Cypher or Gremlin query languages, Neo4j, Python, PySpark, Hive, and Hadoop on tasks grounded in graph theory. Specifically, your role will involve creating and managing knowledge graphs using Neo4j. We are seeking Neo4j Developers with 7-10 years of experience in data engineering, including 2-3 years of hands-on experience with Neo4j. If you are looking for an exciting opportunity in graph databases, this position is ideal for you.

Key Skills & Responsibilities (a short PySpark-to-Neo4j ingestion sketch follows this listing):
- Expertise in Cypher or Gremlin query languages
- Strong understanding of graph theory
- Experience in creating and managing knowledge graphs using Neo4j
- Optimizing performance and scalability of graph databases
- Researching and implementing new technology solutions
- Working with application teams to integrate graph database solutions

Candidates who can join immediately or within 30 days will be given preference. Join us and be a part of our dynamic team working on cutting-edge graph database technologies.
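As a sketch of how the PySpark and Neo4j pieces of this stack can meet, the snippet below batches rows from a Spark DataFrame into Neo4j with a single parameterized UNWIND ... MERGE statement. The schema, credentials, and relationship type are assumptions for illustration.

```python
from neo4j import GraphDatabase
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("graph-ingest").getOrCreate()

# Stand-in for data read from Hive/Hadoop; the schema is illustrative.
edges = spark.createDataFrame([("Asha", "Ravi"), ("Ravi", "Meena")], ["src", "dst"])

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_batch(tx, rows):
    # UNWIND lets a single round trip MERGE a whole batch of relationships.
    tx.run(
        "UNWIND $rows AS row "
        "MERGE (a:Person {name: row.src}) "
        "MERGE (b:Person {name: row.dst}) "
        "MERGE (a)-[:KNOWS]->(b)",
        rows=rows,
    )

# Collecting to the driver is fine for small batches; very large graphs
# would stream partition by partition or use the Neo4j Spark connector.
rows = [r.asDict() for r in edges.collect()]
with driver.session() as session:
    session.execute_write(load_batch, rows)
driver.close()
```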
Posted 2 weeks ago
4.0 - 9.0 years
25 - 40 Lacs
Pune, Chennai
Hybrid
Hi,

Wishes from GSN! Pleasure connecting with you.

About the job: This is a golden opportunity with a leading BigTech IT services company, a valued client of GSN HR.

Exp Range: 4+ yrs
Work Loc: PUNE
Work Mode: WFO - Hybrid
Work Timing: General
CTC Range: 25 LPA to 40 LPA

******** Looking for SHORT JOINERS ********

Required Skills:

Neo4j Expertise: 4+ yrs of proven, in-depth experience with Neo4j, including its core concepts (nodes, relationships, properties, labels), architectural components, and deployment models (standalone, causal cluster). Strong in the Cypher query language for complex graph traversals, pattern matching, and data manipulation. Strong understanding of Neo4j indexing strategies (schema indexes, full-text indexes) and their impact on query performance (see the sketch below).

Graph Database Solutions: Strong experience in designing, implementing, and maintaining scalable graph database solutions and architectures. Familiarity with graph theory concepts, graph data modeling principles, and their application in real-world scenarios.

******** Looking for SHORT JOINERS ********

If this role excites you and aligns with your aspirations, don't hesitate to call me directly at 9840035825 or click APPLY. Let's explore this opportunity together!

Best Regards,
Ananth | GSN | 9840035825 | Google review: https://g.co/kgs/UAsF9W
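For a concrete sense of the indexing strategies mentioned above, here is a minimal sketch using the neo4j Python driver; the connection details, labels, and index names are illustrative assumptions.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Schema index: speeds up exact lookups and MERGE on Person.name.
    session.run(
        "CREATE INDEX person_name IF NOT EXISTS FOR (p:Person) ON (p.name)")
    # Full-text index: supports keyword and fuzzy search over names.
    session.run(
        "CREATE FULLTEXT INDEX person_search IF NOT EXISTS "
        "FOR (p:Person) ON EACH [p.name]")
    # PROFILE reveals whether the planner actually uses the schema index.
    summary = session.run(
        "PROFILE MATCH (p:Person {name: $name}) RETURN p", name="Asha"
    ).consume()
    print(summary.profile["operatorType"])

driver.close()
```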
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
At ITIDATA, an EXL company, your responsibilities will include working with Cypher or Gremlin query languages, Neo4j, Python, PySpark, Hive, and Hadoop. Your expertise in graph theory will be used to create and manage knowledge graphs with Neo4j.

We are looking for Neo4j Developers with 7-10 years of experience in data engineering, including 2-3 years of hands-on experience with Neo4j. If you are seeking an exciting opportunity in graph databases, this position offers the chance to optimize the performance and scalability of graph databases and to research and implement new technology solutions.

Key Skills & Responsibilities:
- Expertise in Cypher or Gremlin query languages
- Strong understanding of graph theory
- Experience in creating and managing knowledge graphs using Neo4j
- Optimizing performance and scalability of graph databases
- Researching and implementing new technology solutions
- Working with application teams to integrate graph database solutions

We are looking for candidates who are available immediately or within 30 days to join our team and contribute to our dynamic projects.
Posted 2 weeks ago
5.0 - 8.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Data Engineer and Developer

We are seeking a Data Engineer who will define and build the foundational architecture for our data platform: the bedrock upon which our applications will thrive. You'll collaborate closely with application developers, translating their needs into platform capabilities that turbocharge development. From the start, you'll architect for scale, ensuring our data flows seamlessly through every stage of its lifecycle: collecting, modeling, cleansing, enriching, securing, and storing data in an optimal format. Think of yourself as the mastermind orchestrating an evolving data ecosystem, engineered to adapt and excel amid tomorrow's challenges.

We are looking for a Data Engineer with 5+ years of experience who has:

Database Versatility: Deep expertise with relational databases (PostgreSQL, MS SQL, and beyond) as well as NoSQL systems (such as MongoDB, Cassandra, Elasticsearch).

Graph Databases: Ability to design and implement scalable graph databases that model complex relationships between entities for use in GenAI agent architectures, using Neo4j, Dgraph, or ArangoDB and query languages such as Cypher, SPARQL, and GraphQL (see the sketch after this list).

Data Lifecycle Expertise: Skill in all aspects of data management: collection, storage, integration, quality, and pipeline design.

Programming Proficiency: Fluency in programming languages such as Python and Go.

Collaborative Mindset: Experience partnering with GenAI engineers and data scientists.

Modern Data Paradigms: A strong grasp of Data Mesh, Data Products, and Data Fabric. Understanding of DataOps and Domain-Driven Design (DDD) is a plus.
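To illustrate the graph-for-GenAI-agents idea named in the list above, here is a small sketch: entities and their relationships are merged into Neo4j, and a variable-length traversal pulls the neighborhood an agent could serialize into its prompt as grounding context. All names and the hop limit are invented for the example.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def related_context(tx, entity, max_hops=2):
    # Path lengths cannot be query parameters in Cypher, so the hop
    # limit is interpolated; everything else stays parameterized.
    result = tx.run(
        f"MATCH (e:Entity {{name: $name}})-[*1..{max_hops}]-(ctx:Entity) "
        "RETURN DISTINCT ctx.name AS name",
        name=entity,
    )
    return [record["name"] for record in result]

with driver.session() as session:
    # A tiny invented entity graph an agent could draw context from.
    session.run(
        "MERGE (a:Entity {name: 'InvoiceService'}) "
        "MERGE (b:Entity {name: 'PaymentsDB'}) "
        "MERGE (c:Entity {name: 'BillingTeam'}) "
        "MERGE (a)-[:READS_FROM]->(b) "
        "MERGE (c)-[:OWNS]->(a)")
    print(session.execute_read(related_context, "InvoiceService"))

driver.close()
```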
Posted 3 weeks ago
10.0 - 14.0 years
10 - 14 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
- Design and implement scalable knowledge graph solutions using Neo4j.
- Write efficient and optimized Cypher queries for data retrieval and manipulation.
- Develop data pipelines to ingest, transform, and load data into graph databases.
- Collaborate with data scientists, architects, and domain experts to model complex relationships.
- Deploy and manage graph database solutions on AWS infrastructure.
- Ensure data quality, consistency, and security across the knowledge graph.
- Monitor performance and troubleshoot issues in graph-based applications.
- Stay updated with the latest trends and advancements in graph technologies and cloud services.

Required Skills & Qualifications:
- Proven experience with Neo4j and the Cypher query language.
- Strong understanding of graph theory, data modeling, and semantic relationships.
- Hands-on experience with AWS services such as EC2, S3, Lambda, RDS, and IAM.
- Proficiency in Python, Java, or Scala for data processing and integration.
- Experience with ETL pipelines, data integration, and API development.
- Familiarity with RDF, SPARQL, or other semantic web technologies is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
Posted 3 weeks ago
9.0 - 14.0 years
20 - 25 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
We are seeking a highly skilled and motivated Knowledge Graph Engineer to design, develop, and maintain graph-based data solutions using Neo4j, Cypher, and AWS. The ideal candidate will have a strong background in graph databases, data modeling, and cloud infrastructure, with a passion for turning complex data into meaningful insights.

Key Responsibilities:
- Design and implement scalable knowledge graph solutions using Neo4j.
- Write efficient and optimized Cypher queries for data retrieval and manipulation.
- Develop data pipelines to ingest, transform, and load data into graph databases (see the ingest sketch after this listing).
- Collaborate with data scientists, architects, and domain experts to model complex relationships.
- Deploy and manage graph database solutions on AWS infrastructure.
- Ensure data quality, consistency, and security across the knowledge graph.
- Monitor performance and troubleshoot issues in graph-based applications.
- Stay updated with the latest trends and advancements in graph technologies and cloud services.

Required Skills & Qualifications:
- Proven experience with Neo4j and the Cypher query language.
- Strong understanding of graph theory, data modeling, and semantic relationships.
- Hands-on experience with AWS services such as EC2, S3, Lambda, RDS, and IAM.
- Proficiency in Python, Java, or Scala for data processing and integration.
- Experience with ETL pipelines, data integration, and API development.
- Familiarity with RDF, SPARQL, or other semantic web technologies is a plus.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities.
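As an ingest-pipeline sketch for the responsibilities above, the snippet below streams a CSV into Neo4j with LOAD CSV, batching commits with CALL ... IN TRANSACTIONS. The file path, column names, and labels are invented for illustration; a real deployment on AWS would stage the file somewhere the database server can read it.

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Hypothetical CSV with columns: employee_id, name, manager_id.
# LOAD CSV streams rows server-side; CALL ... IN TRANSACTIONS batches
# commits so large files do not accumulate into one giant transaction.
INGEST = """
LOAD CSV WITH HEADERS FROM 'file:///employees.csv' AS row
CALL {
    WITH row
    MERGE (e:Employee {id: row.employee_id})
    SET e.name = row.name
    MERGE (m:Employee {id: row.manager_id})
    MERGE (e)-[:REPORTS_TO]->(m)
} IN TRANSACTIONS OF 1000 ROWS
"""

with driver.session() as session:
    # CALL ... IN TRANSACTIONS must run in an auto-commit transaction,
    # which session.run provides.
    session.run(INGEST)

driver.close()
```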
Posted 1 month ago
4.0 - 6.0 years
7 - 10 Lacs
Hyderabad
Work from Office
What you will do

In this vital role, you will be part of Research's Semantic Graph Team, which is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. The role primarily focuses on technologies such as RDF, SPARQL, and Python, and also involves semantic data integration and cloud-based data engineering. The ideal candidate should possess experience in the pharmaceutical or biotech industry, demonstrate deep technical skills, be proficient with big data technologies, and have demonstrated experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential. In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively.

Roles & Responsibilities:
- Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies.
- Develop and maintain semantic data models for biopharma scientific data.
- Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks.
- Ensure interoperability across federated data sources, linking relational and graph-based data.
- Implement and optimize CI/CD pipelines using GitLab and AWS.
- Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions.
- Collaborate with global multi-functional teams, including research scientists, data architects, business SMEs, software engineers, and data scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Collaborate with data scientists, engineers, and domain experts to improve research data accessibility.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical implementation.
- Maintain comprehensive documentation of processes, systems, and solutions.
- Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field

Preferred Qualifications and Experience:
- 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:

Must-Have Skills:
- Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g., AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g., Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices (a short rdflib illustration follows this section).
- Cloud and Automation Expertise: Good experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions.
- Technical Problem-Solving: Excellent problem-solving skills, with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets.

Good-to-Have Skills:
- Experience in biotech/drug discovery data engineering
- Experience applying knowledge graph, taxonomy, and ontology concepts in the life sciences and chemistry domains
- Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune)
- Familiarity with Cypher, GraphQL, or other graph query languages
- Experience with big data tools (e.g., Databricks)
- Experience in biomedical or life sciences research data management

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
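A compact illustration of the RDF and SPARQL side of this role, using rdflib; the namespace, classes, and triples are invented stand-ins for biopharma research data.

```python
from rdflib import Graph, Namespace, RDF

# Invented namespace and triples standing in for research data.
EX = Namespace("http://example.org/research/")
g = Graph()
g.bind("ex", EX)

g.add((EX.compound42, RDF.type, EX.Compound))
g.add((EX.compound42, EX.inhibits, EX.targetEGFR))
g.add((EX.targetEGFR, RDF.type, EX.ProteinTarget))

# SPARQL: which compounds inhibit which protein targets?
query = """
PREFIX ex: <http://example.org/research/>
SELECT ?compound ?target WHERE {
    ?compound a ex:Compound ;
              ex:inhibits ?target .
    ?target a ex:ProteinTarget .
}
"""
for compound, target in g.query(query):
    print(compound, "inhibits", target)
```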
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Hyderabad
Work from Office
What you will do

Role Description: We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge of graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows (see the PySpark sketch after this listing).
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lake, warehouse, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you

Must-Have Skills:
- Hands-on experience with Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience in Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3-4+ years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 5-8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly, be organized, and be detail-oriented
- Strong presentation and public speaking skills
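As a sketch of the Databricks side of this role, the snippet below derives deduplicated node and edge tables from raw events with PySpark, a common staging step before bulk-loading a graph database. The table and column names are invented, and it assumes a Databricks runtime where Delta is the default table format and the target schema already exists.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("graph-prep").getOrCreate()

# Invented source data standing in for a raw interaction-events table.
events = spark.createDataFrame(
    [("u1", "u2", "2024-01-05"), ("u2", "u3", "2024-01-06")],
    ["src_id", "dst_id", "event_date"],
)

# Deduplicated node and edge sets for the downstream graph load.
nodes = (
    events.select(F.col("src_id").alias("id"))
    .union(events.select(F.col("dst_id").alias("id")))
    .distinct()
)
edges = events.select("src_id", "dst_id").distinct()

# On Databricks, saveAsTable writes Delta by default; the graph_stage
# schema is assumed to exist.
nodes.write.mode("overwrite").saveAsTable("graph_stage.nodes")
edges.write.mode("overwrite").saveAsTable("graph_stage.edges")
```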
Posted 1 month ago