
3 Graph Modeling Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Engineer at CLOUDSUFI, a Google Cloud Premier Partner, you will design, develop, and deploy graph database solutions using Neo4j for economic data analysis and modeling. Your expertise in graph database architecture, data pipeline development, and production system deployment will be central to this position.

Key Responsibilities:
- Design and implement Neo4j graph database schemas for complex economic datasets
- Develop efficient graph data models and create and optimize Cypher queries
- Build graph-based data pipelines for real-time and batch processing
- Architect scalable data ingestion frameworks and develop ETL/ELT processes
- Implement data validation and monitoring systems
- Build APIs and services for graph data access and manipulation
- Deploy and maintain Neo4j clusters in production environments
- Implement backup and disaster recovery solutions
- Monitor database performance, optimize queries, and manage capacity planning
- Establish CI/CD pipelines for graph database deployments
- Collaborate with economists and analysts to translate business requirements into graph solutions

Requirements:
- 5-10 years of experience, with a BTech / BE / MCA / MSc Computer Science background
- Expertise in Neo4j database development, graph modeling, and the Cypher Query Language
- Proficiency in programming languages such as Python, Java, or Scala
- Experience with data pipeline tools such as Apache Kafka and Apache Spark
- Experience with cloud platforms (AWS, GCP, or Azure) and containerization
- Experience with graph database administration, performance tuning, distributed systems, database clustering, data warehousing concepts, and dimensional modeling is beneficial
- Knowledge of financial datasets, market data, economic indicators, data governance, and compliance in financial services is desired
Preferred qualifications include Neo4j Certification; a Master's degree in Computer Science, Economics, or a related field; 5+ years of industry experience in financial services or economic research; and additional skills in machine learning on graphs, network analysis, and time-series analysis. The technical environment includes Neo4j Enterprise Edition with APOC procedures, Apache Kafka, Apache Spark, Docker, Kubernetes, Git, Jenkins/GitLab CI, and monitoring tools such as Prometheus, Grafana, and the ELK stack. Your application should include a portfolio demonstrating Neo4j graph database projects, examples of production graph systems you have built, and experience with economic or financial data modeling.
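As a rough illustration of the Cypher query work this role describes — the labels, properties, and function below are hypothetical examples, not taken from the posting — a minimal sketch of composing a parameterized Cypher statement for an economic-indicator graph:

```python
def indicator_cypher(country: str, indicator: str, value: float, year: int):
    """Build a parameterized Cypher statement linking a country node to an
    economic-indicator node. The schema (labels, relationship type,
    property names) is illustrative only."""
    query = (
        "MERGE (c:Country {name: $country}) "
        "MERGE (i:Indicator {name: $indicator}) "
        "MERGE (c)-[r:REPORTED {year: $year}]->(i) "
        "SET r.value = $value"
    )
    # Parameters are passed separately so the query plan can be cached
    # and values are never string-interpolated into the statement.
    params = {"country": country, "indicator": indicator,
              "value": value, "year": year}
    return query, params

query, params = indicator_cypher("India", "GDP growth", 7.2, 2023)
```

Using `MERGE` rather than `CREATE` keeps repeated ingestion runs idempotent, which matters for the batch pipelines the posting mentions.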

Posted 1 month ago

10.0 - 14.0 years

10 - 14 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities:
- Design and implement scalable knowledge graph solutions using Neo4j
- Write efficient, optimized Cypher queries for data retrieval and manipulation
- Develop data pipelines to ingest, transform, and load data into graph databases
- Collaborate with data scientists, architects, and domain experts to model complex relationships
- Deploy and manage graph database solutions on AWS infrastructure
- Ensure data quality, consistency, and security across the knowledge graph
- Monitor performance and troubleshoot issues in graph-based applications
- Stay current with the latest trends and advancements in graph technologies and cloud services

Required Skills & Qualifications:
- Proven experience with Neo4j and the Cypher query language
- Strong understanding of graph theory, data modeling, and semantic relationships
- Hands-on experience with AWS services such as EC2, S3, Lambda, RDS, and IAM
- Proficiency in Python, Java, or Scala for data processing and integration
- Experience with ETL pipelines, data integration, and API development
- Familiarity with RDF, SPARQL, or other semantic web technologies is a plus
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration abilities
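The ingest-transform-load work described above can be sketched in miniature. This is a hedged example, not the company's pipeline: the field names (`src`, `rel`, `dst`) and the normalization rules are assumptions chosen for illustration.

```python
def rows_to_edges(rows):
    """Transform flat records into deduplicated (source, relation, target)
    triples suitable for bulk loading into a graph database.
    Field names and normalization are illustrative assumptions."""
    seen = set()
    edges = []
    for row in rows:
        # Normalize: trim identifiers, uppercase the relation type
        # (a common convention for graph relationship names).
        edge = (row["src"].strip(), row["rel"].upper(), row["dst"].strip())
        if edge not in seen:  # keep only the first occurrence
            seen.add(edge)
            edges.append(edge)
    return edges

rows = [
    {"src": "acct_1", "rel": "transfers_to", "dst": "acct_2"},
    {"src": "acct_1", "rel": "transfers_to", "dst": "acct_2"},  # duplicate
    {"src": "acct_2", "rel": "owned_by", "dst": "person_9"},
]
edges = rows_to_edges(rows)
```

Deduplicating before load is a cheap way to enforce the data-consistency requirement the posting lists, since duplicate relationships in a knowledge graph skew downstream analytics.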

Posted 2 months ago

3.0 - 8.0 years

3 - 8 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains
- Build and optimize graph databases such as Stardog, Neo4j, or MarkLogic to support query performance, scalability, and reliability
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing
- Ensure data quality, governance, and security standards are met across all graph data initiatives
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics

What we expect of you

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic
- Strong understanding of graph theory, graph modeling, and traversal algorithms
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills
- Excellent collaboration and communication skills, with experience working with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries
- Experience writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience; or Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Quick to learn, organized, and detail-oriented
- Strong presentation and public speaking skills
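The traversal algorithms this role expects familiarity with can be as simple as a breadth-first search over an adjacency list. A minimal, database-agnostic sketch (the toy graph below is invented for illustration):

```python
from collections import deque

def bfs_reachable(graph, start):
    """Breadth-first traversal over an adjacency-list graph.
    Returns nodes in the order they are first reached — the basis for
    link analysis and shortest-path style queries on unweighted graphs."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    return order

# Toy graph: a -> b, a -> c, b -> d, c -> d
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
order = bfs_reachable(graph, "a")
```

Graph databases execute this kind of traversal natively (e.g. via Cypher path patterns or Gremlin steps), but understanding the underlying algorithm is what interviews for such roles typically probe.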

Posted 2 months ago
