Home
Jobs

356 Neo4j Jobs - Page 6

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

2.0 - 5.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do: Let’s do this. Let’s change the world. In this vital role you will be at the forefront of innovation, using your skills to design and implement pioneering AI/Gen AI solutions. With an emphasis on creativity, collaboration, and technical excellence, this role provides a unique opportunity to work on ground-breaking projects that enhance operational efficiency at the Amgen Technology and Innovation Centre while ensuring the protection of critical systems and data.

Roles & Responsibilities: Design, develop, and deploy Gen AI solutions using advanced LLMs such as the OpenAI API, open-source LLMs (Llama 2, Mistral, Mixtral), and frameworks like LangChain and Haystack. Design and implement AI and Gen AI solutions that drive productivity across all roles in the software development lifecycle.
Demonstrate the ability to rapidly learn the latest technologies and develop a vision for embedding solutions that improve operational efficiency within a product team. Collaborate with multi-functional teams (product, engineering, design) to set project goals, identify use cases, and ensure seamless integration of Gen AI solutions into current workflows.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications: Master’s degree and 1 to 3 years of experience with programming languages such as Java and Python; OR Bachelor’s degree and 3 to 5 years of experience with programming languages such as Java and Python; OR Diploma and 7 to 9 years of experience with programming languages such as Java and Python.

Preferred Qualifications: Proficiency in programming languages such as Python and Java. Advanced knowledge of the Python open-source software stack, such as Django or Flask and Django REST Framework or FastAPI. Experience working with RAG technologies and LLM frameworks, LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases. Familiarity with cloud security (AWS/Azure/GCP). Expertise in integrating and demonstrating Gen AI LLMs to maximize operational efficiency.

Good-to-Have Skills: Experience with graph databases (Neo4j and Cypher would be a big plus). Experience with prompt engineering and familiarity with frameworks such as DSPy would be a big plus.

Professional Certifications: AWS / GCP / Databricks.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
What you can expect of us: As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
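The Amgen role above centres on retrieval-augmented generation (RAG) with frameworks like LangChain and Haystack. As a hedged, library-free sketch of just the retrieval step (the toy word-count "embedding" and the sample corpus are invented for illustration; production pipelines use a trained embedding model and a vector database):

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; real RAG uses a trained embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    # The "R" in RAG: rank documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Cypher is the query language of the Neo4j graph database.",
    "Vector databases store embeddings for semantic search.",
    "Haystack and LangChain orchestrate LLM pipelines.",
]
# The retrieved passage would then be prepended to the LLM prompt as context.
print(retrieve("How do I query Neo4j?", docs)[0])
```

The generation step (not shown) simply sends the retrieved context plus the user question to an LLM API.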

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

We are seeking an experienced AI Solution Architect to lead the design and implementation of AI-driven, cloud-native applications. The ideal candidate will possess deep expertise in Generative AI, Agentic AI, cloud platforms (AWS, Azure, GCP), and modern data engineering practices. This role involves collaborating with cross-functional teams to deliver scalable, secure, and intelligent solutions in a fast-paced, innovation-driven environment.

Key Responsibilities: Design and architect AI/ML solutions, including Generative AI, Retrieval-Augmented Generation (RAG), and fine-tuning of Large Language Models (LLMs) using frameworks like LangChain, LangGraph, and Hugging Face. Implement cloud migration strategies for monolithic systems to microservices/serverless architectures using AWS, Azure, and GCP. Lead development of document automation systems leveraging models such as BART, LayoutLM, and Agentic AI workflows. Architect and optimize data lakes, ETL pipelines, and analytics dashboards using Databricks, PySpark, Kibana, and MLOps tools. Build centralized search engines using ElasticSearch, Solr, and Neo4j for intelligent content discovery and sentiment analysis. Ensure application and ML pipeline security with tools like OWASP ZAP, SonarQube, WebInspect, and container security tools. Collaborate with InfoSec and DevOps teams to maintain CI/CD pipelines, perform vulnerability analysis, and ensure compliance. Guide modernization initiatives across app stacks and coordinate BCDR-compliant infrastructures for mission-critical services. Provide technical leadership and mentoring to engineering teams during all phases of the SDLC.

Required Skills & Qualifications: 12+ years of total experience, with extensive tenure as a Solution Architect in AI and cloud-driven transformations.
Hands-on experience with: Generative AI, LLMs, prompt engineering, LangChain, AutoGen, Vertex AI, AWS Bedrock; Python, Java (Spring Boot, Spring AI), PyTorch; vector and graph databases: ElasticSearch, Solr, Neo4j; cloud platforms: AWS, Azure, GCP (CAF, serverless, containerization); DevSecOps: SonarQube, OWASP, OAuth2, container security. Strong background in application modernization, cloud-native architecture, and MLOps orchestration. Familiarity with front-end technologies: HTML, JavaScript, Angular, jQuery.

Certifications: Google Professional Cloud Architect; AWS Solutions Architect Associate; Cisco Certified Design Associate (CCDA); Cisco Certified Network Associate (CCNA); Cisco Security Ninja Green Belt.
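The centralized-search responsibilities above (ElasticSearch, Solr) rest on the inverted index: a map from each term to the documents containing it. A toy sketch of how such an index is built and queried, with invented sample documents (real engines add scoring, analyzers, and sharding on top of this core idea):

```python
import re
from collections import defaultdict

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    # Inverted index: term -> set of document ids containing that term.
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def search(index, query):
    # AND semantics: return ids of documents containing every query term.
    sets = [index.get(term, set()) for term in tokenize(query)]
    return sorted(set.intersection(*sets)) if sets else []

docs = [
    "invoice automation with LayoutLM",
    "centralized search with ElasticSearch and Solr",
    "sentiment analysis over customer reviews",
]
index = build_index(docs)
print(search(index, "search ElasticSearch"))  # [1]
```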

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Naukri logo

We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities: Develop interactive dashboards and web applications using RShiny. Connect and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL). Design and maintain backend data workflows and APIs. Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions. Optimize performance and usability of RShiny applications.

Functional Skills (Must-Have): Proven experience with R and RShiny in a production or research setting. Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics). Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic. Strong understanding of data visualization principles and UI/UX best practices. Experience with data integration and wrangling.

Good-to-Have Skills: Experience with additional graph databases (e.g., Neo4j, Stardog) is a plus. Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS). Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications. Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.
Professional Certifications (preferred): SAFe methodology. Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera). Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly).

Soft Skills: Excellent written and verbal communication skills (English), translating technology content into business language at various levels. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong problem-solving and analytical skills. Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across various projects.
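The MarkLogic semantics skills in the listing above revolve around SPARQL over RDF triples. Conceptually, a SPARQL basic graph pattern is (subject, predicate, object) matching with variables; a minimal pure-Python sketch of that idea (the triples and the helper are invented for illustration; MarkLogic answers the real query through its SPARQL endpoint):

```python
# A few RDF-style (subject, predicate, object) triples; invented sample data.
triples = {
    ("drugA", "treats", "conditionX"),
    ("drugA", "interactsWith", "drugB"),
    ("drugB", "treats", "conditionY"),
}

def match(pattern, store):
    # None plays the role of a SPARQL variable: it matches any value.
    return sorted(
        t for t in store
        if all(v is None or v == tv for v, tv in zip(pattern, t))
    )

# Roughly analogous to: SELECT ?o WHERE { :drugA :treats ?o }
results = match(("drugA", "treats", None), triples)
print(results)  # [('drugA', 'treats', 'conditionX')]
```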

Posted 1 week ago

Apply

3.0 - 5.0 years

37 - 45 Lacs

Bengaluru

Work from Office

Naukri logo

Job Title: Senior Data Science Engineer Lead
Location: Bangalore, India

Role Description: We are seeking a seasoned Data Science Engineer to spearhead the development of intelligent, autonomous AI systems. The ideal candidate will have a robust background in agentic AI, LLMs, SLMs, vector databases, and knowledge graphs. This role involves designing and deploying AI solutions that leverage Retrieval-Augmented Generation (RAG), multi-agent frameworks, and hybrid search techniques to enhance enterprise applications.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: Best-in-class leave policy. Gender-neutral parental leave. 100% reimbursement under the childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for 35 yrs.
and above.

Your key responsibilities: Design & develop agentic AI applications: utilize frameworks like LangChain, CrewAI, and AutoGen to build autonomous agents capable of complex task execution. Implement RAG pipelines: integrate LLMs with vector databases (e.g., Milvus, FAISS) and knowledge graphs (e.g., Neo4j) to create dynamic, context-aware retrieval systems. Fine-tune language models: customize LLMs and SLMs using domain-specific data to improve performance and relevance in specialized applications. NER models: train OCR- and NLP-based models to parse domain-specific details from documents (e.g., DocAI, Azure AI DIS, AWS IDP). Develop knowledge graphs: construct and manage knowledge graphs to represent and query complex relationships within data, enhancing AI interpretability and reasoning. Collaborate cross-functionally: work with data engineers, ML researchers, and product teams to align AI solutions with business objectives and technical requirements. Optimize AI workflows: employ MLOps practices to ensure scalable, maintainable, and efficient AI model deployment and monitoring.

Your skills and experience: 15+ years of professional experience in AI/ML development, with a focus on agentic AI systems. Proficient in Python, Python API frameworks, and SQL, and familiar with AI/ML frameworks such as TensorFlow or PyTorch. Experience in deploying AI models on cloud platforms (e.g., GCP, AWS). Experience with LLMs (e.g., GPT-4), SLMs, and prompt engineering. Understanding of semantic technologies, ontologies, and RDF/SPARQL. Familiarity with MLOps tools and practices for continuous integration and deployment. Skilled in building and querying knowledge graphs using tools like Neo4j. Hands-on experience with vector databases and embedding techniques. Familiarity with RAG architectures and hybrid search methodologies. Experience in developing AI solutions for specific industries such as healthcare, finance, or ecommerce. Strong problem-solving abilities and analytical thinking.
Excellent communication skills for cross-functional collaboration. Ability to work independently and manage multiple projects simultaneously.

How we'll support you: Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.
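Among the skills listed above is familiarity with hybrid search methodologies: blending a lexical (keyword) score with a semantic (embedding) score. The sketch below is illustrative only, with toy scorers and an arbitrary blend weight standing in for BM25 and a real embedding model; the sample documents are invented:

```python
import re
from collections import Counter
from math import sqrt

def tokens(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def keyword_score(query, doc):
    # Term overlap standing in for a BM25/keyword score.
    q, d = set(tokens(query)), set(tokens(doc))
    return len(q & d) / len(q) if q else 0.0

def vector_score(query, doc):
    # Toy cosine over word counts standing in for embedding similarity.
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[t] * d[t] for t in q)
    norm = sqrt(sum(v * v for v in q.values())) * sqrt(sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def hybrid_rank(query, docs, alpha=0.5):
    # alpha weights lexical vs. semantic evidence; 0.5 is an arbitrary choice.
    scored = [(alpha * keyword_score(query, d) + (1 - alpha) * vector_score(query, d), d)
              for d in docs]
    return [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)]

docs = [
    "cash management for corporate clients",
    "securities settlement and custody services",
    "trade finance guarantees and letters of credit",
]
print(hybrid_rank("securities custody", docs)[0])  # "securities settlement and custody services"
```

In practice the two result lists come from separate engines (e.g., a keyword index and a vector database) and are fused, often with reciprocal rank fusion instead of a weighted sum.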

Posted 1 week ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Mumbai

Work from Office

Naukri logo

Job Information: Job Opening ID: ZR_1963_JOB. Date Opened: 17/05/2023. Industry: Technology. Work Experience: 5-8 years. Job Title: Neo4j GraphDB Developer. City: Mumbai. Province: Maharashtra. Country: India. Postal Code: 400001. Number of Positions: 5.

A Graph Data Engineer is required for a complex supply-chain project.

Key required skills: Graph data modelling (experience with graph data models (LPG, RDF) and the Cypher graph language; exposure to various graph data modelling techniques). Experience with Neo4j Aura and optimizing complex queries. Experience with GCP stacks like BigQuery, GCS, Dataproc. Experience in PySpark and SparkSQL is desirable. Experience in exposing graph data to visualisation tools such as NeoDash, Tableau and Power BI.

The Expertise You Have: Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science, etc.). Demonstrable experience in implementing data solutions in the graph DB space. Hands-on experience with graph databases (Neo4j preferred, or any other). Experience tuning graph databases. Understanding of graph data model paradigms (LPG, RDF) and graph languages; hands-on experience with Cypher is required. Solid understanding of graph data modelling, graph schema development, and graph data design. Relational database experience; hands-on SQL experience is required.

Desirable (optional) skills: Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies. Understanding of developing highly scalable distributed systems using open-source technologies. Experience in supply chain data is desirable but not essential.

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad

I'm interested
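For supply-chain graph work like the listing above describes, Cypher expresses traversals declaratively. As a hedged illustration of what a one-hop MATCH does (the mini graph is invented; against a real Neo4j instance you would run the Cypher shown in the comment through the official Python driver):

```python
# Invented mini supply-chain graph. In Neo4j these would be nodes and
# relationships queried with Cypher, e.g.:
#   MATCH (s:Supplier)-[:SUPPLIES]->(p:Part) WHERE p.name = 'wheel'
#   RETURN s.name
supplies = {
    "AcmeCorp": ["wheel", "axle"],
    "BoltsInc": ["bolt"],
    "WheelCo": ["wheel"],
}

def suppliers_of(part):
    # Pure-Python equivalent of the one-hop Cypher MATCH above.
    return sorted(s for s, parts in supplies.items() if part in parts)

print(suppliers_of("wheel"))  # ['AcmeCorp', 'WheelCo']
```

The point of a native graph store is that multi-hop versions of this pattern (supplier of a part of a product of a customer) stay declarative and index-backed rather than turning into nested loops or SQL self-joins.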

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

India

Work from Office

Naukri logo

- Proficient in Python programming.
- Experience with Neo4j for graph database management and querying.
- Knowledge of cloud platforms including AWS, Azure, and GCP.
- Familiarity with Postgres and ClickHouse for database management and optimization.
- Understanding of serverless architecture for building and deploying applications.
- Experience with Docker for containerization and deployment.

Posted 1 week ago

Apply

17.0 - 19.0 years

0 Lacs

Andhra Pradesh

On-site

Software Engineering Associate Director - HIH - Evernorth

About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Position Summary: The Software Development Associate Director provides hands-on leadership, management, and thought leadership for a delivery organization enabling Cigna's technology teams. This individual will lead a team based in our Hyderabad Innovation Hub to deliver innovative solutions supporting multiple business and technology domains within Cigna. This includes the Sales & Underwriting, Producer, Service Operations, and Pharmacy business lines, as well as testing and DevOps enablement. The focus of the team is to build innovative go-to-market solutions enabling the business while modernizing our existing asset base to support business growth. The technology strategy is aligned to our business strategy, and the candidate will not only be able to influence technology direction but also establish our team through recruiting and mentoring employees and vendor resources. This is a hands-on position with visibility to the highest levels of the Cigna Technology team. This leader will focus on enabling innovation using the latest technologies and development techniques. This role will foster rapidly building out a scalable delivery organization that aligns with all areas within the Technology team. The ideal candidate will be able to attract and develop talent in a highly dynamic environment.

Job Description & Responsibilities: Provide leadership, vision, and design direction for the quality and development of the US Medical and Health Services Technology teams based at the Hyderabad Innovation Hub (HIH).
Work in close coordination with leaders and teams based in the United States, as well as contractors employed by the US Medical and Health Services Technology team who are based both within and outside of the United States, to deliver products and capabilities in support of Cigna's business lines. Provide leadership to HIH leaders and teams, ensuring the team is meeting the following objectives: Design, configuration, implementation, application design/development, and quality engineering within the supported technologies and products. Hands-on people manager who has experience leading agile teams of highly talented technology professionals developing large solutions and internal-facing applications. They are expected to work closely with developers, quality engineers, technical project managers, principal engineers, and business stakeholders to ensure that application solutions meet business/customer requirements. A servant-leader mentality and a history of creating an inclusive environment, fostering diverse views and approaches from the team, and coaching and mentoring them to thrive in a dynamic workplace. A history of embracing and incubating emerging technology and open-source products. A passion for building highly resilient, scalable, and available platforms, rich reusable foundational capabilities, and a seamless developer experience, while focusing on strategic vision and technology roadmap delivery in an MVP/iterative, fast-paced approach. Accountable for driving towards timely decisions while influencing across engineering and development delivery teams to drive towards meeting project timelines while balancing destination state. Ensure engineering solutions align with the Technology strategy and that they support the application's requirements. Plan and implement procedures that will maximize engineering and operating efficiency for application integration technologies. Identify and drive process improvement opportunities.
Proactive monitoring and management design of supported assets assuring performance, availability, security, and capacity. Maximize the efficiency (operational, performance, and cost) of the application assets. Experience Required: 17 to 19 years of IT and business/industry or equivalent experience preferred, with at least 5 years of experience in a leadership role with responsibility for the delivery of large-scale projects and programs. Leadership, cross-cultural communication, and familiarity with wide range of technologies and stakeholders. Strong Emotional Intelligence with the ability to foster collaboration across geographically dispersed teams. Experience Desired: Recognized leader with proven track record of delivering software engineering initiatives and cross-IT/business initiatives. Proven experience leading/managing technical teams with a passion for developing talent within the team. Experience with vendor management in an onshore/offshore model. Experience in Healthcare, Pharmacy and/or Underwriting systems. Experience with AWS. Education and Training Required: B.S. degree in Computer Science, Information Systems, or other related degrees; Industry certifications such as AWS Solution Architect, PMP, Scrum Master, or Six Sigma Green Belt are also ideal. Primary Skills: Familiarity with most of the following Application Development technologies: Python, RESTful services, React, Angular, Postgres, and MySQL (relational database management systems). Familiarity with most of the following Data Engineering technologies: Databricks, Spark, PySpark, SQL, Teradata, and multi-cloud environments. Familiarity with most of the following Cloud and Emerging technologies: AWS, LLMs (OpenAI, Anthropic), vector databases (Pinecone, Milvus), graph databases (Neo4j, JanusGraph, Neptune), prompt engineering, and fine-tuning AI models. 
Familiarity with the enterprise software development lifecycle, including production reviews and ticket resolution, navigating freeze/stability periods effectively, total cost of ownership reporting, and updating applications to align with evolving security and cloud standards. Familiarity with agile methodology, including SCRUM team leadership or Scaled Agile (SAFe). Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example. Deep people and matrix management skills, with a heavy emphasis on coaching and mentoring of less senior staff, and a strong ability to influence VP-level leaders. Proven ability to resolve issues and mitigate risks that could undermine the delivery of critical initiatives. Strong written and verbal communication skills with the ability to interact with all levels of the organization. Strong influencing/negotiation skills. Strong interpersonal/relationship management skills. Strong time and project management skills. Join us in driving growth and improving lives.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Overview: We are looking for a hands-on, full-cycle AI/ML Engineer who will play a central role in developing a cutting-edge AI agent platform. This platform is designed to automate and optimize complex workflows by leveraging large language models (LLMs), retrieval-augmented generation (RAG), knowledge graphs, and agent orchestration frameworks. As the AI/ML Engineer, you will be responsible for building intelligent agents from the ground up — including prompt design, retrieval pipelines, fine-tuning models, and deploying them in a secure, scalable cloud environment. You’ll also implement caching strategies, handle backend integration, and prototype user interfaces for internal and client testing. This role requires deep technical skills, autonomy, and a passion for bringing applied AI solutions into real-world use. Key Responsibilities: Design and implement modular AI agents using large language models (LLMs) to automate and optimize a variety of complex workflows Deploy and maintain end-to-end agent/AI workflows and services in cloud environments, ensuring reliability, scalability, and low-latency performance for production use Build and orchestrate multi-agent systems using frameworks like LangGraph or CrewAI, supporting context-aware, multi-step reasoning and task execution Develop and optimize retrieval-augmented generation (RAG) pipelines using vector databases (e.g., Qdrant, Pinecone, FAISS) to power semantic search and intelligent document workflows Fine-tune LLMs using frameworks such as Hugging Face Transformers, LoRA/PEFT, DeepSpeed, or Accelerate to create domain-adapted models Integrate knowledge graphs (e.g., Neo4j, AWS Neptune) into agent pipelines for context enhancement, reasoning, and relationship modeling Implement cache-augmented generation strategies using semantic caching and tools like Redis or vector similarity to reduce latency and improve consistency Build scalable backend services using FastAPI or Flask and develop lightweight user 
interfaces or prototypes with tools like Streamlit, Gradio, or React Monitor and evaluate model and agent performance using prompt testing, feedback loops, observability tools, and safe AI practices Collaborate with architects, product managers, and other developers to translate problem statements into scalable, reliable, and explainable AI systems Stay updated on the latest in cloud platforms (AWS/GCP/Azure), software frameworks, agentic frameworks, and AI/ML technologies Prerequisites: Strong Python development skills, including API development and service integration Experience with LLM APIs (OpenAI, Anthropic, Hugging Face), agent frameworks (LangChain, LangGraph, CrewAI), and prompt engineering Experience deploying AI-powered applications using Docker, cloud infrastructure (Azure preferred), and managing inference endpoints, vector DBs, and knowledge graph integrations in a live production setting Proven experience with RAG pipelines and vector databases (Qdrant, Pinecone, FAISS) Hands-on experience fine-tuning LLMs using PyTorch, Hugging Face Transformers, and optionally TensorFlow, with knowledge of LoRA, PEFT, or distributed training tools like DeepSpeed Familiarity with knowledge graphs and graph databases such as Neo4j or AWS Neptune, including schema design and Cypher/Gremlin querying Basic frontend prototyping skills using Streamlit or Gradio, and ability to work with frontend teams if needed Working knowledge of MLOps practices (e.g., MLflow, Weights & Biases), containerization (Docker), Git, and CI/CD workflows Cloud deployment experience with Azure, AWS, or GCP environments Understanding of caching strategies, embedding-based similarity, and response optimization through semantic caching Preferred Qualifications: Bachelor’s degree in Technology (B.Tech) or Master of Computer Applications (MCA) is required; MS in similar field preferred 7–10 years of experience in AI/ML, with at least 2 years focused on large language models, applied NLP, or 
agent-based systems. Demonstrated ability to build and ship real-world AI-powered applications or platforms, preferably involving agents or LLM-centric workflows. Strong analytical, problem-solving, and communication skills. Ability to work independently in a fast-moving, collaborative, and cross-functional environment. Prior experience in startups, innovation labs, or consulting firms is a plus.

Compensation: The compensation structure will be discussed during the interview.
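The cache-augmented generation responsibility in the role above relies on semantic caching: reuse a previous LLM response when a new query is close enough in embedding space. A minimal sketch, assuming a toy word-count embedding and an arbitrary 0.8 similarity threshold (a production system would use real embeddings and a store like Redis, as the listing mentions):

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy embedding; a real system would use a sentence-embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Return a cached answer when a new query is 'close enough' to an old one."""
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def get(self, query):
        q = embed(query)
        for emb, response in self.entries:
            if cosine(q, emb) >= self.threshold:
                return response
        return None  # cache miss: caller falls through to the LLM

    def put(self, query, response):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("What is the refund policy?", "Refunds are issued within 30 days.")
# A near-duplicate query hits the cache instead of calling the LLM again.
print(cache.get("what is the refund policy"))
```

The threshold trades latency savings against the risk of serving a stale or subtly wrong cached answer, so it is usually tuned per workload.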

Posted 1 week ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

Key Responsibilities: Graph Database Development: Design, develop, and maintain graph database schemas using Neo4j. Query Optimization: Optimize Neo4j queries for performance and efficiency. Data Processing & Analysis: Utilize Python, PySpark, or Spark SQL for data transformation and analysis. User Acceptance Testing (UAT): Conduct UAT to ensure data accuracy and overall system functionality. Data Pipeline Management: Develop and manage scalable data pipelines using Databricks and Azure Data Factory (ADF). Cloud Integration: Work with Azure cloud services and be familiar with Azure data engineering components.

Desired Skills: Strong experience with Neo4j and the Cypher query language. Proficient in Python and/or PySpark. Hands-on experience with Databricks and Azure Data Factory. Familiarity with data engineering tools and best practices. Good understanding of database performance tuning. Ability to work in fast-paced, client-driven environments.

Skills: Azure, data engineering tools, Neo4j, PySpark, Azure Data Factory, Spark SQL, Databricks, cloud, database performance tuning, Cypher query language, Python

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Company Description: NodeStar is a pioneering AI technology company that specializes in developing cutting-edge conversational AI applications. Our diverse team comprises visionary tech founders, seasoned executives, AI PhDs, and product pioneers who have forged a new path in AI innovation. We create integrated solutions that propel our partners to new heights by infusing conversational interfaces with game mechanics for interactive experiences across multiple platforms.

Role Description: This is a full-time role for a Senior/Staff Python Backend Developer at NodeStar. As a senior technical leader, you will architect and build scalable backend systems that power our AI-driven applications. You will lead complex technical initiatives, mentor engineering teams, and drive architectural decisions that shape our platform's future. This role requires deep expertise in Python, distributed systems, and cloud infrastructure, combined with the ability to translate business requirements into robust technical solutions that scale to millions of users.
Core Responsibilities: Architect and design large-scale distributed systems and microservices architecture. Lead technical initiatives across multiple teams and drive engineering excellence. Define technical roadmaps and architectural standards for backend systems. Mentor and guide junior and mid-level developers, fostering their professional growth. Own end-to-end delivery of complex features from design to production deployment. Drive technical decision-making and evaluate new technologies for adoption. Collaborate with product, AI/ML teams, and stakeholders to align technical solutions with business goals. Establish best practices for code quality, testing, deployment, and monitoring. Lead performance optimization initiatives and ensure system reliability at scale. Participate in on-call rotations and incident response for critical systems.

Qualifications: Bachelor's degree in Computer Science or related field (Master's preferred). 5+ years of professional backend development experience, with 2+ years in senior/lead roles. Expert-level proficiency in Python and deep understanding of its internals. Extensive experience with FastAPI, Django, and async Python frameworks. Proven track record of designing and implementing distributed systems at scale. Strong expertise in database design, optimization, and management (PostgreSQL, Redis). Deep knowledge of AWS services (EKS, RDS, Lambda, SQS, etc.)
and cloud architecture patterns Experience with microservices, event-driven architecture, and message queuing systems Expertise in API design, GraphQL, and RESTful services Strong understanding of software security best practices and compliance requirements Excellent communication skills and ability to influence technical decisions Preferred Qualifications Experience building AI/ML-powered applications and working with LLMs Expertise with container orchestration (Kubernetes) and infrastructure as code (Terraform) Experience with streaming data platforms and real-time processing Knowledge of LangChain, LangGraph, and modern AI application frameworks Experience with vector and graph databases in production environments Track record of leading successful migrations or major architectural changes Published articles, conference talks, or open-source contributions Experience in high-growth startups or AI-focused companies Technical Stack Languages: Python 3.x (expert level), with knowledge of Go or Rust a plus Frameworks: FastAPI, LangGraph, Django REST framework, Celery AI/ML: LangChain, Pydantic, experience with LLM integration Databases: PostgreSQL, Redis, Chroma, Neo4j, experience with sharding and replication Infrastructure: AWS (extensive), Docker, Kubernetes, Terraform Monitoring: DataDog, Prometheus, ELK stack or similar Architecture: Microservices, event-driven systems, CQRS, domain-driven design What We Offer Competitive salary Professional development opportunities Flexible work arrangements Collaborative and innovative work environment Paid time off and holidays Potential for equity We value skill and experience over tenure. If you have less than 5 years of experience but are passionate about backend development and have a proven track record of success, we encourage you to apply and be part of our innovative and dynamic team at NodeStar! Show more Show less
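The async Python frameworks the posting names (FastAPI and similar) all build on the same coroutine model: overlapping I/O-bound work instead of waiting on each call in turn. A minimal sketch of that idea with Python's standard library only; the function names and data are illustrative, not part of any real service:

```python
import asyncio

async def fetch_profile(user_id: int) -> dict:
    # Simulate a non-blocking I/O call (e.g. a database or HTTP request).
    await asyncio.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_many(user_ids: list[int]) -> list[dict]:
    # gather() runs the coroutines concurrently instead of sequentially,
    # which is the core idea behind async frameworks such as FastAPI.
    return await asyncio.gather(*(fetch_profile(u) for u in user_ids))

profiles = asyncio.run(fetch_many([1, 2, 3]))
print([p["id"] for p in profiles])  # → [1, 2, 3]
```

The three simulated calls complete in roughly the time of one, which is why async endpoints scale well for I/O-heavy backends.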

Posted 1 week ago

Apply

8.0 - 12.0 years

35 - 42 Lacs

Bengaluru

Work from Office

Naukri logo

Responsibilities: * Design and implement AI solutions using GML, Neo4j, ArangoDB, LangChain, LlamaIndex, and RAG. * Collaborate with cross-functional teams on ML projects using Python, PySpark, PyTorch, R, and SQL.
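The RAG pattern named above reduces to embedding a query, retrieving the most similar documents, and injecting them into the LLM prompt. A minimal retrieval sketch in plain Python, with toy hand-made vectors standing in for real embeddings (document names and vectors are invented for the example):

```python
import math

# Toy corpus with pretend embedding vectors; a real pipeline would use an
# embedding model and a vector store instead of this literal dict.
DOCS = {
    "neo4j": ([0.9, 0.1, 0.0], "Neo4j stores data as nodes and relationships."),
    "rag":   ([0.1, 0.9, 0.1], "RAG grounds LLM answers in retrieved context."),
    "spark": ([0.0, 0.2, 0.9], "PySpark distributes dataframe computation."),
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding and keep the top k.
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1][0]),
                    reverse=True)
    return [text for _, (_, text) in ranked[:k]]

context = retrieve([0.15, 0.85, 0.05])
# The retrieved text would then be prepended to the LLM prompt.
print(context)  # → ['RAG grounds LLM answers in retrieved context.']
```

Frameworks such as LangChain and LlamaIndex wrap exactly this retrieve-then-generate loop behind higher-level abstractions.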

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Linkedin logo

Job Profile Summary

The Cloud NoSQL Database Engineer performs database engineering and administration activities, including design, planning, configuration, monitoring, automation, self-serviceability, alerting, and space management. The role involves database backup and recovery, performance tuning, security management, and migration strategies. The ideal candidate will lead and advise on Neo4j and MongoDB database solutions, including migration, modernization, and optimization, while also supporting secondary RDBMS platforms (SQL Server, PostgreSQL, MySQL, Oracle). The candidate should be proficient in workload migrations to Cloud (AWS/Azure/GCP).

Key Responsibilities:
- Database Administration: Install, configure, and maintain Neo4j (graph) and MongoDB (NoSQL) databases in cloud and on-prem environments.
- NoSQL Data Modeling: Design and implement graph-based models in Neo4j and document-based models in MongoDB to optimize data retrieval and relationships.
- Performance Tuning & Optimization: Monitor and tune databases for query performance, indexing strategies, and replication performance.
- Backup, Restore, & Disaster Recovery: Design and implement backup and recovery strategies for Neo4j, MongoDB, and secondary database platforms.
- Migration & Modernization: Lead database migration strategies, including homogeneous and heterogeneous migrations between NoSQL, graph, and RDBMS platforms.
- Capacity Planning: Forecast database growth and plan for scalability, optimal performance, and infrastructure requirements.
- Patch Management & Upgrades: Plan and execute database software upgrades, patches, and service packs.
- Monitoring & Alerting: Set up proactive monitoring and alerting for database health, performance, and potential failures using Datadog, AWS CloudWatch, Azure Monitor, or Prometheus.
- Automation & Scripting: Develop automation scripts using Python, AWS CLI, PowerShell, or shell scripting to streamline database operations.
- Security & Compliance: Implement database security best practices, including access controls, encryption, key management, and compliance with cloud security standards.
- Incident & Problem Management: Work within ITIL frameworks to resolve incidents and service requests, and perform root cause analysis for problem management.
- High Availability & Scalability: Design and manage Neo4j clustering, MongoDB replication/sharding, and HADR configurations across cloud and hybrid environments.
- Vendor & Third-Party Tool Management: Evaluate, implement, and manage third-party tools for Neo4j, MongoDB, and cloud database solutions.
- Cross-Platform Database Support: Provide secondary support for SQL Server (Always On, Replication, Log Shipping), PostgreSQL (Streaming Replication, Partitioning), MySQL (InnoDB Cluster, Master-Slave Replication), and Oracle (RAC, Data Guard, GoldenGate).
- Cloud Platform Expertise: Hands-on with cloud-native database services such as AWS DocumentDB, DynamoDB, Azure CosmosDB, Google Firestore, and Google Bigtable.
- Cost Optimization: Analyze database workloads, optimize cloud costs, and recommend licensing enhancements.

Knowledge & Skills:
- Strong expertise in Neo4j (Cypher query language, APOC, graph algorithms, GDS library) and MongoDB (aggregation framework, sharding, replication, indexing).
- Experience with homogeneous and heterogeneous database migrations (NoSQL-to-NoSQL, Graph-to-RDBMS, RDBMS-to-NoSQL).
- Familiarity with database monitoring tools such as Datadog, Prometheus, CloudWatch, and Azure Monitor.
- Proficiency in automation using Python, AWS CLI, PowerShell, and Bash/shell scripting.
- Experience in cloud-based database deployment using AWS RDS, Aurora, DynamoDB, Azure SQL, Azure CosmosDB, GCP Cloud SQL, Firebase, and Bigtable.
- Understanding of microservices and event-driven architectures, integrating MongoDB and Neo4j with applications using Kafka, RabbitMQ, or AWS SNS/SQS.
- Experience with containerization (Docker, Kubernetes) and Infrastructure as Code (Terraform, CloudFormation, Ansible).
- Strong analytical and problem-solving skills for database performance tuning and optimization.

Education & Certifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Database specialty certifications in Neo4j and MongoDB (Neo4j Certified Professional, MongoDB Associate/Professional Certification).
- Cloud certifications (AWS Certified Database - Specialty, Azure Database Administrator Associate, Google Cloud Professional Data Engineer).

Preferred Experience:
- 5+ years of experience in database administration, with at least 3 years dedicated to Neo4j and MongoDB.
- Hands-on experience with GraphDB & NoSQL architecture and migrations.
- Experience working in DevOps environments and automated CI/CD pipelines for database deployments.
- Strong expertise in data replication, ETL, and database migration tools such as AWS DMS, Azure DMS, MongoDB Atlas Live Migrate, and the Neo4j ETL Tool.
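The automation-scripting responsibility above often includes backup housekeeping. A hedged pure-Python sketch of a hypothetical retention policy (the function, parameter names, and backup names are illustrative, not any vendor's API): keep the newest few backups unconditionally, then delete anything older than the retention window.

```python
from datetime import datetime, timedelta

def backups_to_delete(backups, now, days=7, min_keep=3):
    """backups: list of (name, created_at) tuples.

    Returns the names safe to delete: older than `days`, excluding the
    `min_keep` most recent backups, which are always retained.
    """
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    cutoff = now - timedelta(days=days)
    # Protect the newest min_keep backups, then flag anything past the cutoff.
    return [name for name, created in ordered[min_keep:] if created < cutoff]

now = datetime(2025, 6, 10)
# Ten daily dumps: dump-0 made today, dump-9 made nine days ago.
backups = [(f"dump-{i}", now - timedelta(days=i)) for i in range(10)]
print(backups_to_delete(backups, now))  # → ['dump-8', 'dump-9']
```

In production the same decision logic would wrap `neo4j-admin dump` or `mongodump` output listings; separating the policy from the side effects keeps it unit-testable.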

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Gurugram

Work from Office

Naukri logo

ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you’ll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems—the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What you’ll do

We are looking for experienced Knowledge Graph developers who have the following set of technical skillsets and experience.
- Undertake complete ownership in accomplishing activities and assigned responsibilities across all phases of the project lifecycle to solve business problems across one or more client engagements.
- Apply appropriate development methodologies (e.g. agile, waterfall) and best practices (e.g. mid-development client reviews, embedded QA procedures, unit testing) to ensure successful and timely completion of assignments.
- Collaborate with other team members to leverage expertise and ensure seamless transitions; exhibit flexibility in undertaking new and challenging problems and demonstrate excellent task management.
- Assist in creating project outputs such as business case development, solution vision and design, user requirements, prototypes, technical architecture (if needed), test cases, and operations management.
- Bring transparency in driving assigned tasks to completion and report accurate status.
- Bring a consulting mindset to problem solving and innovation by leveraging technical and business knowledge/expertise, and collaborate across other teams.
- Assist senior team members and delivery leads in project management responsibilities.
- Build complex solutions using programming languages, ETL service platforms, etc.

What you’ll bring
- Bachelor’s or master’s degree in computer science, engineering, or a related field.
- 4+ years of professional experience in knowledge graph development in Neo4j, AWS Neptune, or the Anzo knowledge graph database.
- 3+ years of experience in RDF ontologies, data modelling, and ontology development.
- Strong expertise in Python, PySpark, and SQL.
- Strong ability to identify data anomalies, design data validation rules, and perform data cleanup to ensure high-quality data.
- Project management and task planning experience, ensuring smooth execution of deliverables and timelines.
- Strong communication and interpersonal skills to collaborate with both technical and non-technical teams.
- Experience with automation testing.
- Performance Optimization: Knowledge of techniques to optimize knowledge graph operations such as data inserts.
- Data Modeling: Proficiency in designing effective data models within a knowledge graph, including relationships between tables and optimizing data for reporting.
- Motivation and willingness to learn new tools and technologies as per the team’s requirements.
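The knowledge-graph work described above, whether in Neo4j, Neptune, or an RDF store, ultimately reduces to pattern matching over subject-predicate-object triples. A toy illustration in plain Python (the entities and predicates are invented for the example; real systems would use Cypher or SPARQL):

```python
# A knowledge graph reduced to its essence: a set of
# (subject, predicate, object) triples, queried by pattern matching.
TRIPLES = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "hasForm", "tablet"),
    ("ibuprofen", "treats", "headache"),
    ("ibuprofen", "treats", "fever"),
}

def match(s=None, p=None, o=None):
    # None acts as a wildcard, like a variable in SPARQL or Cypher.
    return sorted(t for t in TRIPLES
                  if s in (None, t[0])
                  and p in (None, t[1])
                  and o in (None, t[2]))

# "Which drugs treat headache?"
drugs = [s for s, _, _ in match(p="treats", o="headache")]
print(drugs)  # → ['aspirin', 'ibuprofen']
```

Graph databases add indexing, traversal, and inference on top, but validation rules and anomaly checks can often be prototyped against exactly this triple view of the data.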
Additional Skills:
- Strong communication skills, both verbal and written, with the ability to structure thoughts logically during discussions and presentations.
- Experience in pharma or life sciences data: familiarity with pharmaceutical datasets, including product, patient, or healthcare provider data, is a plus.
- Experience in manufacturing data is a plus.
- Capability to simplify complex concepts into easily understandable frameworks and presentations.
- Proficiency in working within a virtual global team environment, contributing to the timely delivery of multiple projects.
- Travel to other offices as required to collaborate with clients and internal project teams.

Perks & Benefits

ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel

Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying?

At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all.
We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application

Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find out more at www.zs.com

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chandigarh, India

On-site

Linkedin logo

We are seeking an experienced MERN Stack Trainer to design, develop, and deliver instructor-led and hands-on training programs covering the full MERN (MongoDB, Express.js, React, Node.js) technology stack. The ideal candidate will possess strong software-architecture knowledge, be well-versed in backend management and design patterns, and be capable of guiding students through both core and advanced topics such as asynchronous programming, database design, scalability, reliability, and maintainability. This role requires designing curriculum, creating lab exercises, evaluating student progress, and continuously refining content to align with industry best practices.

Key Responsibilities

Curriculum Design & Development:
- Define learning objectives, course outlines, and module breakdowns for MERN stack topics.

Training Delivery & Facilitation:
- Conduct live instructor-led sessions (onsite/virtual) adhering to learning principles.
- Facilitate hands-on labs where participants build real-world projects (e.g., e-commerce site, chat application, CRUD apps).
- Demonstrate step-by-step development, debugging, and deployment workflows.
- Assign and review practical exercises; provide detailed feedback and remediation for struggling learners.
- Mentor participants on best practices, troubleshooting, and performance optimization.

Assessment & Evaluation:
- Design quizzes, coding challenges, and project assessments that rigorously test conceptual understanding and practical skills.
- Track participant progress (attendance, lab completion, assessment scores) and prepare weekly status reports.
- Provide certification guidance and mock interview sessions for MERN-related roles.
- Continuously collect participant feedback to refine content and delivery style.

Content Maintenance & Continuous Improvement:
- Stay up-to-date with the latest MERN ecosystem developments: new Node.js features, React releases, database enhancements, and DevOps tooling.
- Regularly revise training materials to incorporate emerging technologies (e.g., serverless functions, Next.js, GraphQL, TypeScript).
- Collaborate with instructional designers, subject-matter experts, and other trainers to ensure consistency and quality across programs.

Required Qualifications

Educational Background:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a closely related field.

Professional Experience:
- Minimum 3 years of hands-on experience designing and building full-stack applications using the MERN stack (Node.js, Express.js, React.js, MongoDB).
- Preferred: 3 years of formal training or mentoring experience in a classroom (physical/virtual) environment, preferably with engineering students or early-career software engineers.

Technical Proficiency (must demonstrate strong expertise in all of the following):
- Node.js & Express.js: building RESTful services, middleware patterns, debugging, error handling, performance tuning.
- MongoDB: schema design, indexing, aggregation pipelines, replication, and sharding.
- React.js: component architecture, hooks, state management (Redux or equivalent), React Router, testing frameworks (Jest, React Testing Library).
- Frontend technologies: HTML5 semantics, CSS3 (Flexbox, Grid, responsive design, Sass/LESS), Bootstrap, JavaScript (ES6+), jQuery fundamentals.
- Database administration: proficiency in at least one relational database (PostgreSQL or MariaDB) and one NoSQL/document database (MongoDB). Familiarity with Redis (caching/real-time sessions), Neo4j, and InfluxDB (optional).
- Software architecture & design patterns: SOLID principles, MVC/MVVM, event-driven patterns, microservices vs. monolith trade-offs.
- DevOps & tooling: Git/GitHub workflows, containerization basics (Docker), basic cloud deployment.
- Testing & quality: unit testing (Mocha/Chai, Jest), integration testing (Supertest), basic performance testing (JMeter), code linting (ESLint), code coverage.
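The MongoDB aggregation pipelines listed above chain stages such as `$match` and `$group`. For trainees who have not yet seen the pipeline syntax, the same logic can be expressed in plain Python (the collection and field names are illustrative only):

```python
from collections import defaultdict

# Orders "collection" as plain dicts; field names are invented for the demo.
orders = [
    {"customer": "a", "total": 10},
    {"customer": "b", "total": 5},
    {"customer": "a", "total": 7},
]

# Conceptual equivalent of the MongoDB pipeline:
#   [{"$match": {"total": {"$gt": 4}}},
#    {"$group": {"_id": "$customer", "spend": {"$sum": "$total"}}}]
matched = [o for o in orders if o["total"] > 4]   # $match stage
grouped = defaultdict(int)
for o in matched:                                 # $group stage with $sum
    grouped[o["customer"]] += o["total"]

print(dict(grouped))  # → {'a': 17, 'b': 5}
```

Seeing each stage as a plain filter or fold makes the real pipeline syntax far less opaque when it is introduced.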
Soft Skills:
- Excellent verbal and written communication skills; ability to explain complex concepts in a simplified and structured manner.
- Proven classroom management and facilitation skills; adaptable to diverse learner backgrounds.
- Strong problem-solving aptitude and the ability to perform live troubleshooting during sessions.
- Demonstrated organizational skills: ability to manage multiple batches, track progress, and ensure timely delivery of content.
- High degree of professionalism, punctuality, and ownership.

Posted 1 week ago

Apply

40.0 years

0 Lacs

Hyderābād

On-site

India - Hyderabad
JOB ID: R-213724
ADDITIONAL LOCATIONS: India - Hyderabad
WORK LOCATION TYPE: On Site
DATE POSTED: Jun. 03, 2025
CATEGORY: Information Systems

ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE

Role Description:

We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for the day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
- Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:

Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies.
- Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
- Excellent leadership, communication, and stakeholder engagement skills.
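The "process monitoring and automation" skill above often takes the form of a retry wrapper around flaky pipeline steps, with each failure logged for incident triage. A minimal sketch; the job callable, log messages, and parameters are hypothetical:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dataops")

def run_with_retries(job, max_attempts=3, delay=0.01):
    # Retry a transient-failure-prone pipeline step; re-raise on final failure
    # so the orchestrator can page the on-call escalation point.
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise
            time.sleep(delay)

# Simulated load step that fails twice before succeeding.
calls = {"n": 0}
def flaky_load():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient source outage")
    return "loaded 42 rows"

result = run_with_retries(flaky_load)
print(result)  # → loaded 42 rows
```

Orchestrators such as Airflow bake this pattern in as task-level retry settings; writing it by hand once makes those settings much easier to reason about.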
- Well versed with full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
- Data engineering management experience in Biotech/Life Sciences/Pharma.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, AllegroGraph, etc.

Education and Professional Certifications:
- Any degree and 9-13 years of experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Posted 1 week ago

Apply

30.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Position Overview

ABOUT APOLLO

Apollo is a high-growth, global alternative asset manager. In our asset management business, we seek to provide our clients excess return at every point along the risk-reward spectrum from investment grade to private equity, with a focus on three investing strategies: yield, hybrid, and equity. For more than three decades, our investing expertise across our fully integrated platform has served the financial return needs of our clients and provided businesses with innovative capital solutions for growth. Through Athene, our retirement services business, we specialize in helping clients achieve financial security by providing a suite of retirement savings products and acting as a solutions provider to institutions. Our patient, creative, and knowledgeable approach to investing aligns our clients, businesses we invest in, our employees, and the communities we impact, to expand opportunity and achieve positive outcomes.

OUR PURPOSE AND CORE VALUES

Our clients rely on our investment acumen to help secure their future. We must never lose our focus and determination to be the best investors and most trusted partners on their behalf. We strive to be:
- The leading provider of retirement income solutions to institutions, companies, and individuals.
- The leading provider of capital solutions to companies. Our breadth and scale enable us to deliver capital for even the largest projects, and our small-firm mindset ensures we will be a thoughtful and dedicated partner to these organizations. We are committed to helping them build stronger businesses.
- A leading contributor to addressing some of the biggest issues facing the world today, such as energy transition, accelerating the adoption of new technologies, and social impact, where innovative approaches to investing can make a positive difference.

We are building a unique firm of extraordinary colleagues who:
- Outperform expectations.
- Challenge convention.
- Champion opportunity.
- Lead responsibly.
- Drive collaboration.

As One Apollo team, we believe that doing great work and having fun go hand in hand, and we are proud of what we can achieve together.

Our Benefits

Apollo relies on its people to keep it a leader in alternative investment management, and the firm’s benefit programs are crafted to offer meaningful coverage for both you and your family. Please reach out to your Human Capital Business Partner for more detailed information on specific benefits.

Position Overview

At Apollo, we are a global team of alternative investment managers passionate about delivering uncommon value to our investors and shareholders. With over 30 years of proven expertise across Private Equity, Credit, and Real Assets in various regions and industries, we are known for our integrated businesses, our strong investment performance, our value-oriented philosophy, and our people. We seek a Senior Engineer/Full Stack Developer to innovate, manage, direct, architect, design, and implement solutions focused on our trade operations and controller functions across Private Equity, Credit, and Real Assets. The ideal candidate is a well-rounded, hands-on engineer passionate about delivering quality software on the Java stack. Our Senior Engineer will work closely with key stakeholders in our Middle Office and Controllers teams and in the Credit and Opportunistic Technology teams to successfully deliver business requirements, projects, and programs. The candidate will have proven skills in independently managing the full software development lifecycle, working with end-users, business analysts, and project managers in defining and refining the problem statement, and delivering quality solutions on time. They will have the aptitude to quickly learn and embrace emerging technologies and proven methodologies to innovate and improve the correctness, quality, and timeliness of solutions delivered by the team.
Primary Responsibilities
- Contribute to the development of elegant solutions for systems that result in simple, extensible, maintainable, high-quality code.
- Participate in design discussions, hands-on technical development, code reviews, quality assurance, observability, and product support.
- Use technical knowledge of patterns and code to identify risks and prevent software defects.
- Foster a culture of collaboration, disciplined software engineering practices, and a mindset to leave things better than you found them.
- Optimize team processes to improve productivity and responsiveness to feedback and changing priorities.
- Build strong relationships with key stakeholders, collaborate, and communicate effectively to reach successful outcomes.
- Be passionate about delivering high-impact and breakthrough value to stakeholders.
- Show a desire to learn the domain and deliver enterprise solutions at a higher velocity.
- Contribute to deliverables from the early stages of requirement gathering through development, testing, UAT, deployment, and post-production.
- Lead in the planning, execution, and delivery of the team’s commitments.

Qualifications & Experience:
- Master’s or bachelor’s degree in Computer Science or another STEM field.
- Experience with software development in the Alternative Asset Management or Investment Banking domain.
- 5+ years of software development experience in at least one of the following OO languages: Java, C++, or C#.
- 3+ years of Web 2.0 UI/UX development experience in at least one of the following frameworks using JavaScript/TypeScript: ExtJS, ReactJS, AngularJS, or Vue.
- Hands-on development expertise in Java, Spring Boot, REST, messaging, JPA, and SQL for the last 2+ years.
- Hands-on development expertise in building applications using RESTful and microservices architecture.
- Expertise in developing applications using TDD/BDD/ATDD, with hands-on experience with at least one of JUnit, Spring Test, TestNG, or Cucumber.
- A strong understanding of SOLID principles, design patterns, and enterprise integration patterns.
- A strong understanding of relational databases, SQL, ER modeling, and ORM technologies.
- A strong understanding of BPM and its application.
- Hands-on experience with various CI/CD practices and tools such as Jenkins, Azure DevOps, TeamCity, etc.
- Exceptional problem-solving and debugging skills.
- Awareness of emerging application development methodologies, design patterns, and technologies.
- Ability to quickly learn new and emerging technologies and adopt solutions from within the company or the open-source community.

Experience with the below will be a plus:
- Buy-side operational and fund accounting processes.
- Business processes and workflows using modern BPM/Low Code/No Code platforms (JBPM, Bonitasoft, Appian, Logic Apps, Unqork, etc.).
- OpenAPI, GraphQL, gRPC, ESB, SOAP, WCF, Kafka, and Node.
- Serverless architecture.
- Microsoft Azure.
- Designing and implementing microservices on AKS.
- Azure DevOps.
- Sencha platform.
- NoSQL databases (MongoDB, Cosmos DB, Neo4j).
- Python software development.
- Functional programming paradigm.

Apollo provides equal employment opportunities regardless of age, disability, gender reassignment, marital or civil partner status, pregnancy or maternity, race, color, nationality, ethnic or national origin, religion or belief, veteran status, gender/sex or sexual orientation, or any other criterion or circumstance protected by applicable law, ordinance, or regulation.
The above criteria are intended to be used as a guide only – candidates who do not meet all the above criteria may still be considered if they are deemed to have relevant experience/ equivalent levels of skill or knowledge to fulfil the requirements of the role. Any job offer will be conditional upon and subject to satisfactory reference and background screening checks, all necessary corporate and regulatory approvals or certifications as required from time to time and entering into definitive contractual documentation satisfactory to Apollo.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

As the global leader in high-speed connectivity, Ciena is committed to a people-first approach. Our teams enjoy a culture focused on prioritizing a flexible work environment that empowers individual growth, well-being, and belonging. We’re a technology company that leads with our humanity, driving our business priorities alongside meaningful social, community, and societal impact.

We believe in the power of people. We are a network strategy and technology company that is motivated by making a difference in people’s lives: their productivity, creativity, health and comfort. We’re looking for a highly motivated, talented and experienced engineer who is passionate about product verification automation activities and is ready to assume a leadership position within the team in addressing future projects. You will certify solutions that provide our customers opportunities to differentiate their service offerings in a very competitive market. The ideal candidate is a flexible, highly technical problem solver, with interdisciplinary knowledge of software, test, and test automation. You feel at home in a dynamic, multi-disciplined engineering environment, acting as an interface between product design, other Blue Planet test engineering teams, and members of other functional groups (support, documentation, marketing, etc.).

RESPONSIBILITIES
- Engage with various engineering teams, product line managers and product owners to transform concepts and high-level requirements into optimized test coverage and enhanced customer experience.
- Automate and maintain all manually devised and executed test cases using automation best practices, and maintain the CI/CD pipeline framework.
- Code E2E automated tests for the Angular UI frontend with Cucumber/Webdriver.io.
Coding Rest API testing automation Coding of System testing with ansible, bash scripting Drive (plan and implement) lab or simulation environment setup activities to fully address proposed testing scenarios and coordinate equipment acquisition/sharing agreements with the various teams concerned. Analyse test results and prepare test reports. Investigate software defects and highlight critical issues that can have potential customer impact and consult with software development engineers in finding resolution or to address problems related to specifications and/or test plans/procedures. Raise Agile Jira bugs for product defects Report on automation status Research the best tools/ways of test automation for required functionality Skills Expected from the candidate: Frontend testing frameworks/libraries: Cucumber/Webdriver.io Backend programming/markup languages: Python Backend testing: Rest API testing automation tools, Postman/Newman, Jasmine Load testing: JMeter, Grafana + Prometheus Container management: Docker, Kubernetes, OpenStack Testing Theory: terminology, testing types, asynchronous automated testing Continuous Integration Tools: Jenkins, TeamCity, GitLab Cloud Environments: AWS, Azure, Google cloud Version control system: Git, Bitbucket System Testing Automation with: Bash, Shell, Python, Ansible scripting Hands-on experience of CI/CD pipeline configuration and maintenance Solid operational and administrator experience with Unix operation systems Understanding of Web application and Microservice solution architecture Strong abilities to rapidly learn new complex technological concepts and apply knowledge in daily activities. Excellent written (documentation) and interpersonal communication skills (English). Strong abilities to work as part of a team or independently with little supervision. 
Experienced working as part of an Agile scrum team and with DevOps process Desirable For The Candidate Ticketing: Jira Documentation: Confluence, Gitlab Frontend programming/markup languages: Typescript/JavaScript, html, CSS, SVG Frontend development frameworks/libraries: Angular 2+, Node.js/npm, D3.js, gulp Programming theory: algorithms and data structures, relational and graph database concepts, etc. Non-critical Extras Domain: Telecom, Computer Networking, OSS Builds: Maven, NPM, JVM, NodeJS Databases: PostgreSQL, Neo4j, ClickHouse Test Management: TestRail Other Skills: ElasticSearch, Drools, Kafka integration, REST (on Spring MVC), SSO (LDAP, Reverse Proxy, OAuth2) Not ready to apply? Join our Talent Community to get relevant job alerts straight to your inbox. At Ciena, we are committed to building and fostering an environment in which our employees feel respected, valued, and heard. Ciena values the diversity of its workforce and respects its employees as individuals. We do not tolerate any form of discrimination. Ciena is an Equal Opportunity Employer, including disability and protected veteran status. If contacted in relation to a job opportunity, please advise Ciena of any accommodation measures you may require. Show more Show less

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Tata Consultancy Services is hiring Python Full Stack Developers!

Role: Python Full Stack Developer
Desired Experience Range: 6-8 years
Location of Requirement: Hyderabad

Desired Skills (Technical/Behavioral)

Primary Skill

Frontend:
- 6+ years of overall experience, with proficiency in React (2+ years), TypeScript (1+ year), and React hooks (1+ year)
- Experience with ESLint, CSS-in-JS styling (preferably Emotion), state management (preferably Redux), and JavaScript bundlers such as Webpack
- Experience integrating with RESTful APIs or other web services

Backend:
- Expertise with Python (3+ years, preferably Python 3)
- Proficiency with a Python web framework (2+ years, preferably Flask and FastAPI)
- Experience with a Python linter (preferably flake8), graph databases (preferably Neo4j), a package manager (preferably pip), Elasticsearch, and Airflow
- Experience developing microservices, RESTful APIs, or other web services
- Experience with database design and management, including NoSQL/RDBMS tradeoffs

Interested and eligible candidates can apply.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Data Engineering Lead. This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank. We’re recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do

We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We’ll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model monitoring performance best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need

To be successful in this role, you’ll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You’ll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation
- An understanding of machine learning, information retrieval or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
- Knowledge of messaging, event or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Join us as a Data Engineering Lead. This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank. We’re recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do

We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We’ll also expect you to be:
- Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code
- Helping to define common coding standards and model monitoring performance best practices
- Owning and delivering the automation of data engineering pipelines through the removal of manual stages
- Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
- Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
- Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists
- Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need

To be successful in this role, you’ll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You’ll also demonstrate:
- Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation
- An understanding of machine learning, information retrieval or recommendation systems
- Good working knowledge of CI/CD tools
- Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
- An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
- Knowledge of messaging, event or streaming technology such as Apache Kafka
- Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
- Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL

Posted 1 week ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Job Description

We are seeking a highly skilled and customer-focused GraphDB/Neo4j Solutions Engineer to join our team. This role is responsible for delivering high-quality solution implementations that help customers adopt our GraphDB-based product, and for collaborating with cross-functional teams to ensure customer success. The solution lead is expected to provide in-depth solutions on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The solution lead must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python).

Roles and Responsibilities
- Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions.
- Prepare product and live software demonstrations.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Design efficient graph schemas and develop prototypes that address customer requirements (e.g., fraud detection, recommendation engines, knowledge graphs).
- Apply knowledge of indexing strategies, partitioning, and query optimization in GraphDB.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements

Education and Experience
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: 5+ years in a solution lead role on a data-based software product such as GraphDB or Neo4j.

Must-Have Skills
- SQL expertise: 4+ years of experience in SQL for database querying, performance tuning, and debugging.
- Graph databases and GraphDB platforms: 4+ years of hands-on experience with Neo4j or similar graph database systems.
- Scripting and automation: 4+ years with strong skills in C, C++, and Python for automation, task management, and issue resolution.
- Virtualization and cloud knowledge: 4+ years with Azure, GCP, or AWS.
- Management skills: 3+ years of experience with data requirements gathering and data modeling, whiteboarding, and developing/validating proposed solution architectures; the ability to communicate complex information and concepts to prospective users in a clear and effective way.
- Monitoring and performance tools: experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking and load balancing: proficiency in TCP/IP, load balancing strategies, and troubleshooting network-related issues.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Job Description

We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (Bash/Python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (i.e., when a code change is required or a behavior is seen for the first time).

Roles and Responsibilities
- Respond to customer inquiries and provide in-depth technical support via multiple communication channels.
- Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Monitor and meet SLAs.
- Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points.
- Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of cloud (AWS, Azure, GCP, etc.), virtual, and bare-metal environments.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements

Must-Have Skills
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: GraphDB experience is a must; 5+ years in a technical support role on a data-based software product, at least at L3 level.
- Linux expertise: 4+ years with an in-depth understanding of Linux, including the filesystem, process management, memory management, networking, and security.
- Graph databases: 3+ years of experience with Neo4j or similar graph database systems.
- SQL expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging.
- Data streaming and processing: 2+ years of hands-on experience with Kafka, ZooKeeper, and Spark.
- Scripting and automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution.
- Containerization and orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies is essential.
- Monitoring and performance tools: experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking and load balancing: proficiency in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
- Web and API technologies: understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.

Nice-to-Have Skills
- Familiarity with data science or ML is an edge.
- Experience with LDAP, SSO, and OAuth authentication.
- Strong understanding of database internals and system architecture.
- Cloud certification (at least DevOps Engineer level).

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About the Company

Re:Sources is the backbone of Publicis Groupe, the world’s third-largest communications group. Formed in 1998 as a small team to service a few Publicis Groupe firms, Re:Sources has grown to 5,000+ people servicing a global network of prestigious advertising, public relations, media, healthcare, and digital marketing agencies. We provide technology solutions and business services including finance, accounting, legal, benefits, procurement, tax, real estate, treasury, and risk management to help Publicis Groupe agencies do their best: create and innovate for their clients. In addition to providing essential, everyday services to our agencies, Re:Sources develops and implements platforms, applications, and tools to enhance productivity, encourage collaboration, and enable professional and personal development. We continually transform to keep pace with our ever-changing communications industry and thrive on a spirit of innovation felt around the globe. With our support, Publicis Groupe agencies continue to create and deliver award-winning campaigns for their clients.

About the Role

The main purpose of this role is to advance the application of business intelligence, advanced data analytics, and machine learning for Marcel. The Data Scientist will work with other data scientists, engineers, and product owners to ensure the delivery of all commitments on time and to a high quality.

Responsibilities
- Design and develop advanced data science and machine learning algorithms, with a strong emphasis on Natural Language Processing (NLP) for personalized content, user understanding, and recommendation systems.
- Work on end-to-end LLM-driven features, including fine-tuning pre-trained models (e.g., BERT, GPT), prompt engineering, vector embeddings, and retrieval-augmented generation (RAG).
- Build robust models on diverse datasets to solve for semantic similarity, user intent detection, entity recognition, and content summarization/classification.
- Analyze user behaviour through data and derive actionable insights for platform feature improvements using experimentation (A/B testing, multivariate testing).
- Architect scalable solutions for deploying and monitoring language models within platform services, ensuring performance and interpretability.
- Collaborate cross-functionally with engineers, product managers, and designers to translate business needs into NLP/ML solutions.
- Regularly assess and maintain model accuracy and relevance through evaluation, retraining, and continuous improvement processes.
- Write clean, well-documented code in notebooks and scripts, following best practices for version control, testing, and deployment.
- Communicate findings and solutions effectively across stakeholders, from technical peers to executive leadership.
- Contribute to a culture of innovation and experimentation, continuously exploring new techniques in the rapidly evolving NLP/LLM space.

Qualifications
- Minimum experience (relevant): 3 years
- Maximum experience (relevant): 5 years

Required Skills
- Proficiency in Python and NLP frameworks: spaCy, NLTK, Hugging Face Transformers, OpenAI, LangChain.
- Strong understanding of LLMs, embedding techniques (e.g., SBERT, FAISS), RAG architecture, prompt engineering, and model evaluation.
- Experience in text classification, summarization, topic modeling, named entity recognition, and intent detection.
- Experience deploying ML models in production and working with orchestration tools such as Airflow and MLflow.
- Comfortable working in cloud environments (Azure preferred) and with tools such as Docker, Kubernetes (AKS), and Git.
- Strong experience with data science/ML libraries in Python (SciPy, NumPy, TensorFlow, scikit-learn, etc.).
- Strong experience working in cloud development environments (especially Azure, ADF, PySpark, Databricks, SQL).
- Experience building data science models for use in front-end, user-facing applications, such as recommendation models.
- Experience with REST APIs, JSON, and streaming datasets.
- Understanding of graph data; Neo4j is a plus.
- Strong understanding of RDBMS data structures, Azure Tables, Blob, and other data sources.
- Understanding of Jenkins and CI/CD processes using Git, for cloud configs and standard code repositories such as ADF configs and Databricks.

Preferred Skills
- Bachelor's degree in engineering, computer science, statistics, mathematics, information systems, or a related field from an accredited college or university (Master's degree preferred), or equivalent work experience.
- Advanced knowledge of data science techniques, and experience building, maintaining, and documenting models.
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases, preferably graph databases.
- Experience building and optimizing ADF- and PySpark-based data pipelines, architectures, and datasets on graph stores and Azure Data Lake.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.

Posted 1 week ago

Apply

4.0 - 9.0 years

7 - 16 Lacs

Pune, Bengaluru, Greater Noida

Work from Office


About the Role

We are seeking a skilled and security-conscious Backend Engineer to join our growing engineering team. In this role, you will be responsible for designing, developing, and maintaining secure backend systems and services. You'll work with modern technologies across cloud platforms, graph databases, and containerized environments to build scalable and resilient infrastructure.

Key Responsibilities
- Design and implement backend services and APIs using Python.
- Manage and query graph data using Neo4j.
- Work across cloud platforms (AWS, Azure, GCP) to build and deploy secure, scalable applications.
- Optimize and maintain relational and analytical databases including PostgreSQL and ClickHouse.
- Develop and deploy serverless applications and microservices.
- Containerize applications using Docker and manage deployment pipelines.
- Collaborate with security teams to integrate best practices and tools into the development lifecycle.

Mandatory Skills
- Proficiency in Python programming.
- Hands-on experience with Neo4j for graph database management and Cypher querying.
- Working knowledge of AWS, Azure, and Google Cloud Platform (GCP).
- Experience with PostgreSQL and ClickHouse for database optimization and management.
- Understanding of serverless architecture and deployment strategies.
- Proficiency with Docker for containerization and deployment.

Nice to Have
- Experience with AWS ECS and EKS for container orchestration.
- Familiarity with open-source vulnerability/secret scanning tools (e.g., Trivy, Gitleaks).
- Exposure to CI/CD pipelines and DevSecOps practices.

What We Offer
- Competitive compensation and benefits.
- Flexible work environment.
- Opportunities to work on cutting-edge security and cloud technologies.
- A collaborative and inclusive team culture.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Chennai

Work from Office


Job Title: Senior Consultant - Knowledge Graph and Semantic Engineer
Career Level: D3

Introduction to role:

Join AstraZeneca Operations IT, where your work directly impacts patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak, combining powerful science with leading digital technology platforms and data. With a passion for impacting lives through data, analytics, AI, machine learning, and more, we offer a dynamic and challenging environment filled with opportunities to learn and grow. Be part of a team that innovates, disrupts an industry, and changes lives.

Accountabilities:

As a Senior Consultant in Knowledge Graph and Semantic Engineering, you will design and develop ontologies, semantic models, and property graphs representing key business concepts in manufacturing and supply chain. You will define reusable graph patterns, apply standard methodologies in knowledge graph modeling, and develop SPARQL and Cypher queries for integrating, retrieving, and validating semantic data. You will support the integration of structured and unstructured data using semantic modeling and graph-based approaches. You will collaborate with stakeholders to define business-driven semantic use cases and technical requirements, maintain delivery backlogs, oversee implementation plans, and ensure alignment with business priorities. You will partner with product owners, data engineers, and architects to drive adoption of semantic technologies across functions, deliver training, guidance, and documentation to upskill team members and stakeholders on semantic technologies, and stay ahead of developments in semantic web, property graphs, linked data, and graph-based AI/ML.

Essential Skills/Experience:
- Hands-on experience with Neo4j and Cypher query development.
- Solid grounding in RDF, OWL, SHACL, SPARQL, and semantic modeling standard methodologies.
- Strong proficiency in Python (or an equivalent language) for automation, data transformation, and pipeline integration.
- Demonstrated ability to define use cases, structure delivery backlogs, and manage technical execution.
- Strong problem-solving and communication skills, with a delivery-focused mindset.
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field (Master's preferred).

Desirable Skills/Experience:
- Experience with additional graph platforms such as GraphDB, Stardog, or Amazon Neptune.
- Familiarity with Cognite Data Fusion, IoT/industrial data integration, or other large-scale operational data platforms.
- Understanding of knowledge representation techniques and reasoning systems.
- Exposure to AI/ML approaches using graphs or semantic features.
- Knowledge of tools such as Protégé, TopBraid Composer, or VocBench.
- Familiarity with metadata standards, data governance, and FAIR principles.

AstraZeneca is a place where diverse minds work inclusively to drive change across international boundaries. We couple technology with an inclusive mindset to develop a leading ecosystem. Our cross-functional teams bring together the best minds from across the globe to uncover new solutions. We think holistically about applying technology while building partnerships inside and out. By driving simplicity and efficiencies, we make a real difference. Ready to make an impact? Apply now to join our team!

Date Posted: 02-Jun-2025
Closing Date: 08-Jun-2025

Posted 2 weeks ago

Apply

Exploring Neo4j Jobs in India

Neo4j, a popular graph database management system, is seeing a growing demand in the job market in India. Companies are looking for professionals who are skilled in working with Neo4j to manage and analyze complex relationships in their data. If you are a job seeker interested in Neo4j roles, this article will provide you with valuable insights to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Delhi/NCR

Average Salary Range

The average salary range for Neo4j professionals in India varies by experience level:

  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

In the Neo4j skill area, a typical career progression may look like:

  • Junior Developer
  • Developer
  • Senior Developer
  • Tech Lead

Related Skills

Apart from expertise in Neo4j, professionals in this field are often expected to have or develop skills in:

  • Cypher Query Language
  • Data modeling
  • Database management
  • Java or Python programming
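These skills come together even in the smallest graph work. As a rough illustration (plain Python, no Neo4j server required — the node labels and names are invented for the example), here is how a property graph's core ideas map to code:

```python
# A toy property graph: nodes carry a label and properties,
# relationships are (start, type, end) triples -- the same shape
# that Cypher patterns like (a)-[:KNOWS]->(b) describe.
nodes = {
    1: {"label": "Person", "name": "Asha"},
    2: {"label": "Person", "name": "Ravi"},
    3: {"label": "Company", "name": "Acme"},
}
rels = [
    (1, "KNOWS", 2),
    (1, "WORKS_AT", 3),
]

def neighbours(node_id, rel_type=None):
    """Return ids of nodes connected to node_id in either direction,
    optionally filtered by relationship type -- roughly what a Cypher
    MATCH (n)--(m) pattern returns."""
    out = []
    for start, rtype, end in rels:
        if rel_type is not None and rtype != rel_type:
            continue
        if start == node_id:
            out.append(end)
        elif end == node_id:
            out.append(start)
    return out

print([nodes[i]["name"] for i in neighbours(1)])           # all of Asha's neighbours
print([nodes[i]["name"] for i in neighbours(1, "KNOWS")])  # only KNOWS edges
```

In a real Neo4j deployment the same lookup is a one-line Cypher pattern, but sketching it by hand makes clear why graph databases index relationships directly rather than computing them through joins.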

Interview Questions

  • What is a graph database? (basic)
  • Explain the difference between a graph database and a traditional relational database. (basic)
  • How does Neo4j handle relationships between nodes? (medium)
  • What is Cypher Query Language? (basic)
  • Can you give an example of a Cypher query to retrieve all nodes connected to a specific node? (medium)
  • How does Neo4j ensure data consistency in a distributed environment? (advanced)
  • What are the benefits of using Neo4j for social network analysis? (medium)
  • Explain the concept of indexing in Neo4j. (medium)
  • How does Neo4j handle transactions? (medium)
  • Can you explain the concept of graph traversal in Neo4j? (medium)
  • What are some common use cases for Neo4j in real-world applications? (medium)
  • How does Neo4j handle scalability? (advanced)
  • What is the significance of property graphs in Neo4j? (basic)
  • Explain the concept of cardinality in Neo4j. (medium)
  • How can you optimize Neo4j queries for better performance? (medium)
  • What are the key components of a Neo4j graph database? (basic)
  • How does Neo4j support ACID properties? (medium)
  • What are the limitations of Neo4j? (medium)
  • Can you explain the concept of graph algorithms in Neo4j? (medium)
  • How does Neo4j handle data import/export? (medium)
  • Explain the concept of labels and relationship types in Neo4j. (basic)
  • What are the different types of indexes supported by Neo4j? (medium)
  • How does Neo4j handle security and access control? (medium)
  • What are the advantages of using Neo4j over other graph databases? (medium)
  • How can you monitor and troubleshoot performance issues in Neo4j? (medium)
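For questions like the Cypher example above ("retrieve all nodes connected to a specific node"), it helps to have a concrete query in hand. The sketch below shows one possible answer; the `Person` label and `name` property are invented for the example, and the driver call is commented out because it needs a running Neo4j instance:

```python
# A parameterized Cypher query: the undirected pattern `--` matches a
# relationship in either direction, so this returns every node
# connected to the matched one.
query = """
MATCH (n:Person {name: $name})--(connected)
RETURN connected
"""

# With the official Python driver (pip install neo4j) this would run as:
#
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687",
#                               auth=("neo4j", "password"))
# with driver.session() as session:
#     records = session.run(query, name="Asha")

print(query.strip())
```

Passing `$name` as a parameter (rather than formatting it into the string) lets Neo4j cache the query plan and avoids Cypher injection — a point worth raising in an interview.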

Conclusion

As you explore Neo4j job opportunities in India, it's essential to not only possess the necessary technical skills but also be prepared to showcase your expertise during interviews. Stay updated with the latest trends in Neo4j and continuously enhance your skills to stand out in the competitive job market. Prepare thoroughly, demonstrate your knowledge confidently, and land your dream Neo4j job in India. Good luck!

cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies