
632 Neo4j Jobs - Page 11

JobPe aggregates listings so they are easy to find in one place; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Python Developer – Data Science & AI Integration
Location: Chandkheda, Ahmedabad, Gujarat 382424
Experience: 2–3 Years
Employment Type: Full-time
Work Mode: On-site

About the Role
We are seeking a talented and driven Python Developer to join our AI & Data Science team. The ideal candidate will have experience developing backend systems, working with legal datasets, and integrating AI/LLM-based chatbots (text and voice). This is a hands-on role where you’ll work across modern AI architectures such as RAG and embedding-based search using vector databases.

Key Responsibilities
Design and implement Python-based backend systems for AI and data science applications.
Analyze legal datasets and derive insights through automation and intelligent algorithms.
Build and integrate AI-driven chatbots (text and voice) using LLMs and RAG architecture.
Work with vector databases (e.g., Pinecone, ChromaDB) for semantic search and embedding pipelines.
Implement graph-based querying systems using Neo4j and Cypher.
Collaborate with cross-functional teams (Data Scientists, Backend Engineers, Legal SMEs).
Maintain data pipelines for structured, semi-structured, and unstructured data.
Ensure code scalability, security, and performance.

Required Skills & Experience
2–3 years of hands-on Python development experience in AI/data science environments.
Solid understanding of legal data structures and preprocessing.
Experience with LLM integrations (OpenAI, Claude, Gemini) and RAG pipelines.
Proficiency in vector databases (e.g., Pinecone, ChromaDB) and embedding-based similarity search.
Experience with Neo4j and Cypher for graph-based querying.
Familiarity with PostgreSQL and REST API design.
Strong debugging and performance optimization skills.

Nice to Have
Exposure to Agile development practices.
Familiarity with tools like LangChain or LlamaIndex.
Experience working with voice-based assistant/chatbot systems.
Bachelor's degree in Computer Science, Data Science, or a related field.

Why Join Us?
Work on cutting-edge AI integrations in a domain-focused environment.
Collaborate with a passionate and experienced cross-functional team.
Opportunity to grow in the legal-tech and AI solutions space.
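
For a concrete flavour of the graph-querying work this posting describes, here is a minimal sketch of running a Cypher query from Python with the official neo4j driver. The connection details and the Document/Entity schema are hypothetical placeholders, not details from the listing.

```python
# Minimal sketch: graph-based querying with the official Neo4j Python driver.
# The URI, credentials, and Document/Entity schema below are hypothetical.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")    # placeholder credentials

FIND_RELATED = """
MATCH (d:Document {id: $doc_id})-[:MENTIONS]->(e:Entity)<-[:MENTIONS]-(other:Document)
RETURN other.id AS id, other.title AS title, count(e) AS shared_entities
ORDER BY shared_entities DESC
LIMIT $k
"""

def related_documents(doc_id: str, k: int = 5) -> list[dict]:
    """Return the documents that share the most entities with doc_id."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(FIND_RELATED, doc_id=doc_id, k=k)
        return [r.data() for r in records]

if __name__ == "__main__":
    for row in related_documents("case-123"):
        print(row)
```

In a RAG pipeline of the kind the posting sketches, results like these would typically be merged with vector-store hits before prompt assembly.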

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

We’re Hiring: Data Engineer
Experience Level: Mid-Level / Senior (based on fit)

We’re looking for a skilled and motivated Data Engineer to join our growing team. If you're passionate about designing scalable data infrastructure and love working with cutting-edge tools like Apache Spark, Airflow, and Neo4j, this role is for you.

Key Responsibilities:
Design, build, and maintain scalable ETL pipelines for diverse data sources
Develop and optimize data processing workflows and models for performance and reliability
Leverage Apache Spark for distributed data transformations and large-scale processing
Schedule and manage data pipelines using Apache Airflow
Write efficient, maintainable Python code for data tasks and automation
Model and manage graph databases using Neo4j to extract insights from complex data relationships
Collaborate with data scientists, analysts, and cross-functional teams to deliver actionable insights
Maintain high data quality through testing, validation, and monitoring
Troubleshoot pipeline issues and ensure infrastructure reliability
Stay current with trends and advancements in data engineering

Qualifications:
Proven experience as a Data Engineer or in a similar role
Proficiency in Python, including libraries like Pandas and PySpark
Strong understanding of ETL processes and tools
Hands-on experience with Apache Spark and Airflow
Practical knowledge of Neo4j and Cypher for graph-based data modeling
Solid understanding of SQL and NoSQL databases
Familiarity with cloud platforms like AWS, GCP, or Azure
Strong analytical thinking and problem-solving skills
Excellent collaboration and communication abilities
Bachelor’s degree in Computer Science, Engineering, or a related field
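
As a hedged illustration of the Airflow-plus-Spark orchestration this role centres on, here is a minimal daily DAG wrapping a toy PySpark aggregation. The DAG id, file paths, and transform are assumptions, not details from the listing.

```python
# Illustrative sketch only: a daily Airflow DAG wrapping a small PySpark job.
# The DAG id, schedule, and file paths are assumptions, not from the posting.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def transform_events() -> None:
    """Toy Spark job: aggregate raw JSON events into daily counts."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_events").getOrCreate()
    events = spark.read.json("/data/raw/events/")   # assumed landing zone
    daily = events.groupBy(F.to_date("ts").alias("day")).count()
    daily.write.mode("overwrite").parquet("/data/curated/daily_counts/")
    spark.stop()

with DAG(
    dag_id="daily_event_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform_events", python_callable=transform_events)
```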

Posted 1 month ago

Apply

5.0 - 8.0 years

3 - 6 Lacs

Hyderabad

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters
Review the performance dashboard and the scores for the team
Support the team in improving performance parameters by providing technical support and process guidance
Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
Ensure standard processes and procedures are followed to resolve all client queries
Resolve client queries as per the SLAs defined in the contract
Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
Document and analyze call logs to spot the most frequent trends and prevent future problems
Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
Ensure all product information and disclosures are given to clients before and after the call/email requests
Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
If unable to resolve an issue, escalate it to TA & SES in a timely manner
Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
Troubleshoot all client queries in a user-friendly, courteous, and professional manner
Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business
Organize ideas and effectively communicate oral messages appropriate to listeners and situations
Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
Mentor and guide Production Specialists on improving technical knowledge
Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
Develop and conduct trainings (triages) within products for Production Specialists as per target
Inform the client about the triages being conducted
Undertake product trainings to stay current with product features, changes, and updates
Enroll in product-specific and any other trainings per client requirements/recommendations
Identify and document the most common problems and recommend appropriate resolutions to the team
Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance

Mandatory Skills: Neo4j Graph Database.
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are looking for a Svelte Developer to build lightweight, reactive web applications with excellent performance and maintainability.

Key Responsibilities:
Design and implement applications using Svelte and SvelteKit.
Build reusable components and libraries for future use.
Optimize applications for speed and responsiveness.
Collaborate with design and backend teams to create cohesive solutions.

Required Skills & Qualifications:
8+ years of experience with Svelte or similar reactive frameworks.
Strong understanding of JavaScript, HTML, CSS, and reactive programming concepts.
Familiarity with SSR and JAMstack architectures.
Experience integrating RESTful APIs or GraphQL endpoints.

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 1 month ago

Apply

0 years

0 Lacs

Nashik, Maharashtra, India

Remote

Company: Amonex Technologies Pvt. Ltd.
Product: Recaho POS – www.recaho.com
Location: [Office Location or Remote]
Internship Duration: 6 Months
Reporting To: Sales Manager / Marketing Lead

About Amonex Technologies
Amonex Technologies is a fast-growing SaaS company behind Recaho, an all-in-one restaurant management platform that is transforming how food and beverage businesses operate. With a presence in 18+ countries and over 11,000 customers, Recaho empowers restaurants, cafes, and cloud kitchens with tools for billing, inventory, CRM, and online ordering, all in one platform. At Amonex, we’re not just building software; we’re shaping the future of the global food service industry through innovation, data, and design. Join us on our mission to digitally empower 1 million food businesses across emerging markets.

About the Role
We are looking for a driven and enthusiastic Sales and Marketing Intern for a 6-month internship. This role is ideal for someone passionate about email marketing and digital outreach who is eager to dive deep into sales operations and customer acquisition strategies. You will work directly with our core sales and marketing teams to execute campaigns, learn tools, and gain real-world exposure to a high-growth SaaS business.

Key Responsibilities
Assist in planning and executing email marketing campaigns for lead generation and engagement.
Work with the sales team to manage leads, update the CRM, and optimize conversion workflows.
Research and segment databases for targeted outbound communication.
Help craft compelling content, including email templates, case studies, and sales decks.
Track and analyze campaign metrics, and identify opportunities for improvement.
Collaborate on special projects involving marketing automation, product launches, and field promotions.

Key Skills and Interests
Passion for email marketing, CRM systems, and customer engagement.
Eagerness to learn about sales funnels, marketing automation, and SaaS growth strategies.
Strong communication, writing, and organizational skills.
Ability to work independently and collaboratively in a fast-paced environment.
Prior exposure to tools like HubSpot, Mailchimp, or Zoho is a plus (but not required).

What You’ll Learn
How a high-growth SaaS startup builds and executes end-to-end sales and marketing funnels.
Hands-on experience with email campaigns, lead nurturing, and CRM operations.
Understanding of how marketing directly supports sales in driving business growth.
Industry-level exposure to the F&B tech landscape, with opportunities to contribute and make an impact.

About Company
We are a startup founded by ex-Infosys employees and based in Pune. We are developing next-generation e-commerce platforms in various flavors, including B2C, B2B, B2B2C, and marketplaces. Our mission is to replace current e-commerce and vertical solutions/platforms using modern-age technologies and frameworks to deliver exceptional performance and user experiences. Technologies we use include Node.js, GraphQL, MongoDB, Neo4j DB, Nginx, Docker, etc.

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

We are looking for a skilled GraphDB + Python Developer to join our team to design, develop, and maintain graph database solutions and integrate them with Python applications. The ideal candidate will have hands-on experience working with graph databases (such as GraphDB, Neo4j, or similar RDF/triple stores) and strong Python programming skills to build scalable, efficient, and robust data-driven applications.
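
Since the posting mentions RDF/triple stores alongside Neo4j, here is a small self-contained sketch of loading triples and running a SPARQL query with rdflib. The namespace and sample data are invented for illustration.

```python
# Minimal sketch: building an in-memory RDF graph and querying it with SPARQL.
# The ex: namespace and sample triples are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.knows, EX.bob))
g.add((EX.bob, RDF.type, EX.Person))
g.add((EX.bob, EX.name, Literal("Bob")))

QUERY = """
SELECT ?person ?friend
WHERE {
    ?person a ex:Person ;
            ex:knows ?friend .
}
"""

for person, friend in g.query(QUERY, initNs={"ex": EX}):
    print(person, "knows", friend)
```

Against a production triple store such as GraphDB, the same query would typically be sent to a SPARQL endpoint rather than an in-memory graph.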

Posted 1 month ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Science Intern: AI & Life Sciences (Paid)
Location: Pune (Hybrid)
Type: Paid Internship (6 months, potential extension/absorption)
Company: Dizzaroo – Transforming Drug Discovery and Development with AI

About Dizzaroo
At Dizzaroo, we are building AI-first tools to transform drug discovery and development. We explore, unravel, and organize complex biological, clinical, and scientific data to help bring new treatments to patients faster. We value bold ideas, flexibility, and a “no idea is a crazy idea” mindset.

Role Overview
We are seeking Data Science Interns to join our team in curating and cleaning specialized datasets for AI model fine-tuning in the life sciences domain. You will work with structured data (clinical, biomedical, genomic) and unstructured data (scientific publications, clinical protocols) to build high-quality datasets for our AI workflows supporting drug discovery and development. This is a unique opportunity to gain hands-on experience at the intersection of AI, data science, and life sciences while working with cutting-edge tools and graph-based data infrastructure.

What You Will Do
Curate, clean, and annotate domain-specific datasets, including scientific publications, clinical protocols, and regulatory documents for training large language models, and biomedical and genomic data for structured AI pipelines.
Use advanced tools and databases (Weaviate, Neo4j, SQL/NoSQL) to organize and manage large-scale, multimodal datasets.
Support data pipeline validation and quality checks to ensure clean, structured training data for AI models.
Assist with document chunking, metadata tagging, and knowledge graph development to enhance retrieval and structuring of scientific and clinical data (a chunking sketch follows this posting).
Collaborate with AI engineers and domain experts to align data curation with project goals.

What We’re Looking For
Background: Pursuing or recently completed a Bachelor's/Master’s in Data Science, Computer Science, Life Sciences, Biomedical Engineering, or a related field.
Skills: Strong in at least one domain, with working knowledge of the other.
If your strength is data science, you should have proficiency in Python (libraries like pandas, numpy, pytorch); exposure to SQL, and ideally to graph/vector databases (Neo4j, Weaviate); experience with data cleaning, ETL workflows, or text processing (NLP preprocessing); and curiosity to understand life sciences contexts.
If your strength is life sciences, you should have knowledge of biomedical or clinical data structures, scientific literature, or genomics; the ability to use Python or spreadsheets for basic data analysis; and an interest in applying data science tools to life sciences problems.
Mindset: Comfortable with ambiguity and learning complex domain contexts. High attention to detail with a commitment to data quality. Aligned with Dizzaroo’s values of creativity, flexibility, and challenging the status quo.

What You Will Gain
Exposure to real-world AI model training pipelines using structured and unstructured data in drug discovery and development.
Experience with advanced data infrastructure and tooling for cutting-edge AI workflows.
Opportunity to contribute to impactful projects across knowledge management, multimodal data integration, and computer vision in diagnostics.
Potential pathway to full-time opportunities with Dizzaroo based on performance.

How to Apply
Send your CV and a brief note on why you are interested in this role to kalpeshp@dizzaroo.com with the subject:
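
The document-chunking task mentioned in the posting can be illustrated with a simple overlapping-window splitter. This is a minimal sketch assuming character-based chunks; the size and overlap values are arbitrary.

```python
# Illustrative sketch of document chunking for retrieval pipelines.
# Chunk size and overlap are arbitrary assumptions, not values from the posting.
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding/indexing."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

if __name__ == "__main__":
    sample = "Clinical protocols describe procedures. " * 40
    print(len(chunk_text(sample)), "chunks")
```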

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description:
We are looking for a highly capable backend developer to optimize our web-based application performance. Your primary focus will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the server side. You will collaborate with our front-end application developers, design back-end components, and integrate data storage and protection solutions.

Responsibilities:
● Work with the team, collaborating with other engineers, frontend teams, and product teams to design and build backend applications and services
● Completely own application features end to end: through design, development, testing, launch, and post-launch support
● Deploy and maintain applications on cloud-hosted platforms
● Build performant, scalable, secure, and reliable applications
● Write high-quality, clean, maintainable code and perform peer code reviews
● Develop backend server code, APIs, and database functionality
● Propose coding standards, tools, frameworks, automation, and processes for the team
● Lead technical architecture and design for application development
● Work on POCs, try new ideas, and influence the product roadmap

Skills and Qualifications:
● At least 5+ years of experience in Node.js, MySQL, and backend development
● Experience in PHP and NoSQL is preferred
● Exceptional communication, organization, and leadership skills
● Excellent debugging and optimization skills
● Experience designing and developing RESTful APIs
● Expert-level web server setup/management with at least one of Nginx or Tomcat, including troubleshooting and setup in a cloud environment
● Experience with relational SQL and NoSQL databases, and familiarity with SQL/NoSQL and graph databases, specifically MySQL, Neo4j, Elastic, Redis, etc.; hands-on experience with AWS technologies like EC2, Lambda functions, SNS, and SQS; work on serverless architecture; and an exceptional track record in cloud ops for a live app
● Branching and version control best practices
● Expertise in building scalable microservices, database design, and service architecture
● Solid foundation in computer science with strong competency in OOPS, data structures, algorithms, and software design
● Strong Linux skills with troubleshooting, monitoring, and log file setup/analysis experience
● Troubleshooting application and code issues
● Knowledge of setting up unit tests
● Understanding of system design
● Updating and altering application features to enhance performance
● Writing clean, high-quality, high-performance, maintainable code and participating in code reviews
● Coordinating cross-functionally to ensure the project meets business objectives and compliance standards
● Experience with Agile or Scrum software development methodologies
● Knowledge expected in cloud computing, threading, performance tuning, and security

Preferred Qualifications:
● High ownership and the right attitude towards work
● Interest in learning new tools and technologies
● Proficiency in designing and coding web applications and/or services, ensuring high quality and performance, fixing application bugs, maintaining code, and deploying apps to various environments
● Bachelor’s degree in Computer Science or Software Engineering preferred

Posted 1 month ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

We are seeking a results-driven Senior Data Engineer for our German client to lead the development and implementation of advanced data infrastructure, with a strong focus on graph databases (Neo4j) and cloud-based architecture (AWS, GCP, Azure). In this role, you will be responsible for transforming complex data systems into scalable, secure, and high-performing platforms that power next-generation analytics. This is an opportunity to drive meaningful outcomes while building expertise in a fast-moving, DevOps-oriented environment. Career growth opportunities include technical leadership and potential progression into data architecture or platform engineering leadership roles.

Job Responsibilities
Lead the design and implementation of a scalable graph data architecture leveraging Neo4j, ensuring optimal query performance and alignment with business analytics needs within the first 90 days.
Own the full lifecycle of data ingestion and ETL pipelines, including development, orchestration, monitoring, and optimization of data flows across multiple cloud environments, with a measurable reduction in latency and improved data quality within 6 months.
Deploy and maintain infrastructure using Terraform, supporting infrastructure-as-code best practices across AWS and GCP, ensuring automated, reproducible deployments and >99.9% system uptime.
Automate CI/CD workflows using GitHub Actions, reducing manual operations by 50% within the first 6 months through effective automation of test, deploy, and monitoring pipelines.
Collaborate with cross-functional engineering, data science, and security teams to integrate and support graph-based solutions in production with clear documentation, support structures, and knowledge sharing.
Establish and enforce data security and access protocols, working with DevOps and InfoSec to ensure compliance with organizational standards by month 9.

Desired Qualifications
Demonstrated success designing and maintaining production-grade graph database systems (preferably Neo4j).
Strong command of Cypher, SQL, and Python for scalable data engineering solutions.
Hands-on experience with Terraform, including managing multi-cloud infrastructure.
Proven capability in building robust, reliable ETL pipelines and orchestrating workflows.
Experience with CI/CD tools like GitHub Actions in a production DevOps environment.
Strong communication skills, with the ability to explain complex data concepts clearly across teams.
Ability to independently identify and implement performance optimizations in data systems.

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At Draup, you will handle huge data sets while keeping scalability and performance in mind. You will consistently face challenges that require you to develop fast, efficient, and optimized solutions on the fly! Backend Engineers at Draup play a vital role in catalyzing the transformation of data into tangible business value for global enterprises. We work with best-in-class backend tools and cutting-edge solutions, along with the most flexible and scalable deployment options. You will be working with a proficient, smart, and experienced team of developers, researchers, and co-founders on product development use cases.

Responsibilities
Build highly scalable services to handle terabytes of data.
Find innovative ideas to solve highly complex engineering problems.
Push the boundaries of performance, scale, and reliability of Draup's core services.
Own, execute, and deliver products/features end-to-end, from planning to design to development to deployment.
Be the go-to person for the team for guidance and troubleshooting.
Be proactively involved in code reviews, architecture, and design discussions.
Mentor junior engineers and interns so their deliverables succeed.
Maintain engineering standards and best practices for coding, code reviews, and releases.

Requirements
2+ years of expertise in software development with one or more general-purpose programming languages (e.g., Python, Java, C/C++, Go). Experience in Python and Django is recommended.
Deep understanding of how to build an application with optimized RESTful APIs.
Knowledge of a web framework like Django or similar, with ORM or multi-tier, multi-DB-based data-heavy web application development, will help your profile stand out.
An entrepreneurial mindset, capable of quick problem-solving without sacrificing good design.
An out-of-the-box thinker with good code discipline.
Knowledge of GenAI tools and technologies is a plus.
Sound knowledge of SQL queries and databases like PostgreSQL (must) or MySQL.
Working knowledge of NoSQL DBs (Elasticsearch, Mongo, Redis, etc.) is a plus.
Knowledge of a graph DB like Neo4j or AWS Neptune adds extra credit to your profile.
Knowing queue-based messaging frameworks like Celery, RQ, Kafka, etc., and understanding distributed systems will be advantageous.
Understands a programming language's limitations well enough to exploit its behavior to the fullest potential.
Understanding of accessibility and security compliance.
Ability to communicate complex technical concepts to both technical and non-technical audiences with ease.
Diversity in skills like version control tools, CI/CD, cloud basics, good debugging skills, and test-driven development will help your profile stand out.

This job was posted by Suchanya Shetty from Draup.

Posted 1 month ago

Apply

6.0 - 9.0 years

32 - 35 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate,

We are hiring a Rust Developer to build safe, concurrent, and high-performance applications for system-level or blockchain development.

Key Responsibilities:
Develop applications using Rust and its ecosystem (Cargo, crates)
Write memory-safe, zero-cost abstractions for systems or backends
Build RESTful APIs, CLI tools, or blockchain smart contracts
Optimize performance using async/await and the ownership model
Ensure safety through unit tests, benchmarks, and fuzzing

Required Skills & Qualifications:
Proficiency in Rust, lifetimes, and borrowing
Experience with Tokio, Actix, or Rocket frameworks
Familiarity with WebAssembly, blockchain (e.g., Substrate), or embedded Rust
Bonus: background in C/C++, systems programming, or cryptography

Soft Skills:
Strong troubleshooting and problem-solving skills.
Ability to work independently and in a team.
Excellent communication and documentation skills.

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa Reddy
Delivery Manager
Integra Technologies

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Data Scientist Associate

Responsibilities
As a selected intern, your daily tasks will include:
Engaging in data science projects and analytics.
Developing and implementing data models and AI, ML, deep learning, NLP, GenAI, LangChain, LLM, LLaMA, OpenAI, and GPT-based solutions.
Managing data pipelines, ETL/ELT processes, and data warehousing.
Utilizing Python and its libraries for advanced programming tasks.
Handling data collection, management, cleaning, and transformation.
Creating data visualizations using BI tools such as Power BI, Kibana, and Google Data Studio.
Working with databases like MongoDB, Neo4j, Dgraph, and SQL.
Leveraging cloud platforms, including GCP, AWS, Azure, Linode, and Heroku.

Required Skills
Python, Flask, Django, MongoDB, API Development, Elasticsearch, Machine Learning, Artificial Intelligence

Job Details
Work Mode: Remote (Work From Home)
Start Date: Immediate
Duration: 6 months
Stipend: ₹10,000 – ₹12,000 per month
Industry: Information Technology & Services
Employment Type: 6-month probation followed by a full-time position based on performance

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

AI Developer – Student Engagement & Reporting Tool
Full-time · Gurugram (onsite) · Immediate start

Why we’re hiring
Growth Valley Community (GVC) turns ambitious teens into real-world builders. Our next leap is our AI tool: an AI layer that ingests every student artefact. We need an engineer who can own that engine end-to-end.

What you’ll build
Multi-agent pipeline: craft and fine-tune LLM prompts that tag skills, spot gaps, and recommend next steps.
Engagement brains: auto-generate nudges, streaks, and Slack/WhatsApp messages that keep WAU > 75%.
Parent & school dashboards: real-time, privacy-safe progress views (FastAPI + React).
Data plumbing: vector DB (Pinecone/Weaviate) + graph DB (Neo4j) + Supabase/Postgres for the event firehose.
Guardrails: PII redaction, hallucination tests, compliance logging.

Must-haves
2+ yrs Python ML/infra (TensorFlow/PyTorch, FastAPI, Docker).
Hands-on with LLM APIs (OpenAI, Anthropic, or similar) and prompt engineering.
Experience shipping production rec-sys or NLP features at scale.
Comfortable with vector search, embeddings, and at least one graph database (see the sketch after this posting).
Git, CI/CD, and cloud (AWS/GCP) second nature.

Nice-to-haves
Ed-tech or youth-focused product background.
LangChain / LlamaIndex, Supabase Edge Functions.
Basic front-end chops (React, Next.js) for rapid UI tweaks.

Culture & perks
Speed over slide decks: we demo every Friday, and shipping beats pitching.
Lean squad.
Impact: your model recommendations land in 5k+ teen inboxes each week.

How to apply
Send a GitHub / Kaggle link + résumé to sam@growthvalleycommunity.com with subject “AI Dev – ”. Include one paragraph on the coolest ML system you’ve shipped (numbers, not adjectives). We reply to every application within five working days and run a single 90-min tech interview (live coding + system design).

Build the engine that maps teen talent to tangible outcomes, and see your models change lives, not ad clicks.
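
The vector-search requirement above can be illustrated with a tiny cosine-similarity ranker over precomputed embeddings. A real system would query a vector DB such as Pinecone or Weaviate; the random vectors below are stand-ins for embedding-model output.

```python
# Toy sketch of embedding-based retrieval: rank stored vectors by cosine
# similarity to a query vector. The random vectors are stand-ins for real
# embeddings; production systems would query a vector DB instead.
import numpy as np

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(100, 384))   # pretend 384-dim embeddings
query = rng.normal(size=384)

def top_k(query: np.ndarray, docs: np.ndarray, k: int = 5) -> np.ndarray:
    """Indices of the k rows of docs most cosine-similar to query."""
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    q_n = query / np.linalg.norm(query)
    return np.argsort(docs_n @ q_n)[::-1][:k]

print(top_k(query, doc_vectors))
```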

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Director - Data & Analytics Engineer
GCL: F

Introduction to Role
As the Director of Data & Analytics Engineering, you'll be at the forefront of revolutionizing how competitive intelligence is created, shared, and consumed within AstraZeneca. Through Connected Insights, we are making our data Findable, Accessible, Interoperable, and Re-usable (FAIR). You'll architect solutions that ensure unstructured data is readily available for AI applications, using innovative technologies like vector databases and knowledge graphs. Your leadership will guide a team of dedicated engineers in developing scalable data solutions and database enhancements for large-scale products. Are you prepared to drive innovation and excellence in data engineering?

Accountabilities
Data Platform Design and Implementation: Design and implement advanced data capabilities, including auto-ingestion, data cataloguing, automated access control, lifecycle management, backup & restore, and AI-ready data structures. Implement vector databases and knowledge graphs to support AI and machine learning initiatives.
AI-Focused Solution Architecture: Collaborate with AI Engineering leads and Architects to design AI-ready data architectures. Analyze data requirements for AI applications, modeling both structured and unstructured data sources.
ETL and Data Processing: Implement optimal ETL workflows using SQL, APIs, ETL tools, AWS big data technologies, and AI-specific techniques. Develop processes to prepare data for AI model training and inference.
AI Integration and Technical Leadership: Lead technical deliveries across multiple initiatives, focusing on integrating AI capabilities into existing data solutions. Provide technical feedback on design, architecture, and integration of AI-enhanced data sourcing platforms.
Collaboration, Teamwork, and Problem Solving: Liaise with technical infrastructure teams to resolve issues impacting AI application performance. Engage with architects, product owners, and business stakeholders to ensure efficient engineering of AI-driven data solutions.
Agile Project Management: Lead a dedicated Data pod, including managing backlogs, sprints, and planning. Collaborate with product pods to help them meet their deliveries.
Standards and Best Practices: Define data engineering and AI integration standards in collaboration with architects and AI Engineering leads. Establish standard processes for managing AI model versioning and data lineage.
Quality Assurance and Documentation: Test, document, and quality-assess new data and AI solutions. Implement robust testing frameworks for AI models and data pipelines.
Research and Development: Explore emerging AI technologies and drive their integration into existing data infrastructure.
Technical Problem Solving and Innovation: Adopt a "can-do" approach to technical challenges related to AI integration. Coach team members on solving complex AI and data engineering problems.
Team Leadership and Development: Build and support your team through hiring, coaching, and mentoring. Foster a culture of continuous learning in AI and data technologies.
Code and Design Quality: Perform regular quality checks of both data engineering and AI-related code. Guide engineers on design patterns, emphasizing AI-specific considerations.
Data Interoperability and FAIR Principles: Lead initiatives to enhance data interoperability through rich metadata. Ensure all data solutions align with FAIR principles.
Knowledge Graph Development: Be responsible for the design, implementation, and maintenance of knowledge graphs using Neo4j. Integrate knowledge graphs with AI applications to enhance data context.

Essential Skills/Experience
Must have a B.Tech/M.Tech/MSc in Computer Science, Engineering, or a related field.
Experience leading data engineering teams to deliver robust and scalable data products, with a focus on preparing datasets for AI/ML use cases.
Deep expertise in the AWS data engineering ecosystem (SNS, SQS, Lambda, Glue, S3, EMR, log management, AWS containers, EC2, EBS, access control, data streaming, AWS CLI & SDK, backup & restore, etc.).
Excellent programming skills in Python or Java, including object-oriented programming, and proficiency with Airflow, Apache Spark, source control (Git), and versioning.
Extensive experience designing, building, and optimizing large-scale data pipelines, including ingestion, transformation, and orchestration using tools such as Airflow.
Familiarity with Snowflake tools and services.
Hands-on experience with metadata management and the application of controlled vocabularies and ontologies to ensure data interoperability and discoverability.
Working knowledge of vector databases and of implementing semantic search capabilities for unstructured and semi-structured datasets.
Strong understanding of data modelling concepts, SQL (including advanced SQL), and database design, especially for unstructured and semi-structured data (XML, JSON).
Experience designing data cataloguing, auto-ingestion, automated access control, lifecycle management, backup & restore, and other self-service data management features.
Exposure to software engineering CI/CD processes, including implementation of automated testing, build, release, deployment, containerization, and configuration management.
Experience using JIRA, Confluence, and other tools to manage Agile and SAFe project delivery.
Strong communication, teamwork, and mentoring skills, with the ability to build, coach, and guide high-performing data engineering teams focused on AI/ML objectives.

Desirable Skills/Experience
Demonstrated experience developing knowledge graphs (e.g., with Neo4j) and making data AI-ready for Retrieval-Augmented Generation (RAG) and Generative AI (GenAI) applications.

When we put unexpected teams in the same room, we fuel ambitious thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, you'll find yourself at the heart of innovation, where impactful work meets large-scale transformation. We connect across the business to influence patient outcomes positively while driving pioneering change towards becoming a digital enterprise. Collaborate with leading experts using innovative techniques to turn complex information into practical insights that improve lives globally. Our inclusive team grows with diversity, bringing together different functions to decode business needs effectively. Here is where you can raise your personal profile through publishing work or showcasing your achievements. Ready to make a difference? Apply now to join our dynamic team!
Date Posted: 27-Jun-2025
Closing Date: 20-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
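
As a hedged sketch of the knowledge-graph-plus-RAG pattern this listing describes: fetch graph context with Cypher, then assemble it into an LLM prompt. The Topic/Insight schema, connection details, and the generate() stub are illustrative assumptions, not AstraZeneca's implementation.

```python
# Hedged sketch of knowledge-graph-backed RAG retrieval: pull graph context
# for a question via Cypher, then assemble an LLM prompt. The Topic/Insight
# schema, credentials, and generate() stub are illustrative assumptions.
from neo4j import GraphDatabase

CONTEXT_QUERY = """
MATCH (t:Topic {name: $topic})<-[:ABOUT]-(i:Insight)
RETURN i.text AS text
ORDER BY i.updated DESC
LIMIT 5
"""

def build_prompt(question: str, topic: str) -> str:
    """Fetch the latest graph insights on a topic and wrap them in a prompt."""
    with GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password")) as driver:
        records, _, _ = driver.execute_query(CONTEXT_QUERY, topic=topic)
    context = "\n".join(r["text"] for r in records)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (OpenAI, Bedrock, etc.)."""
    raise NotImplementedError

if __name__ == "__main__":
    print(build_prompt("What changed in the competitive landscape?", "oncology"))
```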

Posted 1 month ago

Apply

0 years

7 - 8 Lacs

Chennai

On-site

Job Title: Director - Data & Analytics Engineer
GCL: F

Introduction to Role
As the Director of Data & Analytics Engineering, you'll be at the forefront of revolutionizing how competitive intelligence is created, shared, and consumed within AstraZeneca. Through Connected Insights, we are making our data Findable, Accessible, Interoperable, and Re-usable (FAIR). You'll architect solutions that ensure unstructured data is readily available for AI applications, using innovative technologies like vector databases and knowledge graphs. Your leadership will guide a team of dedicated engineers in developing scalable data solutions and database enhancements for large-scale products. Are you prepared to drive innovation and excellence in data engineering?

Accountabilities:
1. Data Platform Design and Implementation: Design and implement advanced data capabilities, including auto-ingestion, data cataloguing, automated access control, lifecycle management, backup & restore, and AI-ready data structures. Implement vector databases and knowledge graphs to support AI and machine learning initiatives.
2. AI-Focused Solution Architecture: Collaborate with AI Engineering leads and Architects to design AI-ready data architectures. Analyze data requirements for AI applications, modeling both structured and unstructured data sources.
3. ETL and Data Processing: Implement optimal ETL workflows using SQL, APIs, ETL tools, AWS big data technologies, and AI-specific techniques. Develop processes to prepare data for AI model training and inference.
4. AI Integration and Technical Leadership: Lead technical deliveries across multiple initiatives, focusing on integrating AI capabilities into existing data solutions. Provide technical feedback on design, architecture, and integration of AI-enhanced data sourcing platforms.
5. Collaboration, Teamwork, and Problem Solving: Liaise with technical infrastructure teams to resolve issues impacting AI application performance. Engage with architects, product owners, and business stakeholders to ensure efficient engineering of AI-driven data solutions.
6. Agile Project Management: Lead a dedicated Data pod, including managing backlogs, sprints, and planning. Collaborate with product pods to help them meet their deliveries.
7. Standards and Best Practices: Define data engineering and AI integration standards in collaboration with architects and AI Engineering leads. Establish standard processes for managing AI model versioning and data lineage.
8. Quality Assurance and Documentation: Test, document, and quality-assess new data and AI solutions. Implement robust testing frameworks for AI models and data pipelines.
9. Research and Development: Explore emerging AI technologies and drive their integration into existing data infrastructure.
10. Technical Problem Solving and Innovation: Adopt a "can-do" approach to technical challenges related to AI integration. Coach team members on solving complex AI and data engineering problems.
11. Team Leadership and Development: Build and support your team through hiring, coaching, and mentoring. Foster a culture of continuous learning in AI and data technologies.
12. Code and Design Quality: Perform regular quality checks of both data engineering and AI-related code. Guide engineers on design patterns, emphasizing AI-specific considerations.
13. Data Interoperability and FAIR Principles: Lead initiatives to enhance data interoperability through rich metadata. Ensure all data solutions align with FAIR principles.
14. Knowledge Graph Development: Be responsible for the design, implementation, and maintenance of knowledge graphs using Neo4j. Integrate knowledge graphs with AI applications to enhance data context.

Essential Skills/Experience:
Must have a B.Tech/M.Tech/MSc in Computer Science, Engineering, or a related field.
Experience leading data engineering teams to deliver robust and scalable data products, with a focus on preparing datasets for AI/ML use cases.
Deep expertise in the AWS data engineering ecosystem (SNS, SQS, Lambda, Glue, S3, EMR, log management, AWS containers, EC2, EBS, access control, data streaming, AWS CLI & SDK, backup & restore, etc.).
Excellent programming skills in Python or Java, including object-oriented programming, and proficiency with Airflow, Apache Spark, source control (Git), and versioning.
Extensive experience designing, building, and optimizing large-scale data pipelines, including ingestion, transformation, and orchestration using tools such as Airflow.
Familiarity with Snowflake tools and services.
Hands-on experience with metadata management and the application of controlled vocabularies and ontologies to ensure data interoperability and discoverability.
Working knowledge of vector databases and of implementing semantic search capabilities for unstructured and semi-structured datasets.
Strong understanding of data modelling concepts, SQL (including advanced SQL), and database design, especially for unstructured and semi-structured data (XML, JSON).
Experience designing data cataloguing, auto-ingestion, automated access control, lifecycle management, backup & restore, and other self-service data management features.
Exposure to software engineering CI/CD processes, including implementation of automated testing, build, release, deployment, containerization, and configuration management.
Experience using JIRA, Confluence, and other tools to manage Agile and SAFe project delivery.
Strong communication, teamwork, and mentoring skills, with the ability to build, coach, and guide high-performing data engineering teams focused on AI/ML objectives.

Desirable Skills/Experience:
Demonstrated experience developing knowledge graphs (e.g., with Neo4j) and making data AI-ready for Retrieval-Augmented Generation (RAG) and Generative AI (GenAI) applications.

When we put unexpected teams in the same room, we fuel ambitious thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, you'll find yourself at the heart of innovation, where impactful work meets large-scale transformation. We connect across the business to influence patient outcomes positively while driving pioneering change towards becoming a digital enterprise. Collaborate with leading experts using innovative techniques to turn complex information into practical insights that improve lives globally. Our inclusive team grows with diversity, bringing together different functions to decode business needs effectively. Here is where you can raise your personal profile through publishing work or showcasing your achievements. Ready to make a difference? Apply now to join our dynamic team!

Posted 1 month ago

Apply

6.0 - 8.0 years

25 - 27 Lacs

Chennai

Work from Office

Experience: 6 - 8 years.

Skills:
A strong level of proficiency in Python programming.
Practical knowledge of and working experience with statistics and operations research methods.
Practical knowledge of and working experience with tools and frameworks like Flask, PySpark, PyTorch, TensorFlow, Keras, Databricks, OpenCV, Pillow/PIL, Streamlit, D3.js, Dash/Plotly, and Neo4j.
Hands-on experience with Analytics/AI-ML AWS services like SageMaker, Canvas, and Bedrock.
Good understanding of how to apply predictive and machine learning techniques like regression models, XGBoost, random forest, GBM, neural nets, SVM, etc.
Proficient with NLP techniques like RNN, LSTM, and attention-based models, and able to effectively handle readily available Stanford, IBM, Azure, and OpenAI NLP models.
Good understanding of SQL from the perspective of writing efficient queries for pulling data from the database.
Hands-on experience with a version control tool (GitHub, Bitbucket).
Experience deploying ML models into a production environment (MLOps) on a cloud platform such as Azure or AWS.
Understanding business needs and mapping them to business processes.
Hands-on experience in agile project delivery.
Good at conceptualizing and visualizing end-to-end business needs, both at a high level and in detail.
Good at articulating business needs.
Good analytical and problem-solving skills.
Good communication, listening, and probing skills.
Strong interpersonal skills; should collaborate with other team members and work as a team.

Job Description:
Comprehend business issues and propose valuable business solutions.
Design statistical or AI/deep learning models to address business issues.
Design statistical/ML/DL models and deploy them for production.
Determine what information is accessible, from where, and how to augment it.
Develop innovative graphs for data comprehension using D3.js, Dash/Plotly, and Neo4j.

Preferred Certification (good to have): AWS Specialty Certification in Data Analytics, Machine Learning
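
One of the techniques the skills list names, XGBoost regression, shown as a minimal self-contained example on synthetic data; the data and hyperparameters are invented for illustration.

```python
# Minimal sketch of one listed technique: XGBoost regression. The synthetic
# data and hyperparameters are invented for illustration.
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))
y = 3.0 * X[:, 0] - X[:, 3] ** 2 + rng.normal(scale=0.1, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.3f}")
```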

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Main skills:
Strong hands-on experience with Neo4j, the Cypher language, and graph data science.
Strong knowledge of key ML/DL algorithms like random forests, XGBoost, SVM, neural networks, etc.
Strong knowledge of machine learning, deep learning, NLP, computer vision, and Generative AI frameworks is preferred.
Experience with AWS.
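
Neo4j graph data science is typically driven through the GDS library; below is a hedged sketch using the official graphdatascience Python client. The connection details and the Person/KNOWS projection are assumptions, not details from the listing.

```python
# Hedged sketch: Neo4j Graph Data Science via the official Python client.
# Connection details and the Person/KNOWS projection are assumptions.
from graphdatascience import GraphDataScience

gds = GraphDataScience("bolt://localhost:7687", auth=("neo4j", "password"))

# Project an in-memory graph of Person nodes and KNOWS relationships.
G, _ = gds.graph.project("people", "Person", "KNOWS")

# Stream PageRank scores back as a pandas DataFrame.
scores = gds.pageRank.stream(G)
print(scores.sort_values("score", ascending=False).head())

G.drop()     # free the in-memory projection
gds.close()
```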

Posted 1 month ago

Apply

6.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Hello Visionary!

We empower our people to stay resilient and relevant in a constantly evolving world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team.

We are looking for a Semantic Web ETL Developer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange: discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore.

You'll make a difference by:
- Implementing innovative products and solution development processes and tools by applying your expertise in the field of responsibility.

Job requirements:
- International experience with global projects and collaboration with intercultural teams is preferred.
- 6 - 8 years of experience developing software solutions with the Python language.
- Experience in research and development processes (software-based solutions and products), in commercial topics, and in implementation of strategies and POCs.
- Manage end-to-end development of web applications and knowledge graph projects, ensuring best practices and high code quality.
- Provide technical guidance and mentorship to junior developers, fostering their growth and development.
- Design scalable and efficient architectures for web applications, knowledge graphs, and database models.
- Carry out code standards and perform code reviews, ensuring alignment with standard methodologies like PEP8, DRY, and SOLID principles.
- Collaborate with frontend developers, DevOps teams, and database administrators to deliver cohesive solutions.
- Strong, expert-level proficiency in Python web frameworks (Django, Flask, FastAPI) and knowledge graph libraries.
- Experience designing and developing complex RESTful APIs and microservices architectures.
- Strong understanding of security standard processes in web applications (e.g., authentication, authorization, and data protection).
- Extensive experience building and querying knowledge graphs using Python libraries like RDFLib, Py2neo, or similar.
- Proficiency in SPARQL for advanced graph data querying.
- Experience with graph databases like Neo4j, GraphDB, Blazegraph, or AWS Neptune.
- Experience in expert functions like software development/architecture and software testing (unit testing, integration testing).
- Excellent grasp of DevOps practices, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Excellent grasp of cloud technologies and architecture; should have exposure to S3, EKS, ECR, and AWS Neptune.
- Exposure to and working experience in the relevant Siemens sector domain (Industry, Energy, Healthcare, Infrastructure and Cities) required.

Leadership qualities:
- Visionary leadership: the ability to lead the team towards long-term technical goals while managing immediate priorities.
- Strong communication: good interpersonal skills to work effectively with both technical and non-technical stakeholders.
- Mentorship & coaching: foster a culture of continuous learning, skill development, and collaboration within the team.
- Conflict resolution: the ability to manage team conflicts and provide constructive feedback to improve team dynamics.

Create a better #TomorrowWithUs!

This role is based in Bangalore, where you'll get the chance to work with teams impacting entire cities, countries, and the craft of things to come. We're Siemens: a collection of over 312,000 minds building the future, one day at a time, in over 200 countries. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us craft tomorrow. At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse digital minds to develop tomorrow's reality. Find out more about the digital world of Siemens here: /digitalminds (http:///digitalminds)

Posted 1 month ago

Apply

7.0 - 12.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Key Responsibilities
Lead the overall architecture and development of the Internal Developer Portal
Design and implement the core blueprint system and graph-based data model
Mentor team members and coordinate development efforts
Ensure code quality, performance, and security best practices
Communicate with stakeholders and manage project timelines

Required Skills
8 years of experience with TypeScript/JavaScript development
5 years of experience with React and modern frontend development
Strong experience with NestJS or similar Node.js frameworks
Expert-level knowledge of GraphQL API design and implementation
Experience with graph databases (Neo4j)
Experience with microservices architecture and event-driven systems
Experience with Kafka or similar message brokers
Strong system design and architectural skills
3 years of experience leading development teams
Experience with developer portals, internal platforms, or similar tools
Knowledge of Kubernetes and cloud-native technologies
Experience with CI/CD systems and DevOps practices

Desired Skills
Familiarity with Restate or similar workflow engines
Experience with durable execution patterns
Knowledge of enterprise security patterns
Experience with high-performance data visualization

Posted 1 month ago

Apply

12.0 - 17.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Responsibilities:
Architect and implement scalable, secure, and high-performance full-stack solutions using Python, React, and modern web technologies.
Design and develop RESTful and GraphQL APIs with a focus on modularity and reusability.
Integrate and optimize Neo4j graph databases for complex relationship-based data models.
Define and implement data fabric strategies to unify and virtualize data access across systems.
Collaborate with UX and data teams to design intuitive front-end interfaces that interact seamlessly with the data layer.
Deploy and manage containerized applications using Kubernetes and Docker.
Implement caching and real-time data strategies using Redis.
Create and maintain comprehensive architecture documentation, including use case definitions, UML sequence diagrams, component and deployment diagrams, and data flow and integration models.
Lead architectural reviews and ensure alignment with enterprise standards.
Mentor development teams and provide technical leadership across projects.

Required Skills & Experience:
Strong proficiency in Python (Flask, FastAPI, Django) and JavaScript/TypeScript with React.
Expertise in API design, including REST and GraphQL.
Hands-on experience with Neo4j and Cypher.
Solid understanding of Kubernetes, Docker, and microservices architecture.
Experience with Redis for caching and pub/sub patterns.
Familiarity with data fabric concepts and metadata-driven design.
Proficiency in architecture documentation, including UML modeling (sequence, class, component diagrams), use case and system interaction documentation, and technical specifications and design blueprints.
Strong communication and collaboration skills.

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Pune

Work from Office

To be successful in this role, you should meet the following requirements (must have):
Payments and banking experience is a must.
Experience implementing and monitoring data governance using standard methodology throughout the data life cycle within a large organisation.
Up-to-date knowledge of data governance theory, standard methodology, and the practical considerations.
Knowledge of data governance industry standards and tools.
Overall experience of 10+ years in data governance, encompassing data quality management, master data management, data privacy & compliance, data cataloguing and metadata management, data security, maturity, and lineage.
Prior experience implementing an end-to-end data governance framework.
Experience automating data cataloguing, ensuring accurate, consistent metadata and making data easily discoverable and usable.
Domain experience across the payments and banking lifecycle.
An analytical mind and an inclination for problem-solving, with attention to detail.
Ability to effectively navigate and deliver transformation programmes in large global financial organisations, amidst the challenges posed by bureaucracy, globally distributed teams, and local data regulations.
Strong communication skills, coupled with the ability to present complex information and data.
A first-class degree in Engineering or a relevant field, with 2 or more of the following subjects as a major: Mathematics, Computer Science, Statistics, Economics.

The successful candidate will also meet the following requirements (good to have):
Database types: Relational, NoSQL, DocumentDB
Databases: Oracle, PostgreSQL, BigQuery, Big Table, MongoDB, Neo4j
Experience in conceptual/logical/physical data modeling.
Experience in Agile methodology and leading agile delivery, aligned with organisational needs.
An effective leader as well as a team player, with a strong commitment to quality and efficiency.

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 16 Lacs

Pune

Work from Office

To be successful in this role, you should meet the following requirements (Must have):
Expertise in conceptual/logical/physical data modeling.
Payments and banking experience is a must.
Database design across database types: Relational, NoSQL, DocumentDB. Databases: Oracle, PostgreSQL, BigQuery, Big Table, MongoDB, Neo4j. Tools: Erwin, Visual Paradigm.
Solid experience in PL/SQL, Python, Unix shell scripting, and Java.
Domain experience across the payments and banking lifecycle.
An analytical mind and an inclination for problem-solving, with attention to detail.
Sound knowledge of payments workflows and statuses across various systems within a large global bank.
Experience in collecting large data sets and identifying patterns and trends in them.
Overall experience of 10+ years, with considerable experience in Big Data and relational databases.
Prior experience across requirements gathering, build and implementation, stakeholder co-ordination, release management, and production support.
Ability to effectively navigate and deliver transformation programmes in large global financial organisations, amidst the challenges posed by bureaucracy, globally distributed teams, and local data regulations.
Strong communication skills, coupled with the ability to present complex information and data.
A first-class degree in Engineering or a relevant field, with two or more of the following subjects as a major: Mathematics, Computer Science, Statistics, Economics.

The successful candidate will also meet the following requirements (Good to have):
Understanding of DevOps and CI tools (Jenkins, Git, Grunt, Bamboo, Artifactory) would be an added advantage.
Experience in Agile methodology and leading agile delivery, aligned with organisational needs.
An effective leader as well as a team player, with a strong commitment to quality and efficiency.
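For the conceptual/logical/physical modeling expertise this role centres on, a physical model of a payments workflow commonly separates a payment from its status history so every state transition stays queryable and auditable. A minimal sketch in SQLAlchemy, with an entirely hypothetical and deliberately simplified schema:

```python
# Illustrative sketch only: the payment/status schema is hypothetical and far
# simpler than anything a real bank would run.
from sqlalchemy import Column, DateTime, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Payment(Base):
    __tablename__ = "payment"
    id = Column(Integer, primary_key=True)
    amount = Column(Numeric(18, 2), nullable=False)
    currency = Column(String(3), nullable=False)
    created_at = Column(DateTime, nullable=False)
    statuses = relationship("PaymentStatus", back_populates="payment")

class PaymentStatus(Base):
    __tablename__ = "payment_status"
    id = Column(Integer, primary_key=True)
    payment_id = Column(Integer, ForeignKey("payment.id"), nullable=False)
    status = Column(String(20), nullable=False)  # e.g. RECEIVED, CLEARED, SETTLED
    recorded_at = Column(DateTime, nullable=False)
    payment = relationship("Payment", back_populates="statuses")
```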

Posted 1 month ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Key Responsibilities:
Design and implement the backend services and APIs.
Develop integration frameworks for external systems.
Implement workflow engine and automation capabilities using Restate.
Create an event bus and webhook system with Kafka.
Ensure system security, performance, and reliability.

Required Skills:
6 years of experience with TypeScript/Node.js backend development.
Strong experience with NestJS or similar backend frameworks.
Experience with database design and ORM frameworks.
Expert knowledge of API design (REST, GraphQL).
Experience with Apache Kafka and event-driven architectures.
Experience with Restate, or willingness to quickly become proficient.
Strong understanding of authentication and authorization systems.
Extensive experience with integration patterns and API development.
Experience with Neo4j or other graph databases.
Experience with cloud provider APIs (AWS, GCP, Azure).

Desired Skills:
Experience with CQRS and event sourcing patterns.
Knowledge of BPM (Business Process Management) systems.
Experience with service mesh technologies.
Understanding of compliance and governance frameworks.
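The event bus and webhook system described above usually pairs a producer that publishes domain events with a consumer group that relays each event to subscriber endpoints. The posting itself calls for TypeScript/NestJS; purely to illustrate the shape of the pattern, here is a minimal sketch with kafka-python, where the topic name and webhook URL are hypothetical:

```python
# Illustrative sketch only: topic names and the webhook URL are hypothetical.
# The posting itself calls for TypeScript/NestJS; this just shows the pattern.
import json

import requests
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish(event_type: str, payload: dict) -> None:
    """Publish a domain event onto the shared event bus topic."""
    producer.send("domain-events", {"type": event_type, "payload": payload})
    producer.flush()

def relay_to_webhooks(subscriber_url: str) -> None:
    """Consume events and forward each one to a subscriber's webhook."""
    consumer = KafkaConsumer(
        "domain-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        group_id="webhook-relay",
    )
    for message in consumer:
        requests.post(subscriber_url, json=message.value, timeout=5)
```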

Posted 1 month ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Data Modeller

JD: We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices.

Data Modelling:
Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards.
Create conceptual, logical, and physical data models that support the bank's strategic objectives.
Ensure data models are optimized for performance, security, and scalability to support business operations and analytics.

Collaboration with Data Architect:
Work closely with the Data Architect to establish the overall data architecture strategy and framework.
Contribute to the definition of data model structures within a data mesh environment.

Data Quality and Governance:
Ensure data quality and integrity in the data models by implementing best practices in data governance.
Assist in the establishment of data management policies and standards.
Conduct regular data audits and reviews to ensure data accuracy and consistency across systems.

Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools.
Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph).
Data Warehousing Technologies: Snowflake, Teradata, or similar.
ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar.
Big Data Technologies: Hadoop, Spark (optional but preferred).
Cloud Technologies: Experience with data modelling on cloud platforms, e.g., Microsoft Azure (Synapse, Data Factory).
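In a graph model of the kind this JD lists under Neo4j, CASA accounts and loans naturally hang off a shared Customer node, which turns exposure questions into one-hop traversals. A minimal sketch via the Python driver, with a hypothetical and heavily simplified schema:

```python
# Illustrative sketch only: the Customer/Account/Loan graph is a hypothetical
# simplification of a CASA and lending model.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

SETUP = """
MERGE (c:Customer {id: $customer_id, name: $name})
MERGE (a:Account {id: $account_id, type: 'CASA'})
MERGE (l:Loan {id: $loan_id, product: 'HOME_LOAN'})
MERGE (c)-[:HOLDS]->(a)
MERGE (c)-[:BORROWED]->(l)
"""

EXPOSURE = """
MATCH (c:Customer {id: $customer_id})-[:BORROWED]->(l:Loan)
RETURN c.name AS customer, count(l) AS open_loans
"""

with driver.session() as session:
    session.run(SETUP, customer_id="C1", name="Asha", account_id="A1", loan_id="L1")
    for record in session.run(EXPOSURE, customer_id="C1"):
        print(record["customer"], record["open_loans"])
driver.close()
```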

Posted 1 month ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must have skills: Neo4j, Stardog
Good to have skills: Java
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Mentor junior team members to enhance their skills and knowledge in data engineering.
Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
Must Have Skills: Proficiency in Neo4j.
Good To Have Skills: Experience with Java.
Strong understanding of data modeling and graph database concepts.
Experience with data integration tools and ETL processes.
Familiarity with data quality frameworks and best practices.
Proficient in programming languages such as Python or Scala for data manipulation.

Additional Information:
The candidate should have a minimum of 5 years of experience in Neo4j.
This position is based at our Bengaluru office.
A 15 years full time education is required.

Qualification: 15 years full time education
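An ETL load into Neo4j of the sort this role describes commonly batches rows through UNWIND ... MERGE so that reruns are idempotent, with a data-quality gate before the load step. A minimal sketch, where the file name, node labels, and quality rule are illustrative only:

```python
# Illustrative sketch only: file name, node labels, and the quality rule are
# hypothetical examples of an ETL load into Neo4j.
import csv

from neo4j import GraphDatabase

LOAD = """
UNWIND $rows AS row
MERGE (d:Device {id: row.id})
SET d.model = row.model, d.site = row.site
"""

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[dict]:
    """Basic data-quality gate: drop rows missing a primary identifier."""
    return [r for r in rows if r.get("id")]

def load(rows: list[dict], batch_size: int = 1000) -> None:
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    with driver.session() as session:
        for i in range(0, len(rows), batch_size):
            session.run(LOAD, rows=rows[i:i + batch_size])
    driver.close()

if __name__ == "__main__":
    load(transform(extract("devices.csv")))
```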

Posted 1 month ago

Apply