
76 Graph Databases Jobs - Page 3

Set up a job alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

5.0 - 7.0 years

4 - 8 Lacs

Chennai

Work from Office

Overview
Angular developer with 5 to 7 years of experience; knowledge of Java would be a plus.

Responsibilities
- Design and develop UIs in technologies such as Angular/React, HTML, and CSS, and lead from an engineering point of view.
- Understanding of and hands-on experience with Jasmine, Karma, and Istanbul tools.
- Design and build next-generation UI visualizers in 2D and 3D, with backend integration experience.
- Build reusable code and libraries for future use.
- Understand the metrics behind different test strategies for UI components.
- Analyze problems and propose high-quality solutions; familiarity with browser testing and debugging.
- Work closely with functional designers, UX designers, and other tech leads on the overall end-to-end application strategy.
- Good understanding of the telecom OSS domain.
- Experience working with graph databases (Neo4j/OrientDB).
- Ability to deep-dive into technical areas and get the best outcome in technically challenging situations.
- Good hands-on experience, with the ability to mentor technical teams.
- Understand the needs of multiple projects and communicate them divisionally and/or cross-divisionally.
- Adopt new and emerging technologies to provide solutions that meet challenging needs.
- Debug complex issues and provide the right solutions.
- Drive and validate technical and functional designs and lead them to implementation; may involve liaising with internal, external, and third-party suppliers.

Essentials
- Implemented apps on various platforms such as Angular, React, JavaScript, HTML, and CSS, including rapid prototyping.
- Proficient understanding of client-side scripting and JavaScript frameworks, including jQuery.
- Good understanding of asynchronous request handling, partial page updates, and AJAX.
- Hands-on coding ability and strong analytical skills to troubleshoot and provide technological solutions using UI design patterns, Oracle, PL/SQL, WebLogic, and JavaScript.
- In-depth understanding of the entire web development process (design, development, and deployment).
- Experience working in an Agile/Scrum development process.
- Proficient with code versioning tools such as Git and SVN; experience building CI/CD pipelines for UI projects.
- Good telecom OSS knowledge in areas such as planning, inventory management, capacity management, orchestration, and activation.
- Working knowledge of application performance tuning and continuous integration techniques.
- Effective verbal and written communication skills.
- Knowledge and implementation of service design patterns.

Desirables
- Experience working with large telco service providers is a plus.
- Telecom and industry certifications.
- Past experience with a graph database (any flavor) and traversal-related use cases is an added advantage.
- Experience working with geographically dispersed teams.

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad

Work from Office

What you will do

Role Description:
We are seeking a Senior Data Engineer with expertise in graph data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience in the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience; or
- Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile (SAFe) certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly; organized and detail-oriented.
- Strong presentation and public speaking skills.
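For candidates brushing up, the link-analysis and traversal skills this posting asks for can be illustrated with a minimal, dependency-free sketch. All entity names and the fraud-style example below are invented; a production pipeline in this role would land data via Spark/Databricks and traverse it in a graph database such as Neo4j, not plain Python.

```python
from collections import defaultdict, deque

# Toy records standing in for rows landed in a Delta table.
edges = [
    ("acct_1", "OWNS", "device_9"),
    ("acct_2", "OWNS", "device_9"),
    ("acct_2", "PAID", "merchant_5"),
]

def build_adjacency(triples):
    """Index (source, relation, target) triples for traversal."""
    adj = defaultdict(list)
    for src, _rel, dst in triples:
        adj[src].append(dst)
        adj[dst].append(src)  # undirected view, as is common for link analysis
    return adj

def connected(adj, start, goal):
    """Breadth-first search: are two entities linked through any path?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

adj = build_adjacency(edges)
print(connected(adj, "acct_1", "merchant_5"))  # linked through the shared device
```

The same reachability question is what a graph database answers natively with a variable-length path query, which is why traversal algorithms appear in the must-have list.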

Posted 1 month ago

Apply

1.0 - 4.0 years

2 - 6 Lacs

Mumbai, Pune, Chennai

Work from Office

Graph Data Engineer required for a complex supply chain project.

Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and a graph query language (Cypher); exposure to various graph data modelling techniques.
- Experience with Neo4j Aura and optimizing complex queries.
- Experience with GCP stacks such as BigQuery, GCS, and Dataproc.
- Experience in PySpark and Spark SQL is desirable.
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI.

The expertise you have:
- Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science).
- Demonstrable experience implementing data solutions in the graph database space.
- Hands-on experience with graph databases (Neo4j preferred, or any other).
- Experience tuning graph databases.
- Understanding of graph data model paradigms (LPG, RDF) and graph query languages; hands-on experience with Cypher is required.
- Solid understanding of graph data modelling, graph schema development, and graph data design.
- Relational database experience; hands-on SQL experience is required.

Desirable (optional) skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies.
- Understanding of developing highly scalable distributed systems using open-source technologies.
- Experience with supply chain data is desirable but not essential.

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
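As a rough illustration of the LPG (labeled property graph) modelling this posting asks about, here is a plain-Python stand-in: nodes carry a label and a property map, relationships carry a type. The supplier/part example and all names are invented; real work here would use Neo4j/Cypher (e.g. via Aura) rather than dictionaries.

```python
# LPG-style model: labeled nodes with properties, typed relationships.
# A stand-in for what Cypher would express as:
#   MATCH (s:Supplier)-[:SUPPLIES]->(p:Part) WHERE p.critical RETURN s.name
nodes = {
    "s1": {"label": "Supplier", "props": {"name": "Acme"}},
    "s2": {"label": "Supplier", "props": {"name": "Globex"}},
    "p1": {"label": "Part", "props": {"sku": "P-100", "critical": True}},
    "p2": {"label": "Part", "props": {"sku": "P-200", "critical": False}},
}
rels = [("s1", "SUPPLIES", "p1"), ("s2", "SUPPLIES", "p2")]

def suppliers_of_critical_parts(nodes, rels):
    """Walk SUPPLIES relationships and filter on a target-node property."""
    out = []
    for src, rtype, dst in rels:
        if (rtype == "SUPPLIES"
                and nodes[dst]["label"] == "Part"
                and nodes[dst]["props"].get("critical")):
            out.append(nodes[src]["props"]["name"])
    return out

print(suppliers_of_critical_parts(nodes, rels))
```

The contrast with RDF (also named in the posting) is that RDF flattens everything, properties included, into subject-predicate-object triples, whereas the LPG keeps properties attached to nodes and relationships.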

Posted 1 month ago

Apply

10.0 - 15.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Hiring an expert in Semantic Web (RDF, OWL, SPARQL), API-led architecture (REST/SOAP), graph and NoSQL databases, Java/Python, web technologies (HTML, XML), UML, CI/CD, and cloud (AWS/Azure). Experience with Neo4j, Amazon Neptune, and microservices is a plus.

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Software Architect
Bengaluru, India

Get to know Okta
Okta is The World's Identity Company. We free everyone to safely use any technology anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we're looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We're building a world where Identity belongs to you.

Title: Architect, Meta-Directory and LCM Groups

Company Description:
Okta is the leading independent provider of enterprise identity. The Okta Identity Cloud enables organizations to securely connect the right people to the right technologies at the right time. With over 6,500 pre-built integrations to applications and infrastructure providers, Okta customers can easily and securely use the best technologies for their business. Over 7,950 organizations, including 20th Century Fox, JetBlue, Nordstrom, Slack, Teach for America, and Twilio, trust Okta to help protect the identities of their workforces and customers.

Position Description:
The Architect of the Meta-Directory and Lifecycle Management (LCM) Engineering Groups will drive the technology vision for the software engineering organization responsible for building a platform that provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. The platform runs in the cloud on a secure, reliable, extensively audited infrastructure and integrates deeply with on-premises applications, directories, and identity management systems. The Meta-Directory and LCM Engineering Groups are responsible for highly impactful customer-oriented products and solutions. The products will be built with ease of use in mind and allow customers to solve hard problems including delegated auth, SSO, ETL, and identity management via SCIM. You have experience developing enterprise-grade software in an object-oriented language, and experience or knowledge in security, authorization, or identity. You will bring out the best in each engineer, hire and develop talent, and ensure the highest quality of software.

Job Duties and Responsibilities:
- Drive the execution of the vision for Meta-Directory and LCM efforts by partnering closely with Product Management and senior Architects.
- Work with product and engineering teams to scope and plan engineering efforts.
- Identify, define, and refine engineering design processes to streamline product delivery and ensure quality.
- Facilitate project planning and execution within your teams, and across other teams, to ensure prompt delivery.
- Participate in role-specific engineering rotations geared towards supporting the live operation of Okta's SaaS.
- Handle customer escalations directly and indirectly through Support.
- Participate in architecture reviews and discussions.
- Collaborate effectively with a matrixed organization of QA, Documentation, Product Management, and UX teams.

Required knowledge, skills, and abilities:
- 10 or more years in roles of progressively increasing design and architecture responsibility in software engineering, with a strong background in software development.
- Prior experience in identity, authentication, authorization, entitlements, or security.
- Experience working in Agile software development organizations leveraging continuous integration and deployment practices.
- Experience working on low-latency, highly scalable, multi-tenant, mission-critical systems and large-scale (multi-continent) SaaS applications.

Desirable knowledge, skills, and abilities:
- Experience building large-scale enterprise software or SaaS products.
- Understanding of Identity and Access Management protocols and technologies (OIDC, SAML, XACML, SCIM, OAuth, Federation, etc.).
- Understanding of front-end and server-side technologies and databases (Java, Spring, React, Node.js, GraphQL, MySQL, graph databases, etc.).
- Understanding of and appreciation for microservices, test-driven development, continuous improvement of systems, and addressing technical debt.
- An engineer or technologist at heart.
- Experience in RESTful API design.

Education and training:
- B.S. in Computer Science or a related field (MS, MBA, or PhD preferred).

#LI-Hybrid

What you can look forward to as a full-time Okta employee: amazing benefits, making social impact, and developing talent and fostering connection and community at Okta. Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work, so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/ Some roles may require travel to one of our office locations for in-person onboarding.

Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/ .

Posted 1 month ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Mumbai

Work from Office

Job Information:
- Job Opening ID: ZR_1963_JOB
- Date Opened: 17/05/2023
- Industry: Technology
- Job Type / Work Experience: 5-8 years
- Job Title: Neo4j GraphDB Developer
- City: Mumbai
- Province: Maharashtra
- Country: India
- Postal Code: 400001
- Number of Positions: 5

Graph Data Engineer required for a complex supply chain project.

Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and a graph query language (Cypher); exposure to various graph data modelling techniques.
- Experience with Neo4j Aura and optimizing complex queries.
- Experience with GCP stacks such as BigQuery, GCS, and Dataproc.
- Experience in PySpark and Spark SQL is desirable.
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI.

The expertise you have:
- Bachelor's or Master's degree in a technology-related field (e.g. Engineering, Computer Science).
- Demonstrable experience implementing data solutions in the graph database space.
- Hands-on experience with graph databases (Neo4j preferred, or any other).
- Experience tuning graph databases.
- Understanding of graph data model paradigms (LPG, RDF) and graph query languages; hands-on experience with Cypher is required.
- Solid understanding of graph data modelling, graph schema development, and graph data design.
- Relational database experience; hands-on SQL experience is required.

Desirable (optional) skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies.
- Understanding of developing highly scalable distributed systems using open-source technologies.
- Experience with supply chain data is desirable but not essential.

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

India

Work from Office

- Proficient in Python programming.
- Experience with Neo4j for graph database management and querying.
- Knowledge of cloud platforms including AWS, Azure, and GCP.
- Familiarity with Postgres and ClickHouse for database management and optimization.
- Understanding of serverless architecture for building and deploying applications.
- Experience with Docker for containerization and deployment.
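A small sketch of how the Python and Neo4j requirements above typically meet: building a parameterised Cypher statement instead of concatenating values into the query text. The `merge_node` helper and its names are invented for illustration; the real code would hand the query and parameters to the official `neo4j` Python driver (`session.run(query, **params)`), which needs a live database and is not assumed here.

```python
def merge_node(label, key, props):
    """Build a parameterised Cypher MERGE for one node, keyed on a unique property.

    Values travel as parameters, never spliced into the query string, so the
    database can cache the plan and injection via property values is avoided.
    """
    # Labels and property keys cannot be parameterised in Cypher, so validate
    # them before interpolating into the query text.
    assert label.isidentifier() and key.isidentifier()
    query = f"MERGE (n:{label} {{{key}: ${key}}}) SET n += $props"
    params = {key: props[key], "props": props}
    return query, params

query, params = merge_node("Service", "id", {"id": "svc-1", "region": "eu"})
print(query)  # MERGE (n:Service {id: $id}) SET n += $props
```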

Posted 1 month ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Pune

Work from Office

Responsibilities:
- Create and optimize complex SPARQL (SPARQL Protocol and RDF Query Language) queries to retrieve and analyse data from graph databases.
- Develop graph-based applications and models to solve real-world problems and extract valuable insights from data.
- Design, develop, and maintain scalable data pipelines using Python and REST APIs to get data from different cloud platforms.
- Study and understand the nodes, edges, and properties in graphs used to represent and store data in relational databases.

Qualifications:
- Strong proficiency in SPARQL and RDF query languages, Python, and REST APIs.
- Experience with database technologies: SQL and SPARQL.

Preferred Skills:
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Experience with version control systems like GitHub.
- Understanding of environments, deployment processes, and cloud infrastructure.
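To make the SPARQL requirement above concrete, here is a minimal plain-Python stand-in for a single SPARQL basic graph pattern over RDF-style triples. The data and the `ex:` names are invented; a real pipeline in this role would run actual SPARQL against a triple store or HTTP endpoint rather than matching in Python.

```python
# RDF-style data: (subject, predicate, object) triples.
triples = [
    ("ex:alice", "ex:worksFor", "ex:acme"),
    ("ex:bob", "ex:worksFor", "ex:acme"),
    ("ex:acme", "ex:locatedIn", "ex:pune"),
]

def match(pattern, triples):
    """Match one (s, p, o) pattern; terms starting with '?' are variables."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value      # bind the variable to this value
            elif term != value:
                binding = None             # constant term mismatch: no match
                break
        if binding is not None:
            results.append(binding)
    return results

# Analogous to: SELECT ?who WHERE { ?who ex:worksFor ex:acme }
print(match(("?who", "ex:worksFor", "ex:acme"), triples))
```

A full SPARQL engine joins the bindings of several such patterns; this sketch shows only the single-pattern step.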

Posted 2 months ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Graph Databases
Good-to-have skills: NA
Minimum 12 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop innovative solutions and contribute to key decisions.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the application development process.
- Implement best practices for application design.
- Ensure applications meet business requirements.

Professional & Technical Skills:
- Must-have skills: proficiency in Graph Databases.
- Strong understanding of data modeling.
- Experience in application development frameworks.
- Knowledge of database management systems.
- Hands-on experience in application testing.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Graph Databases.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 2 months ago

Apply

6.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Responsibilities:
- Azure Data Factory: design ETL processes and create complex pipelines
- SQL / T-SQL
- Python
- XML, JSON, Excel, CSV
- Databricks, Data Lake, Synapse
- Azure DLS Gen 2
- Azure Data Factory working with graph databases

Posted 2 months ago

Apply

8.0 - 12.0 years

35 - 50 Lacs

Chennai

Remote

Experience with modern data warehouses (Snowflake, BigQuery, Redshift) and graph databases. Experience designing and building efficient data pipelines for the ingestion and transformation of data into a data warehouse. Proficiency in Python, dbt, Git, SQL, AWS, and Snowflake.

Posted 2 months ago

Apply

3.0 - 5.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% of quality assurance parameters.

- Machine Learning & Deep Learning: strong understanding of LLM architectures, transformers, and fine-tuning techniques.
- MLOps & DevOps: experience with CI/CD pipelines, model deployment, and monitoring.
- Vector Databases: knowledge of storing and retrieving embeddings efficiently.
- Prompt Engineering: ability to craft effective prompts for optimal model responses.
- Retrieval-Augmented Generation (RAG): implementing techniques to enhance LLM outputs with external knowledge.
- Cloud Platforms: familiarity with AWS, Azure, or GCP for scalable deployments.
- Containerization & Orchestration: using Docker and Kubernetes for model deployment.
- Observability & Monitoring: tracking model performance, latency, and drift.
- Security & Ethics: ensuring responsible AI practices and data privacy.
- Programming Skills: strong proficiency in Python, SQL, and API development.
- Knowledge of Open-Source LLMs: familiarity with models like LLaMA, Falcon, and Mistral.
- Fine-Tuning & Optimization: experience with LoRA, quantization, and efficient training techniques.
- LLM Frameworks: hands-on experience with Hugging Face, LangChain, or OpenAI APIs.
- Data Engineering: understanding of ETL pipelines and data preprocessing.
- Microservices Architecture: ability to design scalable AI-powered applications.
- Explainability & Interpretability: techniques for understanding and debugging LLM outputs.
- Graph Databases: knowledge of Neo4j or similar technologies for complex data relationships.
- Collaboration & Communication: ability to work with cross-functional teams and explain technical concepts clearly.

Deliverables:
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & Reporting: 100% on-time MIS and report generation.

Mandatory Skills: LLM Ops
Experience: 3-5 Years
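The retrieval step of the RAG requirement listed above can be sketched in a few lines: embed the query, score stored snippets by cosine similarity, and return the best match to prepend to the LLM prompt. The hand-made 3-dimensional "embeddings" and document names below are invented; a real system would use a learned embedding model and a vector database.

```python
import math

# Toy corpus: document name -> hand-made embedding vector.
docs = {
    "reset flow": [0.9, 0.1, 0.0],
    "billing faq": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, docs):
    """Pick the document whose embedding is most similar to the query."""
    return max(docs, key=lambda name: cosine(query_vec, docs[name]))

print(retrieve([0.8, 0.2, 0.1], docs))  # closest to the "reset flow" vector
```

Everything after retrieval (prompt assembly, generation, grounding checks) builds on this one similarity-ranking step.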

Posted 2 months ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Node.js, TypeScript, JavaScript, GraphQL, NoSQL, graph databases, ReactJS, AngularJS, end-to-end API and database building

Posted 2 months ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Mumbai

Work from Office

About the Role:
As a Symfony Developer, you will be an integral part of our backend development team, responsible for building and maintaining robust, scalable web applications. You will work on various aspects of our platform, focusing on creating efficient, high-quality code and implementing features that support the company's innovative building management solutions. Your role will involve close collaboration with front-end developers, product managers, and other stakeholders to ensure seamless integration and optimal performance of the platform.

Key Responsibilities:

Backend Development:
- Design, develop, and maintain scalable web applications using Symfony.
- Write clean, efficient, and well-documented code that adheres to industry best practices.
- Implement new features and enhance existing functionality based on business requirements.

API Development and Integration:
- Develop and maintain RESTful APIs for communication between the frontend and backend.
- Integrate third-party services and APIs to extend the functionality of our platform.
- Ensure secure and efficient data handling and storage across the system.

Database Management:
- Design and optimize database schemas using MySQL or PostgreSQL.
- Write and optimize complex SQL queries for data retrieval and reporting.
- Ensure data integrity and performance through effective database management practices.

Performance Optimization:
- Identify and resolve performance bottlenecks in the application to ensure optimal speed and scalability.
- Implement caching strategies and other techniques to improve application performance.
- Conduct code reviews and refactor existing code to improve maintainability and performance.

Collaboration and Communication:
- Work closely with front-end developers to ensure seamless integration of front-end and back-end components.
- Collaborate with product managers, field engineers, and other stakeholders to understand and implement business requirements.
- Communicate effectively with team members to align on project goals and deliverables.

Testing and Quality Assurance:
- Write and maintain unit and integration tests to ensure the reliability and stability of the application.
- Collaborate with QA teams to troubleshoot and resolve issues during the testing phase.
- Maintain a focus on delivering high-quality software that meets both functional and non-functional requirements.

Continuous Improvement:
- Stay up to date with the latest Symfony and PHP development trends and technologies.
- Contribute to the continuous improvement of development processes and tools.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3-5+ years of experience in backend development, with a strong focus on Symfony.
- Proficiency in PHP, MySQL/PostgreSQL, and web application development.
- Experience with RESTful API design and development.
- Proficiency in both Dutch and English, with strong verbal and written communication skills in both languages.
- Strong understanding of object-oriented programming (OOP) and design patterns.
- Experience with version control systems such as Git and GitLab.
- Ability to work independently as well as part of a collaborative team.

Preferred Qualifications:
- Experience with API Platform (https://api-platform.com/) is a strong plus.
- Experience with RealEstateCore, MongoDB, GraphQL, and graph databases is highly desirable.
- Experience with messaging systems like RabbitMQ or equivalent systems.
- Experience with Docker and containerized environments.
- Familiarity with front-end technologies like React or Vue.js is a plus.
- Knowledge of cloud platforms like AWS or Azure.
- Familiarity with Agile development methodologies.
- Experience in the building management or smart building industry.

Posted 2 months ago

Apply

7.0 - 9.0 years

25 - 40 Lacs

Pune

Work from Office

Our world is transforming, and PTC is leading the way. Our software brings the physical and digital worlds together, enabling companies to improve operations, create better products, and empower people in all aspects of their business. Our people make all the difference in our success. Today, we are a global team of nearly 7,000, and our main objective is to create opportunities for our team members to explore, learn, and grow – all while seeing their ideas come to life and celebrating the differences that make us who we are and the work we do possible.

PTC is a dynamic and innovative company dedicated to creating products that transform industries and improve lives. We are looking for a talented Product Architect who will lead the conceptualization and development of groundbreaking products and leverage cutting-edge AI technologies to drive enhanced productivity and innovation.

Job Description:

Responsibilities: Design and implement scalable, secure, and high-performing Java applications. Focus on designing, building, and maintaining complex, large-scale systems with intrinsic multi-tenant SaaS characteristics. Define architectural standards, best practices, and technical roadmaps. Lead the integration of modern technologies, frameworks, and cloud solutions. Collaborate with DevOps, product teams, and UI/UX designers to ensure cohesive product development. Conduct code reviews, mentor developers, and enforce best coding practices. Stay up to date with the latest design patterns, technological trends, and industry best practices. Ensure scalability, performance, and security of product designs. Conduct feasibility studies and risk assessments.

Requirements: Proven experience as a Software Solution Architect or similar role. Strong expertise in vector and graph databases (e.g., Pinecone, Chroma DB, Neo4j, ArangoDB, Elasticsearch). Extensive experience with content repositories and content management systems. Familiarity with SaaS and microservices implementation models. Proficiency in programming languages such as Java, Python, or C#. Excellent problem-solving skills and ability to think strategically. Strong technical, analytical, communication, interpersonal, and presentation skills. Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Experience with cloud platforms (e.g., AWS, Azure). Knowledge of containerization technologies (e.g., Docker, Kubernetes). Experience with artificial intelligence (AI) and machine learning (ML) technologies.

Benefits: Competitive salary and benefits package. Opportunities for professional growth and development. Collaborative and inclusive work environment. Flexible working hours and hybrid work options.

Life at PTC is about more than working with today's most cutting-edge technologies to transform the physical world. It's about showing up as you are and working alongside some of today's most talented industry leaders to transform the world around you. If you share our passion for problem-solving through innovation, you'll likely become just as passionate about the PTC experience as we are. Are you ready to explore your next career move with us? We respect the privacy rights of individuals and are committed to handling Personal Information responsibly and in accordance with all applicable privacy and data protection laws. Review our Privacy Policy here.

Posted 2 months ago

Apply

5 - 9 years

7 - 11 Lacs

Kochi, Coimbatore, Thiruvananthapuram

Work from Office

Job Title: Senior Data Engineer (Graph DB Specialist), Global Song
Management Level: 9, Specialist
Location: Kochi, Coimbatore
Must-have skills: Data modeling techniques and methodologies
Good-to-have skills: Proficiency in Python and PySpark programming
Job Summary: We are seeking a highly skilled Data Engineer with expertise in graph databases to join our dynamic team. The ideal candidate will have a strong background in data engineering, graph querying languages, and data modeling, with a keen interest in leveraging cutting-edge technologies like vector databases and LLMs to drive functional objectives.
Your responsibilities will include:
- Design, implement, and maintain ETL pipelines to prepare data for graph-based structures.
- Develop and optimize graph database solutions using querying languages such as Cypher, SPARQL, or GQL; Neo4j experience is preferred.
- Build and maintain ontologies and knowledge graphs, ensuring efficient and scalable data modeling.
- Integrate vector databases and implement similarity-search techniques, with a focus on Retrieval-Augmented Generation (RAG) methodologies and GraphRAG.
- Collaborate with data scientists and engineers to operationalize machine learning models and integrate them with graph databases.
- Work with Large Language Models (LLMs) to achieve functional and business objectives.
- Ensure data quality, integrity, and security while delivering robust and scalable solutions.
- Communicate effectively with stakeholders to understand business requirements and deliver solutions that meet objectives.
Roles & Responsibilities:
- Experience: At least 5 years of hands-on experience in data engineering, including 2 years working with graph databases.
- Querying: Advanced knowledge of the Cypher, SPARQL, or GQL querying languages.
- ETL Processes: Expertise in designing and optimizing ETL processes for graph structures.
- Data Modeling: Strong skills in creating ontologies and knowledge graphs, and in presenting data for GraphRAG-based solutions.
- Vector Databases: Understanding of similarity-search techniques and RAG implementations.
- LLMs: Experience working with Large Language Models for functional objectives.
- Communication: Excellent verbal and written communication skills.
- Cloud Platforms: Experience with Azure analytics platforms, including Function Apps, Logic Apps, and Azure Data Lake Storage (ADLS).
- Graph Analytics: Familiarity with graph algorithms and analytics.
- Agile Methodology: Hands-on experience working in Agile teams and processes.
- Machine Learning: Understanding of machine learning models and their implementation.
Qualifications:
- Experience: A minimum of 5-10 years of experience is required.
- Educational Qualification: Any graduation / BE / B.Tech
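The similarity search behind the RAG approaches mentioned above reduces, at its core, to ranking stored vectors by their closeness to a query vector. Below is a minimal pure-Python sketch of that step; the document names and embedding values are made-up toy data, not anything from this listing, and a real system would use a vector database and model-generated embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up "embeddings" for three documents and a query.
docs = {
    "doc_pathways": [0.9, 0.1, 0.0],
    "doc_proteins": [0.7, 0.6, 0.1],
    "doc_finance":  [0.0, 0.2, 0.9],
}
query = [0.8, 0.3, 0.0]

# Rank documents by similarity to the query, as a vector store
# would when retrieving context for a RAG prompt.
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked[0])  # doc_pathways
```

GraphRAG extends this idea by combining such vector lookups with graph traversals over a knowledge graph, so that retrieved context includes related entities, not just similar text.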

Posted 2 months ago

Apply

3 - 5 years

6 - 10 Lacs

Gurugram

Work from Office

Position Summary: A Data Engineer designs and maintains scalable data pipelines and storage systems, with a focus on integrating and processing knowledge graph data for semantic insights. They enable efficient data flow, ensure data quality, and support analytics and machine learning by leveraging advanced graph-based technologies. How You'll Make an Impact (responsibilities of role): Build and optimize ETL/ELT pipelines for knowledge graphs and other data sources. Design and manage graph databases (e.g., Neo4j, AWS Neptune, ArangoDB). Develop semantic data models using RDF, OWL, and SPARQL. Integrate structured, semi-structured, and unstructured data into knowledge graphs. Ensure data quality, security, and compliance with governance standards. Collaborate with data scientists and architects to support graph-based analytics. What You Bring (required qualifications and skills): Bachelor's/Master's in Computer Science, Data Science, or related fields. Experience: 3+ years of experience in data engineering, with knowledge graph expertise. Proficiency in Python, SQL, and graph query languages (SPARQL, Cypher). Experience with graph databases and frameworks (Neo4j, GraphQL, RDF). Knowledge of cloud platforms (AWS, Azure). Strong problem-solving and data modeling skills. Excellent communication skills, with the ability to convey complex concepts to non-technical stakeholders. The ability to work collaboratively in a dynamic team environment across the globe.
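The RDF and SPARQL skills listed above boil down to storing data as (subject, predicate, object) triples and matching patterns over them. The sketch below mimics one SPARQL basic graph pattern in plain Python; the identifiers and triples are invented toy examples, not part of the job description.

```python
# A tiny knowledge graph as RDF-style (subject, predicate, object) triples.
# All identifiers here are invented toy data.
triples = {
    ("sensor:42", "rdf:type", "ex:TemperatureSensor"),
    ("sensor:42", "ex:locatedIn", "room:7"),
    ("room:7", "ex:partOf", "building:A"),
}

def match(pattern):
    """Match one basic graph pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly "SELECT ?s WHERE { ?s ex:locatedIn room:7 }" in miniature:
located = match((None, "ex:locatedIn", "room:7"))
print([t[0] for t in located])  # ['sensor:42']
```

A real triple store adds indexing, IRIs, literals with datatypes, and joins across multiple patterns, but the pattern-matching core is the same.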

Posted 2 months ago

Apply

8 - 13 years

17 - 22 Lacs

Bengaluru

Work from Office

Overview We are seeking a highly skilled and experienced Azure OpenAI Architect to join our growing team. You will play a key role in designing, developing, and implementing Gen AI solutions across various domains, including chatbots. The ideal candidate for this role will have experience with the latest natural language processing and generative AI technologies and the ability to produce diverse content such as text, audio, images, or video. You will be responsible for integrating general-purpose AI models into our systems and ensuring they serve a variety of purposes effectively. Tasks and Responsibilities Collaborate with cross-functional teams to design and implement Gen AI solutions that meet business requirements. Develop, train, test, and validate the AI system to ensure it meets the required standards and performs as intended. Design, develop, and deploy Gen AI solutions using advanced LLMs such as OpenAI models and open-source LLMs (Llama2, Mistral, ...), and frameworks like Langchain and Pandas. Leverage expertise in Transformer/neural network models and vector/graph databases to build robust and scalable AI systems. Integrate AI models into existing systems to enhance their capabilities. Create data pipelines to ingest, process, and prepare data for analysis and modeling using Azure services such as Azure AI Document Intelligence and Azure Databricks. Integrate speech-to-text functionality using Azure native services to create user-friendly interfaces for chatbots. Deploy and manage Azure services and resources using Azure DevOps or other deployment tools. Monitor and troubleshoot deployed solutions to ensure optimal performance and reliability. Ensure compliance with security and regulatory requirements related to AI solutions. Stay up to date with the latest Azure AI technologies and industry developments, and share knowledge and best practices with the team.
Qualifications Overall 8+ years' combined experience in IT, with the most recent 5 years as an AI engineer. Bachelor's or Master's degree in computer science, information technology, or a related field. Experience in designing, developing, and delivering successful Gen AI solutions. Experience with the Azure cloud platform and Azure AI services such as Azure AI Search, Azure OpenAI, Document Intelligence, Speech, and Vision. Experience with Azure infrastructure and solutioning. Familiarity with OpenAI models, open-source LLMs, and Gen AI frameworks like Langchain and Pandas. Solid understanding of Transformer/neural network architectures and their application in Gen AI. Hands-on experience with vector/graph databases and their use in semantic and vector search. Proficiency in programming languages like Python (essential). Relevant industry certifications, such as Microsoft Certified: Azure AI Engineer or Azure Solution Architect, are a plus. Excellent problem-solving, analytical, and critical thinking skills. Strong communication and collaboration skills to work effectively in a team environment. A passion for innovation and a desire to push the boundaries of what's possible with Gen AI.

Posted 2 months ago

Apply

3 - 6 years

7 - 11 Lacs

Hyderabad

Work from Office

Sr Semantic Engineer – Research Data and Analytics What you will do Let's do this. Let's change the world. In this vital role you will join Research's Semantic Graph Team, which is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. This role primarily focuses on working with technologies such as RDF, SPARQL, and Python. In addition, the position involves semantic data integration and cloud-based data engineering. The ideal candidate should possess experience in the pharmaceutical or biotech industry, deep technical skills, proficiency with big data technologies, and experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential for this role. In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively. Roles & Responsibilities: Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies. Develop and maintain semantic data models for biopharma scientific data. Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks. Ensure interoperability across federated data sources, linking relational and graph-based data. Implement and optimize CI/CD pipelines using GitLab and AWS. Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions. Collaborate with global multi-functional teams, including research scientists, Data Architects, Business SMEs, Software Engineers, and Data Scientists, to understand data requirements, design solutions, and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions.
Collaborate with data scientists, engineers, and domain experts to improve research data accessibility. Adhere to standard processes for coding, testing, and designing reusable code/components. Explore new tools and technologies to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain comprehensive documentation of processes, systems, and solutions. Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Doctorate degree OR Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Bachelor's degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field OR Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field Preferred Qualifications and Experience: 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms). Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices. Cloud and Automation Expertise: Good experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions.
Technical Problem-Solving: Excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets. Good-to-Have Skills: Experience in biotech/drug discovery data engineering. Experience applying knowledge graphs, taxonomy, and ontology concepts in life sciences and chemistry domains. Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune). Familiarity with Cypher, GraphQL, or other graph query languages. Experience with big data tools (e.g. Databricks). Experience in biomedical or life sciences research data management. Soft Skills: Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago

Apply

2 - 4 years

4 - 7 Lacs

Hyderabad

Work from Office

Associate Data Engineer Graph – Research Data and Analytics What you will do Let's do this. Let's change the world. In this vital role you will be part of Research's Semantic Graph Team, which is seeking a dedicated and skilled Data Engineer to design, build, and maintain solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance, graph-based data engineering solutions for large scientific datasets and collaborate with Research partners. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, has experience with semantic data modeling and graph databases, and understands data architecture and ETL processes. Roles & Responsibilities: Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions. Contribute to data pipeline projects from inception to deployment; manage scope, timelines, and risks. Contribute to data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency. Optimize large datasets for query performance. Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate with Data Architects, Business SMEs, Software Engineers, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions. Identify and resolve data-related challenges. Adhere to best practices for coding, testing, and designing reusable code/components. Explore new tools and technologies that will help to improve ETL platform performance. Participate in sprint planning meetings and provide estimations on technical implementation. Maintain documentation of processes,
systems, and solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Bachelor's degree and 1 to 3 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience OR Diploma and 4 to 7 years of Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or related field experience Functional Skills: Must-Have Skills: Advanced Semantic and Relational Data Skills: Proficiency in Python, RDF, SPARQL, graph databases (e.g. AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g. Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices. Hands-on experience with big data technologies and platforms, such as Databricks, workflow orchestration, and performance tuning on data processing. Excellent problem-solving skills and the ability to work with large, complex datasets. Good-to-Have Skills: A passion for tackling complex challenges in drug discovery with technology and data. Experience with system administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting. Examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups. Solid understanding of data modeling, data warehousing, and data integration concepts. Solid experience using RDBMS (e.g. Oracle, MySQL, SQL Server, PostgreSQL). Knowledge of cloud data platforms (AWS preferred). Experience with data visualization tools (e.g.
Dash, Plotly, Spotfire) Experience with diagramming and collaboration tools such as Miro, Lucidchart or similar tools for process mapping and brainstorming Experience writing and maintaining user documentation in Confluence Professional Certifications: Databricks Certified Data Engineer Professional preferred Soft Skills: Excellent critical-thinking and problem-solving skills Good communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago

Apply

10 - 15 years

27 - 35 Lacs

Hyderabad

Work from Office

Node.js, TypeScript, JavaScript, NoSQL/graph databases, and API development experience. Expertise in GraphQL, Docker, cloud deployment (AWS/Azure), and React.js/AngularJS is required. Knowledge of API security, logging, Koa.js, and build tools is a plus.

Posted 2 months ago

Apply

12 - 17 years

14 - 19 Lacs

Pune, Bengaluru

Work from Office

Project Role : Application Architect Project Role Description : Provide functional and/or technical expertise to plan, analyze, define and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. Must have skills : Manufacturing Operations Good to have skills : NA Minimum 12 year(s) of experience is required Educational Qualification : BTech BE Job Title: Industrial Data Architect Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record in providing functional and/or technical expertise to plan, analyze, define, and support the delivery of future functional and technical capabilities for an application or group of applications, well versed in OT data quality, data modeling, data governance, data contextualization, database design, and data warehousing. Must-have Skills: Domain knowledge in areas of Manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science. Key Responsibilities: The Industrial Data Architect will be responsible for developing and overseeing the industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations, focused on designing, building, and managing the data architecture of industrial systems. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests. Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, and data virtualization.
Create scalable and secure data structures, integrating with existing systems and ensuring efficient data flow.
Qualifications:
Data Modeling and Architecture:
- Proficiency in data modeling techniques (conceptual, logical, and physical models).
- Knowledge of database design principles and normalization.
- Experience with data architecture frameworks and methodologies (e.g., TOGAF).
Database Technologies:
- Relational Databases: Expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
- NoSQL Databases: Experience with at least one NoSQL database such as MongoDB, Cassandra, or Couchbase for handling unstructured data.
- Graph Databases: Proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB; understanding of graph data models, including property graphs and RDF (Resource Description Framework).
- Query Languages: Experience with at least one query language such as Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop); familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language); exposure to semantic web technologies and standards.
Data Integration and ETL (Extract, Transform, Load):
- Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi).
- Experience with data integration tools and techniques to consolidate data from various sources.
IoT and Industrial Data Systems:
- Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA).
- Experience with IoT data platforms such as AWS IoT, Azure IoT Hub, or Google Cloud IoT Core.
- Experience with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink.
- Ability to design and implement real-time data pipelines; familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow.
- Understanding of event-driven design patterns and practices; experience with message brokers like RabbitMQ or ActiveMQ.
- Exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge.
AI/ML and GenAI:
- Experience preparing data for AI/ML/GenAI applications.
- Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras.
Cloud Platforms:
- Experience with cloud data services from at least one provider, e.g., AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow).
Data Warehousing and BI Tools:
- Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery).
- Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView.
Data Governance and Security:
- Understanding of data governance principles, data quality management, and metadata management.
- Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques.
Big Data Technologies:
- Experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka.
- Understanding of distributed computing and data processing frameworks.
Excellent Communication: Superior written and verbal communication skills, with the ability to effectively articulate complex technical concepts to diverse audiences.
Problem-Solving Acumen: A passion for tackling intricate challenges and devising elegant solutions.
Collaborative Spirit: A track record of successful collaboration with cross-functional teams and stakeholders.
Certifications: AWS Certified Data Engineer Associate / Microsoft Certified: Azure Data Engineer Associate / Google Cloud Certified Professional Data Engineer certification is mandatory.
Minimum of 14-18 years of progressive information technology experience.
Qualifications BTech BE
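The event-driven streaming pattern this role describes (MQTT/Kafka producers feeding real-time pipelines) can be sketched with nothing but the standard library: a producer publishes JSON messages to a queue and a consumer applies a rule to each one. The sensor names and threshold below are hypothetical, and a production pipeline would use a real broker such as Kafka or an MQTT topic rather than an in-process queue.

```python
import json
import queue
import threading

# In-memory stand-in for an MQTT topic / Kafka partition.
events = queue.Queue()

def producer():
    # Hypothetical sensor readings, as an IIoT gateway might publish them.
    for temp in (21.5, 22.0, 99.9):
        events.put(json.dumps({"sensor": "line-3", "temp_c": temp}))
    events.put(None)  # sentinel: end of stream

alerts = []

def consumer():
    while True:
        msg = events.get()
        if msg is None:
            break
        reading = json.loads(msg)
        if reading["temp_c"] > 80:  # simple threshold rule
            alerts.append(reading["sensor"])

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
print(alerts)  # ['line-3']
```

The same decoupling (producer, broker, consumer) is what makes Kafka- or RabbitMQ-based pipelines scale: each side only sees the queue, never the other side directly.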

Posted 2 months ago

Apply

7 - 9 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role : Data Platform Engineer Project Role Description : Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills : Graph Databases Good to have skills : Life Sciences, Autosys Minimum 7.5 year(s) of experience is required Educational Qualification : BE Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, utilizing your expertise in graph databases and life sciences. Roles & Responsibilities: Assist with the blueprint and design of the data platform components. Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Utilize expertise in graph databases to design and implement data models. Develop and maintain data pipelines and ETL processes. Ensure data quality and integrity through testing and validation processes. Professional & Technical Skills: Must-Have Skills: Expertise in graph databases. Good-to-Have Skills: Knowledge of life sciences. Experience in designing and implementing data models. Proficiency in developing and maintaining data pipelines and ETL processes. Strong understanding of testing and validation processes for ensuring data quality and integrity. Additional Information: The candidate should have a minimum of 7.5 years of experience in graph databases. This position is based at our Bengaluru office. Qualification BE

Posted 2 months ago

Apply

5 - 10 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Neo4j Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development. Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge-sharing sessions to enhance team capabilities. Monitor project progress and ensure timely delivery of application features. Professional & Technical Skills: Must-Have Skills: Proficiency in Neo4j. Strong understanding of graph database concepts and data modeling. Experience with application development frameworks and methodologies. Familiarity with RESTful APIs and microservices architecture. Ability to troubleshoot and optimize application performance. Additional Information: The candidate should have a minimum of 5 years of experience in Neo4j. This position is based at our Bengaluru office. A 15 years full time education is required. Qualification 15 years full time education
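The graph data modeling this Neo4j role calls for represents entities as nodes and typed relationships as edges, with queries that traverse paths between them. As a toy illustration (the people and relationship types are invented, and real work would use Cypher against Neo4j rather than Python dictionaries), here is a breadth-first shortest-path traversal over a tiny property-graph-like structure:

```python
from collections import deque

# Toy property graph: node -> list of (relationship, neighbour) pairs,
# the kind of model a Cypher pattern like (a)-[*]->(b) traverses.
graph = {
    "alice": [("KNOWS", "bob")],
    "bob":   [("KNOWS", "carol"), ("WORKS_WITH", "dave")],
    "carol": [("KNOWS", "dave")],
    "dave":  [],
}

def shortest_path(start, goal):
    """Breadth-first search returning the first (shortest) path found."""
    seen, frontier = {start}, deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for _, nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(shortest_path("alice", "dave"))  # ['alice', 'bob', 'dave']
```

Graph databases execute this kind of traversal natively via index-free adjacency, which is why path queries stay fast even when equivalent relational joins would not.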

Posted 2 months ago


2 - 7 years

4 - 8 Lacs

Chennai

Work from Office

Overview, Responsibilities, and Requirements
- Java development with hands-on experience in Spring Boot.
- Strong knowledge of UI frameworks, particularly Angular, for developing dynamic, interactive web applications.
- Experience with Kubernetes for managing microservices-based applications in a cloud environment.
- Familiarity with Postgres (relational) and Neo4j (graph) databases for managing complex data models.
- Experience in metadata modeling and designing data structures that support high performance and scalability.
- Expertise in Camunda BPMN and business process automation.
- Experience implementing rules with the Drools rules engine.
- Knowledge of Unix/Linux systems for application deployment and management.
- Experience building data ingestion frameworks to process and handle large datasets.
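The "data ingestion framework" skill above boils down to a repeatable transform/validate/load pass over incoming records. A minimal sketch is given below; the record format, field names, and validation rule are invented for illustration:

```python
import json

def ingest(records, transform, validate, sink):
    """Minimal ingestion pass: transform each record, load valid rows, count rejects."""
    loaded, rejected = 0, 0
    for rec in records:
        row = transform(rec)
        if validate(row):
            sink.append(row)
            loaded += 1
        else:
            rejected += 1
    return loaded, rejected

# Hypothetical input: newline-delimited JSON with a numeric "amt" field.
raw = ['{"id": 1, "amt": "10.5"}', '{"id": 2, "amt": "oops"}']

def to_row(line):
    d = json.loads(line)
    try:
        d["amt"] = float(d["amt"])
    except ValueError:
        d["amt"] = None   # mark unparseable amounts for rejection
    return d

target = []
stats = ingest(raw, to_row, lambda r: r["amt"] is not None, target)
```

A production framework would add batching, retries, and dead-letter handling, but the transform/validate/load separation is the core pattern.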

Posted 2 months ago
