
14 GraphDB Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Product Development Engineer at SenecaGlobal IT Services Private Limited, located in Hyderabad, India, your primary responsibility will be to design, develop, and test software products, ensuring they meet business requirements and quality standards. You will guide technical and engineering activities from the initial business requirements stage to final solution delivery, aiming to minimize delivery failures due to technology and engineering risks. Your role will involve promoting a client-first attitude, establishing effective communication channels with clients, and prioritizing client needs to drive the delivery of software products and services. You will plan and implement secure software engineering activities following agile methodologies, manage development iterations, and control scope and requirement changes to ensure project success.

To excel in this role, you should have at least 8 years of industry experience in full-stack architecture and distributed systems, with proficiency in Java, REST, Redis, AWS, and Spring Boot. Your enthusiasm for innovation and self-development in backend development, along with your expertise in RESTful APIs, JSON, CI/CD, and relational databases, will be crucial for success. Additionally, you will be expected to stay updated on the latest backend technologies, contribute to the product development process, mentor other developers, and ensure code quality through structured and maintainable coding practices. Experience with performance tuning, profiling, and debugging applications will also be beneficial.

Mandatory skills for this role include Java, Spring Boot, unit testing automation, React programming, Redis, AWS, microservices architecture, and CI/CD pipeline automation. Familiarity with project management tools like Jira, Confluence, and Slack is essential, while Node.js programming skills and exposure to the healthcare domain are considered advantageous.

To apply for this position, please submit your CV and contact information to india.ta@senecaglobal.com. SenecaGlobal, a global leader in software development and management, offers a dynamic work environment with opportunities for growth and innovation.

Posted 1 week ago

Apply

8.0 - 15.0 years

0 Lacs

Karnataka

On-site

As a Data Science Team Lead at our organization, you will lead a team of data scientists and machine learning engineers, plan data projects, and build analytic systems. Your role will also involve participating in pre-sales activities and developing proposals. With your strong expertise in machine learning, deep learning, NLP, data mining, and information retrieval, you will design, prototype, and build the next-generation analytics engines and services.

Your responsibilities will include:
- Leading data mining and collection procedures while ensuring data quality and integrity
- Interpreting and analyzing data problems
- Conceiving, planning, and prioritizing data projects
- Building analytic systems and predictive models
- Testing the performance of data-driven products
- Visualizing data and creating reports
- Aligning data projects with organizational goals, understanding business problems, and designing end-to-end analytics use cases
- Developing complex models and algorithms to drive innovation throughout the organization
- Conducting advanced statistical analysis to provide actionable insights
- Collaborating with model developers to implement scalable solutions
- Providing thought leadership by researching best practices and collaborating with industry leaders

To qualify for this role, you should have:
- 8-15 years of experience in a statistical and/or data science role, with proven experience as a Data Scientist or in a similar role
- Strong organizational and leadership skills
- A degree in Computer Science, Data Science, Mathematics, or a similar field
- Deep knowledge and experience in Large Language Models (LLMs) and strong knowledge of Generative AI
- Deep knowledge of machine learning, statistics, optimization, or a related field
- Experience with linear and non-linear regression models, classification models, and unsupervised models
- Rich experience in NLP techniques
- Experience with deep learning techniques such as CNNs, RNNs, GANs, and Markov models
- Hands-on experience with at least one machine learning environment such as R, Python, or MATLAB
- Good knowledge of Explainable AI
- Experience working with large datasets, simulation/optimization, and distributed computing tools
- Excellent written and verbal communication skills and a strong desire to work in cross-functional teams
- An attitude to thrive in a fun, fast-paced, startup-like environment
- Experience with semi-structured and unstructured databases (an additional advantage)

Optional qualifications include programming skills in languages such as C, C++, Java, or .NET.
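The regression experience this listing asks for can be illustrated with a minimal, dependency-free sketch. This is not part of the posting: the function name and data are invented, and real work would use scikit-learn or statsmodels rather than hand-rolled ordinary least squares.

```python
# Minimal illustration of simple linear regression via ordinary least squares.
# Hypothetical helper; production code would use scikit-learn or statsmodels.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

if __name__ == "__main__":
    # Points lying exactly on y = 2x + 1 recover slope 2 and intercept 1.
    slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
    print(slope, intercept)  # 2.0 1.0
```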

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You are a highly skilled and experienced Senior Backend Developer with a focus on Python and backend development. In this role, you will be responsible for designing, developing, and maintaining backend applications using Python. Collaborating with cross-functional teams, you will implement RESTful APIs and web services to ensure high-performance, scalable backend systems.

Your key responsibilities will include optimizing database performance and working with relational databases such as MySQL and PostgreSQL, as well as graph databases like Neo4j. You will also develop and manage orchestration workflows using tools like Apache Airflow, and implement and maintain CI/CD pipelines for smooth deployments. Collaboration with DevOps teams for infrastructure management will be essential, along with maintaining high-quality documentation and following version control practices.

To excel in this role, you must have a minimum of 5-8 years of backend development experience with Python. Experience with backend frameworks like Node.js/TypeScript is advantageous. A strong understanding of relational databases with a focus on query optimization, hands-on experience with graph databases, and familiarity with RESTful APIs, web service design principles, version control tools like Git, CI/CD pipelines, and DevOps practices are also required. Your problem-solving and analytical skills will be put to the test in this role, along with your communication and collaboration abilities for working effectively with cross-functional teams. Adaptability to new technologies and a fast-paced work environment is crucial, and a Bachelor's degree in Computer Science, Engineering, or a related field is preferred. Familiarity with modern frameworks and libraries in Python or Node.js will be beneficial for success in this position.

If you believe you are a good fit for this role, please send your CV, references, and cover letter to career@e2eresearch.com.

Posted 1 week ago

Apply

10.0 - 18.0 years

0 Lacs

Pune, Maharashtra

On-site

You have 10 to 18 years of relevant experience in Data Science. As a Data Scientist, your responsibilities will include modeling and data processing using Scala Spark/PySpark. You should have expert-level knowledge of Python for data science purposes. Additionally, you will be required to work on data science concepts, model building using sklearn/PyTorch, and graph analytics using NetworkX, Neo4j, or similar graph databases. Experience in model deployment and monitoring (MLOps) is also desirable.

The required skills for this Data Science position include:
- Data Science
- Python
- Scala
- Spark/PySpark
- MLOps
- GraphDB
- Neo4j
- NetworkX

Our hiring process consists of the following steps:
1. Screening (HR round)
2. Technical round 1
3. Technical round 2
4. Final HR round

This position is based in Pune.
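As a rough illustration of the graph-analytics work this posting describes, here is a dependency-free sketch of two staple operations, degree centrality and BFS shortest path. In practice the listed stack (NetworkX, Neo4j) provides these out of the box; the graph and function names below are invented for illustration.

```python
# Conceptual sketch of graph analytics on an adjacency-list graph.
# Real projects would use NetworkX or a Neo4j query; this pure-Python
# version only illustrates the underlying ideas.
from collections import deque

def degree_centrality(graph):
    """Fraction of the other nodes each node is directly connected to."""
    n = len(graph)
    return {node: len(nbrs) / (n - 1) for node, nbrs in graph.items()}

def shortest_path_length(graph, src, dst):
    """Breadth-first search for the hop count between two nodes."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None  # dst unreachable from src

if __name__ == "__main__":
    g = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
    print(degree_centrality(g)["a"])          # 0.6666666666666666
    print(shortest_path_length(g, "b", "d"))  # 3
```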

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Software Product Development Manager at SenecaGlobal IT Services Private Limited, located in Hyderabad, India, you will be responsible for guiding the technical and engineering activities of projects from business requirements to solution development and delivery. Your primary focus will be on ensuring that client requirements are met, reducing delivery failures, and promoting a client-first attitude. You will be expected to lead the planning and implementation of secure software engineering activities using agile processes, manage development iterations and project scope, control changes, and mitigate technical risks. Additionally, you will oversee the development, integration, and improvement of software solutions, tests, and automation, while ensuring the quality of work products through reviews and testing.

To excel in this role, you should have a strong background in full-stack architecture and distributed systems, with at least 8 years of experience in Java, REST, Redis, AWS, and Spring Boot. Your enthusiasm for innovation, experimentation, and self-development in backend development will be essential, along with your expertise in RESTful APIs, JSON, CI/CD, and tools like Git, Bitbucket, and Jenkins. Proficiency in relational databases and NoSQL, as well as experience in developing high-scale web applications, will be crucial.

Mandatory skills for this role include Java, Spring Boot, unit testing automation, React programming, Redis, AWS, microservices architecture, and CI/CD pipeline automation. Knowledge of project management tools such as Jira, Confluence, and Slack is also required. Additionally, experience with Node.js programming and exposure to the healthcare domain would be advantageous. To be considered for this position, you should have 8-10 years of industry experience, along with a degree in BE/B.Tech/M.Tech/MCA.

If you are passionate about contributing to all stages of the product development process, staying updated on backend technologies, and mentoring other developers, we encourage you to apply by submitting your CV to india.ta@senecaglobal.com. SenecaGlobal, a global leader in software development and management, offers a dynamic work environment that fosters innovation and collaboration. Our team of skilled professionals is dedicated to delivering high-quality solutions to clients across various industries. Join us in our mission to provide clients with a competitive edge and make a positive impact in the world of technology.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Telangana

On-site

You will be working at TRST01, a leading sustainable-tech company based in Hyderabad. TRST01 specializes in innovative solutions for sustainable supply chain management, automated ESG reporting, and climate action measurement through its digital Measurement, Reporting, and Verification (dMRV) solution. By leveraging decentralized technology and blockchain, TRST01 delivers reliable data to assist businesses in achieving their ESG goals effectively.

As a Data Modeler at TRST01, you will play a crucial role in designing, implementing, and optimizing data models that support carbon accounting, lifecycle analysis, and sustainability reporting. Your responsibilities will include collaborating with cross-functional teams, such as data scientists, software engineers, and sustainability experts, to ensure the integrity, accuracy, and scalability of environmental data models.

Key Responsibilities:
- Design and develop robust data models for tracking carbon emissions, energy consumption, and sustainability metrics
- Utilize CFRD datasets to establish reliable and scalable data pipelines for climate impact analysis
- Develop entity-relationship diagrams (ERDs) and schema designs to optimize storage and retrieval of climate-related data
- Collaborate with data engineers and scientists to integrate climate and sustainability data into existing platforms
- Implement data validation and quality control measures to ensure accuracy in sustainability reporting
- Support the development of AI/ML models for predictive analysis of carbon reduction strategies
- Ensure compliance with global environmental regulations (such as the GHG Protocol, CSRD, and TCFD) in data modeling practices
- Optimize data models for real-time and batch processing of sustainability metrics
- Work with business intelligence teams to develop dashboards and reports based on modeled climate data
- Stay updated on the latest advancements in climate tech, data modeling, and carbon accounting methodologies

Required Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Environmental Science, Sustainability, or a related field
- 1-3+ years of experience in data modeling, database design, and schema optimization
- Expertise in CFRD (Carbon Footprint and Reduction Data) and related frameworks
- Strong understanding of relational and non-relational databases (SQL, NoSQL, graph databases, etc.)
- Hands-on experience with big data tools (e.g., Apache Spark, Hadoop) and ETL pipelines
- Proficiency in data modeling tools such as Erwin, Lucidchart, or similar
- Experience working with climate datasets (e.g., satellite imagery, emission inventories, LCA data)
- Familiarity with carbon accounting standards like the GHG Protocol, SBTi, and ISO 14064
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities

Preferred Qualifications:
- Experience in cloud-based data solutions (AWS, Azure) for sustainability analytics
- Exposure to machine learning models for climate risk assessment
- Familiarity with GIS-based modeling for environmental impact analysis

Posted 2 weeks ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

This is a full-time position with D Square Consulting Services Pvt Ltd. As a Senior Data Architect/Modeler, you will collaborate closely with various business partners, product owners, and the data strategy, data platform, data science, and machine learning teams to innovate data products for end users. Your role will involve shaping the overall solution architecture and defining models for data products using best-in-class engineering practices. By working with stakeholders, you will understand business requirements and design and build data models that support acquisition and ingestion processes as well as critical reporting and insight needs.

To be successful in this role, you must have a minimum of 12 years of experience, with at least 7 years in Data & Analytics initiatives. You should possess a deep understanding of how data architecture and modeling facilitate data pipelines, data management, and analytics. Additionally, you need 5+ years of experience in data architecture and modeling within the consumer/healthcare goods industries and hands-on experience with cloud architecture (Azure, GCP, AWS) and related databases like Synapse, Databricks, Snowflake, and Redshift. Proficiency in SQL and familiarity with data modeling tools like Erwin or ER/Studio is crucial.

Your responsibilities will include leading data architecture and modeling efforts in collaboration with engineering and platform teams to develop next-generation product capabilities that drive business growth. You will focus on delivering reliable, high-quality data products to maximize business value and work within the DevSecOps framework to enhance data & analytics capabilities. Collaborating with Business Analytics leaders, you will translate business needs into optimal architecture designs and design scalable, reusable models for various functional areas of data products while adhering to FAIR principles.

In this role, you will also collaborate with data engineers, solution architects, and other stakeholders to maintain and optimize data models. You will establish trusted partnerships with the Data Engineering, Platforms, and Data Science teams to create business-relevant data models and ensure the maintenance of metadata rules, data dictionaries, and associated lineage details. Staying updated on emerging technologies and mentoring other data modelers on the team will also be part of your responsibilities.

Qualifications for this position include an undergraduate degree in Technology, Computer Science, Applied Data Sciences, or related fields, with an advanced degree preferred. Experience with NoSQL and graph databases, as well as hands-on experience with data catalogs like Alation, Collibra, or similar tools, is beneficial. You should have a strong ability to challenge existing technologies and architecture while effectively influencing across the organization. Lastly, experience in a diverse company culture and a commitment to inclusion and equal-opportunity employment are desired traits for this role.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

- Working knowledge of Azure AI
- Working knowledge of LLMs and RAG: data insights, reasoning agents, chain of thought
- Knowledge of graph databases: Cosmos DB, Neo4j
- Prompt engineering
- API building
- Domain knowledge in finance and data management
- Should have worked on enterprise AI use cases, including at least one solution taken live
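For context on the RAG skills listed above, here is a toy sketch of the retrieval step: pick the passage most relevant to a question before passing it to an LLM. Production systems use embeddings and a vector or graph store (e.g., Cosmos DB or Neo4j); this version scores by word overlap only, and all names and data below are illustrative.

```python
# Toy illustration of the "R" in RAG: retrieve a grounding passage, then
# build a prompt for the downstream model. Invented data; no real LLM call.

def retrieve(question, passages):
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

def build_prompt(question, context):
    """Assemble a context-grounded prompt for the downstream model."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    docs = [
        "Invoices are settled within 30 days of receipt.",
        "The finance team reconciles ledgers every quarter.",
    ]
    best = retrieve("when are invoices settled", docs)
    print(best)  # Invoices are settled within 30 days of receipt.
```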

Posted 1 month ago

Apply

4.0 - 6.0 years

7 - 10 Lacs

Hyderabad

Work from Office

What you will do: In this vital role you will be part of Research's Semantic Graph team, which is seeking a dedicated and skilled Semantic Data Engineer to build and optimize knowledge graph-based software and data resources. The role primarily focuses on technologies such as RDF, SPARQL, and Python, and involves semantic data integration and cloud-based data engineering. The ideal candidate should have experience in the pharmaceutical or biotech industry, deep technical skills, proficiency with big data technologies, and demonstrated experience in semantic modeling. A deep understanding of data architecture and ETL processes is also essential for this role.

In this role, you will be responsible for constructing semantic data pipelines, integrating both relational and graph-based data sources, ensuring seamless data interoperability, and leveraging cloud platforms to scale data solutions effectively.

Roles & Responsibilities:
- Develop and maintain semantic data pipelines using Python, RDF, SPARQL, and linked data technologies
- Develop and maintain semantic data models for biopharma scientific data
- Integrate relational databases (SQL, PostgreSQL, MySQL, Oracle, etc.) with semantic frameworks
- Ensure interoperability across federated data sources, linking relational and graph-based data
- Implement and optimize CI/CD pipelines using GitLab and AWS
- Leverage cloud services (AWS Lambda, S3, Databricks, etc.) to support scalable knowledge graph solutions
- Collaborate with global multi-functional teams (research scientists, data architects, business SMEs, software engineers, and data scientists) to understand data requirements, design solutions, and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Collaborate with data scientists, engineers, and domain experts to improve research data accessibility
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Explore new tools and technologies to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions
- Harmonize research data to appropriate taxonomies, ontologies, and controlled vocabularies for context and reference knowledge

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master's degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics, or a related field, OR
- Bachelor's degree with 6-8 years of experience in one of those fields, OR
- Diploma with 10-12 years of experience in one of those fields

Preferred Qualifications and Experience:
- 6+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Must-Have Skills:
- Advanced semantic and relational data skills: proficiency in Python, RDF, SPARQL, graph databases (e.g., AllegroGraph), SQL, relational databases, ETL pipelines, big data technologies (e.g., Databricks), semantic data standards (OWL, W3C, FAIR principles), ontology development, and semantic modeling practices
- Cloud and automation expertise: good experience using cloud platforms (preferably AWS) for data engineering, along with Python for automation, data federation techniques, and model-driven architecture for scalable solutions
- Technical problem-solving: excellent problem-solving skills with hands-on experience in test automation frameworks (pytest), scripting tasks, and handling large, complex datasets

Good-to-Have Skills:
- Experience in biotech/drug discovery data engineering
- Experience applying knowledge graph, taxonomy, and ontology concepts in the life sciences and chemistry domains
- Experience with graph databases (AllegroGraph, Neo4j, GraphDB, Amazon Neptune)
- Familiarity with Cypher, GraphQL, or other graph query languages
- Experience with big data tools (e.g., Databricks)
- Experience in biomedical or life sciences research data management

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
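To make the RDF/SPARQL requirement above concrete, here is a minimal sketch of triple-pattern matching, the operation at the heart of SPARQL, using plain Python tuples instead of a real triple store such as rdflib or AllegroGraph. The URIs and data are invented for illustration.

```python
# Sketch of RDF-style triple-pattern matching with plain tuples.
# A real pipeline would use rdflib or a triple store; names here are made up.

EX = "http://example.org/"

triples = {
    (EX + "aspirin", EX + "type", EX + "Compound"),
    (EX + "aspirin", EX + "studiedIn", EX + "trial42"),
    (EX + "trial42", EX + "type", EX + "ClinicalTrial"),
}

def match(pattern, store):
    """Yield triples matching a (s, p, o) pattern; None acts like a SPARQL variable."""
    for t in store:
        if all(p is None or p == v for p, v in zip(pattern, t)):
            yield t

if __name__ == "__main__":
    # Analogue of: SELECT ?s WHERE { ?s ex:type ex:Compound }
    for s, _, _ in match((None, EX + "type", EX + "Compound"), triples):
        print(s)  # http://example.org/aspirin
```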

Posted 1 month ago

Apply

4.0 - 6.0 years

3 - 6 Lacs

Chennai, Tamil Nadu, India

On-site

Essential Skills/Experience:
- Hands-on experience with Neo4j and Cypher query development
- Solid grounding in RDF, OWL, SHACL, SPARQL, and standard semantic modeling methodologies
- Strong proficiency in Python (or an equivalent language) for automation, data transformation, and pipeline integration
- Demonstrated ability to define use cases, structure delivery backlogs, and manage technical execution
- Strong problem-solving and communication skills, with a delivery-focused mindset
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field (Master's preferred)

Desirable Skills/Experience:
- Experience with additional graph platforms such as GraphDB, Stardog, or Amazon Neptune
- Familiarity with Cognite Data Fusion, IoT/industrial data integration, or other large-scale operational data platforms
- Understanding of knowledge representation techniques and reasoning systems
- Exposure to AI/ML approaches using graphs or semantic features
- Knowledge of tools such as Protégé, TopBraid Composer, or VocBench
- Familiarity with metadata standards, data governance, and FAIR principles

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

The Team

The Okta platform provides directory services, single sign-on, strong authentication, provisioning, workflow, and built-in reporting. It runs in the cloud on a secure, reliable, extensively audited platform and integrates deeply with on-premises applications, directories, and identity management systems.

We are looking for an experienced Staff Software Engineer to work on our Onboarding and Lifecycle Management (LCM) Platform team, with a focus on enhancing and managing services for importing, syncing, and provisioning identities and access policies (users, groups, roles, entitlements, etc.). These features give customers the flexibility to link and enhance their business processes with Okta's identity management product. This role is to build, design solutions for, and maintain our platform at scale. The ideal candidate has experience building software systems to manage and deploy reliable, performant infrastructure and product code at scale on cloud infrastructure.

Job Duties and Responsibilities:
- Work with the senior engineering team on major development projects, design, and implementation
- Be a key contributor to the implementation of the LCM infrastructure
- Troubleshoot customer issues and debug from logs (Splunk, syslogs, etc.)
- Design and implement features with functional and unit tests, along with monitoring and alerts
- Conduct design and code reviews, analysis, and performance tuning
- Prototype quickly to validate scale and performance
- Provide technical leadership and mentorship to more junior engineers
- Interface with Architects, QA, Product Owners, Engineering Services, and Tech Ops
- Partner with our Product Development, QA, and Site Reliability Engineering teams to scope development and deployment work

Required Knowledge, Skills, and Abilities:
- 7+ years of software development in Java, preferably with significant experience with Hibernate and Spring Boot
- 5+ years of development experience building services, internal tools, and frameworks
- 2+ years of experience automating and deploying large-scale production services in AWS, GCP, or similar
- Deep understanding of infrastructure-level technologies: caching, stream processing, resilient architectures
- Experience working with relational databases (ideally MySQL or PostgreSQL) or graph databases
- Ability to work effectively with distributed teams and people of various backgrounds
- Ability to lead and mentor junior engineers

Education: B.S. in Computer Science or equivalent

Posted 1 month ago

Apply

1.0 - 4.0 years

2 - 6 Lacs

Mumbai, Pune, Chennai

Work from Office

A Graph Data Engineer is required for a complex supply chain project.

Key required skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and a graph language (Cypher), plus exposure to various graph data modelling techniques
- Experience with Neo4j Aura and optimizing complex queries
- Experience with GCP stacks such as BigQuery, GCS, and Dataproc
- Experience in PySpark and SparkSQL is desirable
- Experience exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI

The expertise you have:
- Bachelor's or Master's degree in a technology-related field (e.g., Engineering, Computer Science)
- Demonstrable experience implementing data solutions in the graph database space
- Hands-on experience with graph databases (Neo4j preferred, or any other)
- Experience tuning graph databases
- Understanding of graph data model paradigms (LPG, RDF) and a graph language; hands-on experience with Cypher is required
- Solid understanding of graph data modelling, graph schema development, and graph data design
- Relational database experience; hands-on SQL experience is required

Desirable (optional) skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), and API and in-memory technologies
- Understanding of developing highly scalable distributed systems using open-source technologies
- Experience with supply chain data is desirable but not essential

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
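As a sketch of the labeled property graph (LPG) paradigm this posting mentions, the snippet below models nodes with labels and properties plus typed relationships in plain Python. In Neo4j the equivalent lookup would be a Cypher MATCH; the classes and data here are illustrative only, not part of any real supply chain schema.

```python
# Minimal in-memory sketch of the LPG model: labeled nodes with properties,
# connected by typed relationships. Invented example data.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    props: dict = field(default_factory=dict)

@dataclass
class Rel:
    start: Node
    rel_type: str
    end: Node

def neighbours(node, rels, rel_type):
    """Nodes reachable from `node` over relationships of the given type."""
    return [r.end for r in rels if r.start is node and r.rel_type == rel_type]

if __name__ == "__main__":
    acme = Node("Supplier", {"name": "Acme"})
    widget = Node("Part", {"sku": "W-1"})
    rels = [Rel(acme, "SUPPLIES", widget)]
    # Cypher analogue: MATCH (s:Supplier {name:'Acme'})-[:SUPPLIES]->(p) RETURN p.sku
    print([p.props["sku"] for p in neighbours(acme, rels, "SUPPLIES")])  # ['W-1']
```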

Posted 1 month ago

Apply

4.0 - 6.0 years

4 - 6 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

About the role: You will play a key role in a regulatory submission content automation initiative that will modernize and digitize the regulatory submission process, positioning Amgen as a leader in regulatory innovation. The initiative uses state-of-the-art technologies, including Generative AI, structured content management, and integrated data, to automate the creation, review, and approval of regulatory content.

Role Description: The Sr Data Scientist is responsible for developing interconnected business information models and ontologies that capture the real-world meaning of data by studying the business, our data, and the industry. With a focus on pharmaceutical industry-specific data, including Clinical, Operations, and Chemistry, Manufacturing, and Controls (CMC), this role involves creating robust semantic models based on data-centric principles to realize a connected data ecosystem that empowers consumers. The Information Modeler drives seamless cross-functional data interoperability, enables efficient decision-making, and supports digital transformation in pharmaceutical operations.

Roles & Responsibilities:
- Lead conversations with business collaborators to elucidate semantic models of pharmaceutical business concepts, aligned definitions, and relationships
- Negotiate and debate across collaborators to drive alignment and create system-independent information models, taking a data-centric approach aligned with business data domains
- Develop comprehensive business information models and ontologies that capture industry-specific concepts, including CMC, Clinical, and Operations data
- Facilitate whiteboarding sessions with business subject matter experts to elicit knowledge, drive interoperability across pharmaceutical domains, and interface between data producers and consumers
- Educate peers on the practical use and differentiating value of Linked Data and FAIR+ data principles
- Champion standards for master data and reference data
- Formalize data models in RDF as OWL and SHACL ontologies that interoperate with each other and with relevant industry standards, such as FHIR and IDMP, for healthcare data exchange
- Build a broad semantic knowledge graph that threads data together across end-to-end business processes and enables the transformation to data-centricity and new ways of working
- Apply pragmatic semantic abstraction to simplify diverse pharmaceutical and healthcare data patterns effectively

Basic Qualifications:
- Doctorate degree, OR
- Master's degree and 4 to 6 years of Data Science experience, OR
- Bachelor's degree and 6 to 8 years of Data Science experience, OR
- Diploma and 10 to 12 years of Data Science experience

Must-Have Skills:
- Proven ability to lead and develop successful teams
- Strong problem-solving, analytical, and critical-thinking skills to address complex data challenges
- Deep understanding of pharmaceutical industry data, including CMC, Process Development, Manufacturing, Engineering Quality, Supply Chain, and Operations
- Advanced skills in semantic modeling, RDF, OWL, SHACL, and ontology development in TopBraid and/or Protégé
- Demonstrated experience creating knowledge graphs with semantic RDF technologies (e.g., Stardog, AllegroGraph, GraphDB, Neptune) and testing models with real data
- High proficiency with RDF, SPARQL, Linked Data concepts, and interacting with triple stores
- High proficiency at facilitating, capturing, and organizing collaborative discussions through tools such as Miro, Lucidspark, Lucidchart, and Confluence
- Expertise in FAIR data principles and their application in healthcare and pharmaceutical data models

Good-to-Have Skills:
- Experience in regulatory data modeling and compliance requirements in the pharmaceutical domain
- Familiarity with pharmaceutical lifecycle management (PLM) data, including product development and regulatory submissions
- Knowledge of supply chain and operations data modeling in the pharmaceutical industry
- Proficiency in integrating data from various sources, such as LIMS, EDC systems, and MES
- Hands-on data analysis and wrangling experience, including SQL-based data transformation and solving integration challenges arising from differences in data structure, meaning, or terminology
- Expertise in FHIR data standards and their application in healthcare and pharmaceutical data models

Soft Skills:
- Exceptional interpersonal, business analysis, facilitation, and communication skills
- Ability to interpret complex regulatory and operational requirements into data models
- Analytical thinking for problem-solving in a highly regulated environment
- Adaptability to manage and prioritize multiple projects in a dynamic setting
- Strong appreciation for customer- and user-centric product design thinking

Posted 2 months ago

Apply

7.0 - 10.0 years

16 - 30 Lacs

Noida, Lucknow

Work from Office

Hi, we have one urgent requirement for Python Developer roles; candidates must be able to join within a maximum of 30 days.

About HCL Software

HCLSoftware, a division of HCLTech, develops, markets, sells, and supports software for AI and Automation; Data, Analytics, and Insights; Digital Transformation; and Enterprise Security. HCLSoftware is the cloud-native solution factory for enterprise software and powers millions of apps at more than 20,000 organizations, including more than half of the Fortune 1000 and Global 2000 companies. HCLSoftware's mission is to drive ultimate customer success through relentless product innovation. Website: hcl-software.com

Job Title: Python Developer
Location: Noida
Experience Required: 7 to 10 years
Notice Period: serving notice and able to join within 2-4 weeks (early joiners only)

Job Description: We are seeking an experienced Python Developer with a strong background in application and product development to join our team in Noida. The ideal candidate will have extensive experience in Python programming, with a focus on building robust and scalable applications. We are specifically looking for professionals who have been involved in full-cycle product development, rather than those whose experience is limited to writing scripts for testing or automation purposes.

Key Responsibilities:
- Design, develop, and maintain Python-based applications and products
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the performance, quality, and responsiveness of applications
- Identify and correct bottlenecks and fix bugs
- Help maintain code quality, organization, and automation
- Participate in code reviews
- Work with other developers, designers, and stakeholders to build high-quality, innovative, and fully performing software

Requirements:
- 7 to 10 years of proven experience in Python development
- Strong understanding of Python programming and application development
- Hands-on experience in full-cycle application/product development and Agile/Scrum development methodology
- Solid understanding of software development principles, algorithms, and data structures
- Experience with Python frameworks such as Django, Flask, or FastAPI
- Proficient understanding of code versioning tools like Git
- Familiarity with databases (GraphDB/Neo4j) and cloud services is a plus
- Experience deploying applications in cloud environments (AWS, Azure, or GCP)
- Hands-on experience with containerization technologies like Docker
- Familiarity with orchestration tools such as Kubernetes for deploying and managing microservices
- Excellent problem-solving skills and attention to detail
- Good communication and collaboration skills

Thanks & Regards,
Syed Hasan Abbas (He/Him)
Senior Executive HR | HCL Software | TAG
LinkedIn: www.linkedin.com/in/hasan-abbas

Posted 2 months ago

Apply