5 - 7 years
0 - 0 Lacs
Chennai
Work from Office
Job Title: Lead I - Software Engineering
Hiring Location: Mumbai/Chennai/Gurgaon

Job Summary: We are seeking a Lead I in Software Engineering with 4 to 7 years of experience in software development or software architecture. The ideal candidate will possess a strong background in Angular and Java, with the ability to lead a team and drive technical projects. A Bachelor's degree in Engineering or Computer Science, or equivalent experience, is required.

Responsibilities:
- Interact with technical personnel and team members to finalize requirements.
- Write and review detailed specifications for the development of system components of moderate complexity.
- Collaborate with QA and development team members to translate product requirements into software designs.
- Implement development processes and coding best practices, and conduct code reviews.
- Operate in various development environments (Agile, Waterfall) while collaborating with key stakeholders.
- Resolve technical issues as necessary.
- Perform all other duties as assigned.

Must-Have Skills:
- Strong proficiency in Angular 1.x (70% Angular and 30% Java, or 50% Angular and 50% Java).
- Java/J2EE; familiarity with the Singleton and MVC design patterns.
- Strong proficiency in SQL and/or MySQL, including optimization techniques (at least MySQL).
- Experience using tools such as Eclipse, Git, Postman, JIRA, and Confluence.
- Knowledge of test-driven development.
- Solid understanding of object-oriented programming.

Good-to-Have Skills:
- Expertise in Spring Boot, microservices, and API development.
- Familiarity with OAuth 2.0 patterns (experience with at least two patterns).
- Knowledge of graph databases (e.g., Neo4j, Apache TinkerPop, Gremlin).
- Experience with Kafka messaging.
- Familiarity with Docker, Kubernetes, and cloud development.
- Experience with CI/CD tools like Jenkins and GitHub Actions.
- Knowledge of industry-wide technology trends and best practices.

Experience Range: 4 to 7 years of relevant experience in software development or software architecture.
Education: Bachelor's degree in Engineering, Computer Science, or equivalent experience.

Additional Information:
- Strong communication skills, both oral and written.
- Ability to interface competently with internal and external technology resources.
- Advanced knowledge of software development methodologies (Agile, etc.).
- Experience in setting up and maintaining distributed applications in Unix/Linux environments.
- Ability to complete complex bug fixes and support production issues.

Required Skills: Angular 1.x, Java 11+, SQL
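The must-have list above names the Singleton design pattern. As a rough illustration of the pattern's intent (sketched in Python for brevity, rather than the Java this role centres on), repeated construction should hand back one shared instance:

```python
# Minimal, thread-safe Singleton sketch; illustrative only.
import threading

class ConfigRegistry:
    """Shared configuration object: constructing it twice yields one instance."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        if cls._instance is None:          # fast path, no lock taken
            with cls._lock:                # double-checked locking
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance.settings = {}
        return cls._instance

a = ConfigRegistry()
b = ConfigRegistry()
assert a is b  # both names refer to the same shared instance
```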
Posted 2 months ago
3 - 6 years
10 - 20 Lacs
Pune
Remote
Rudder Analytics is looking for Data Engineers (Data ETL/Talend/DB/Cloud) at Pune, with 3-6 years of experience. Informatica experience will not be considered for this role. Please see details at https://bit.ly/3ZLheEo for job code ED-SA-01.

Required Candidate Profile:
- Ability to lead a team and manage projects independently.
- An eye for detail and great problem-solving skills.
- Ability to thrive in the fast-paced and demanding environment of a start-up.
Posted 2 months ago
3 - 7 years
4 - 7 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do
Role Description: We are seeking a Senior Data Engineer with expertise in Graph Data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you
Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, MarkLogic, etc.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
- Deep expertise in the Biotech & Pharma industries.
- Experience in writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master’s degree and 3 to 4+ years of Computer Science, IT or related field experience, OR
- Bachelor’s degree and 5 to 8+ years of Computer Science, IT or related field experience.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, and to be organized and detail oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients.
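The responsibilities above centre on Databricks pipelines that prepare graph data. A minimal PySpark/Delta Lake sketch of one such step, deriving an edge table from a fact table, might look like the following; the paths, column names, and the orders-to-edges model are illustrative assumptions, and Delta Lake requires the delta-spark package to be configured:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("graph-edge-pipeline").getOrCreate()

# Hypothetical bronze-layer fact table of customer orders.
orders = spark.read.format("delta").load("/lake/bronze/orders")

# Derive CUSTOMER -[PLACED]-> PRODUCT edges, with a weight column for
# downstream graph loads (e.g., into Neo4j or Stardog).
edges = (orders
         .groupBy("customer_id", "product_id")
         .agg(F.count("*").alias("times_ordered"))
         .withColumnRenamed("customer_id", "src")
         .withColumnRenamed("product_id", "dst"))

# Persist the edge table to the silver layer for the graph loader to pick up.
edges.write.format("delta").mode("overwrite").save("/lake/silver/placed_edges")
```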
Posted 2 months ago
4 - 6 years
4 - 7 Lacs
Hyderabad
Work from Office
Senior Data Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role you will contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a solid background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Lead the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 4 to 6 years of Computer Science, IT or related field experience, OR
- Bachelor’s degree and 6 to 8 years of Computer Science, IT or related field experience.

Preferred Qualifications:
Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, MarkLogic, etc.
- Good understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning for big data processing.
- Good understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Preferred Certifications:
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.
- Scaled Agile SAFe certification preferred.

Good-to-Have Skills:
- Deep expertise in the Biotech & Pharma industries.
- Experience in writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, and to be organized and detail oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 months ago
12 - 17 years
14 - 19 Lacs
Pune, Bengaluru
Work from Office
Project Role: Application Architect
Project Role Description: Provide functional and/or technical expertise to plan, analyze, define and support the delivery of future functional and technical capabilities for an application or group of applications. Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
Must-have skills: Manufacturing Operations
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: BTech/BE

Job Title: Industrial Data Architect
Summary: We are seeking a highly skilled and experienced Industrial Data Architect with a proven track record of providing functional and/or technical expertise to plan, analyse, define and support the delivery of future functional and technical capabilities for an application or group of applications. Well versed in OT data quality, data modelling, data governance, data contextualization, database design, and data warehousing.

Must-have skills: Domain knowledge in Manufacturing IT/OT in one or more of the following verticals: Automotive, Discrete Manufacturing, Consumer Packaged Goods, Life Science.

Key Responsibilities:
- The Industrial Data Architect will be responsible for developing and overseeing the industrial data architecture strategies to support advanced data analytics, business intelligence, and machine learning initiatives. This role involves collaborating with various teams to design and implement efficient, scalable, and secure data solutions for industrial operations.
- Focus on designing, building, and managing the data architecture of industrial systems.
- Assist in facilitating impact assessment efforts and in producing and reviewing estimates for client work requests.
- Own the offerings and assets on key components of the data supply chain: data governance, curation, data quality and master data management, data integration, data replication, and data virtualization.
- Create scalable and secure data structures, integrating with existing systems and ensuring efficient data flow.

Qualifications:
- Data Modeling and Architecture:
  - Proficiency in data modeling techniques (conceptual, logical, and physical models).
  - Knowledge of database design principles and normalization.
  - Experience with data architecture frameworks and methodologies (e.g., TOGAF).
- Database Technologies:
  - Relational databases: expertise in SQL databases such as MySQL, PostgreSQL, Oracle, and Microsoft SQL Server.
  - NoSQL databases: experience with at least one NoSQL database such as MongoDB, Cassandra, or Couchbase for handling unstructured data.
  - Graph databases: proficiency with at least one graph database such as Neo4j, Amazon Neptune, or ArangoDB; understanding of graph data models, including property graphs and RDF (Resource Description Framework).
  - Query languages: experience with at least one query language such as Cypher (Neo4j), SPARQL (RDF), or Gremlin (Apache TinkerPop); familiarity with ontologies, RDF Schema, and OWL (Web Ontology Language); exposure to semantic web technologies and standards.
- Data Integration and ETL (Extract, Transform, Load):
  - Proficiency in ETL tools and processes (e.g., Talend, Informatica, Apache NiFi).
  - Experience with data integration tools and techniques to consolidate data from various sources.
- IoT and Industrial Data Systems:
  - Familiarity with Industrial Internet of Things (IIoT) platforms and protocols (e.g., MQTT, OPC UA).
  - Experience with an IoT data platform such as AWS IoT, Azure IoT Hub, or Google Cloud IoT Core.
  - Experience working with one or more streaming data platforms such as Apache Kafka, Amazon Kinesis, or Apache Flink.
  - Ability to design and implement real-time data pipelines; familiarity with processing frameworks such as Apache Storm, Spark Streaming, or Google Cloud Dataflow.
  - Understanding of event-driven design patterns and practices; experience with message brokers like RabbitMQ or ActiveMQ.
  - Exposure to edge computing platforms like AWS IoT Greengrass or Azure IoT Edge.
- AI/ML, GenAI:
  - Experience working on data readiness for feeding into AI/ML/GenAI applications.
  - Exposure to machine learning frameworks such as TensorFlow, PyTorch, or Keras.
- Cloud Platforms:
  - Experience with cloud data services from at least one provider, such as AWS (Amazon Redshift, AWS Glue), Microsoft Azure (Azure SQL Database, Azure Data Factory), or Google Cloud Platform (BigQuery, Dataflow).
- Data Warehousing and BI Tools:
  - Expertise in data warehousing solutions (e.g., Snowflake, Amazon Redshift, Google BigQuery).
  - Proficiency with Business Intelligence (BI) tools such as Tableau, Power BI, and QlikView.
- Data Governance and Security:
  - Understanding of data governance principles, data quality management, and metadata management.
  - Knowledge of data security best practices, compliance standards (e.g., GDPR, HIPAA), and data masking techniques.
- Big Data Technologies:
  - Experience with big data platforms and tools such as Hadoop, Spark, and Apache Kafka.
  - Understanding of distributed computing and data processing frameworks.
- Excellent communication: superior written and verbal communication skills, with the ability to effectively articulate complex technical concepts to diverse audiences.
- Problem-solving acumen: a passion for tackling intricate challenges and devising elegant solutions.
- Collaborative spirit: a track record of successful collaboration with cross-functional teams and stakeholders.
- Certifications: an AWS Certified Data Engineer - Associate / Microsoft Certified: Azure Data Engineer Associate / Google Cloud Certified Professional Data Engineer certification is mandatory.
- Minimum of 14-18 years of progressive information technology experience.

Qualifications: BTech/BE
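Since the qualifications above call out RDF, ontologies, and SPARQL, here is a small hedged sketch using Python's rdflib; the plant/asset triples are invented purely for illustration:

```python
from rdflib import Graph, Namespace, Literal

# A hypothetical namespace for factory assets.
EX = Namespace("http://example.org/plant#")

g = Graph()
g.add((EX.Press01, EX.locatedIn, EX.LineA))   # subject, predicate, object
g.add((EX.Press01, EX.hasStatus, Literal("running")))

# SPARQL: find every asset on LineA together with its status.
results = g.query("""
    PREFIX ex: <http://example.org/plant#>
    SELECT ?asset ?status WHERE {
        ?asset ex:locatedIn ex:LineA ;
               ex:hasStatus ?status .
    }
""")
for asset, status in results:
    print(asset, status)
```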
Posted 2 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Neo4j
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Implement Neo4j database solutions.
- Optimize application performance.

Professional & Technical Skills:
- Must-have skills: proficiency in Neo4j.
- Strong understanding of graph databases.
- Experience with data modeling.
- Knowledge of the Cypher query language.
- Hands-on experience in application development using Neo4j.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Neo4j.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education
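For the Cypher and Neo4j skills this role asks for, a small sketch with the official neo4j Python driver (v5 API) shows the kind of idempotent write an application developer would implement; the connection details and the Person/Project model are assumptions:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # assumed credentials

def link_person_to_project(tx, person, project):
    # MERGE makes the write idempotent: re-running creates no duplicates.
    tx.run(
        "MERGE (p:Person {name: $person}) "
        "MERGE (j:Project {name: $project}) "
        "MERGE (p)-[:WORKS_ON]->(j)",
        person=person, project=project,
    )

with driver.session() as session:
    session.execute_write(link_person_to_project, "Asha", "Atlas")
driver.close()
```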
Posted 2 months ago
3 - 8 years
0 - 0 Lacs
Chennai, Bengaluru, Kolkata
Hybrid
Risk Data Engineer/Leads

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise and from external sources, and deliver it to our data platform.
- Build Extract, Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatic data manipulation throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service and customer along said data flow.
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/Must-Have skills:
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (DB: PL/SQL).
- At least 4+ years of experience in database design and dimension modelling using Oracle PL/SQL.
- Experience working with advanced PL/SQL concepts (materialized views, global temporary tables, partitions, PL/SQL packages).
- Experience in SQL tuning, tuning of PL/SQL solutions, and physical optimization of databases.
- Experience in writing and tuning SQL scripts, including tables, views, indexes and complex PL/SQL objects (procedures, functions, triggers and packages) in Oracle Database 11g or higher.
- Experience in developing ETL processes: ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc.
- Advanced working SQL knowledge and experience working with relational and NoSQL databases, as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Strong understanding of ETL methodologies and best practices.
- Collaborate with cross-functional teams to ensure successful implementation of solutions.
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points.

Good to have:
- Experience working in the Financial Crime, Financial Risk and Compliance technology transformation domains.
- Certification on any cloud tech stack.
- Experience building and optimizing data pipelines on AWS Glue or Oracle Cloud.
- Design and development of systems for the maintenance of the Azure/AWS lakehouse, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.
- Experience with data visualization (Power BI/Tableau) and SSRS.
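As a hedged illustration of the advanced Oracle features listed above (partitioned tables, materialized views), the following sketch issues the corresponding DDL through the python-oracledb driver; the credentials, DSN, and table names are assumptions:

```python
import oracledb

conn = oracledb.connect(user="risk", password="secret", dsn="localhost/XEPDB1")
cur = conn.cursor()

# An interval-partitioned fact table: Oracle creates monthly partitions on demand.
cur.execute("""
    CREATE TABLE txn_fact (
        txn_id    NUMBER,
        txn_date  DATE,
        amount    NUMBER
    )
    PARTITION BY RANGE (txn_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'))
""")

# A materialized view pre-aggregating monthly totals for reporting workloads.
cur.execute("""
    CREATE MATERIALIZED VIEW txn_monthly
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND AS
    SELECT TRUNC(txn_date, 'MM') AS month, SUM(amount) AS total
    FROM txn_fact GROUP BY TRUNC(txn_date, 'MM')
""")
conn.close()
```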
Posted 2 months ago
3 - 6 years
0 - 0 Lacs
Chennai, Bengaluru, Kolkata
Hybrid
Risk Data Engineer/Leads

Job Description:
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 3-6 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Develop a thorough understanding of the data science lifecycle, including data exploration, preprocessing, modelling, validation, and deployment.
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning.
- Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/Must-Have skills:
- Significant data analysis experience using Python, SQL and Spark.
- Experience with Python or Spark for writing scripts for data transformation, integration and automation tasks.
- 3-6 years of experience with cloud ML (AWS) or similar tools.
- Design, build, and maintain tree-based predictive models, such as decision trees, random forests, and gradient-boosted trees, with a low-level understanding of their algorithms and functioning.
- Strong experience with statistical analytical techniques, data mining and predictive models.
- Conduct A/B testing and other model validation techniques to ensure the accuracy and reliability of data models.
- Experience with optimization modelling, machine learning, forecasting and/or natural language processing.
- Hands-on experience with Amazon S3 data storage, data lifecycle policies, and integration with other AWS services.
- Maintain, optimize, and scale AWS Redshift clusters to ensure efficient data storage, retrieval, and query performance.
- Utilize Amazon S3 to store raw data, manage large datasets, and integrate with other AWS services to ensure secure, scalable, and cost-effective data solutions.
- Experience implementing CI/CD pipelines in AWS.
- At least 4+ years of experience in database design and dimension modelling using SQL.
- Advanced working SQL knowledge and experience working with relational and NoSQL databases, as well as working familiarity with a variety of databases (SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Strong communication skills to effectively collaborate with team members and present findings to stakeholders.
- Collaborate with cross-functional teams to ensure successful implementation of solutions.
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points.
Good to have:
- Domain knowledge (if applicable) in financial fraud to enhance predictive modelling and anomaly detection capabilities.
- Knowledge of AWS IAM for managing secure access to data resources.
- Familiarity with DevOps practices and automation tools like Terraform or CloudFormation.
- Experience with data visualization tools like QuickSight, or integrating Redshift data with BI tools (Tableau, Power BI, etc.).
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect are a plus.
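The posting's emphasis on tree-based predictive models can be made concrete with a short scikit-learn sketch; the synthetic, imbalanced dataset below merely stands in for real risk data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic, heavily imbalanced labels, mimicking a fraud-style target.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" compensates for the rare positive class.
model = RandomForestClassifier(n_estimators=300,
                               class_weight="balanced", random_state=0)
model.fit(X_tr, y_tr)

# AUC is a reasonable first validation check on an imbalanced label.
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```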
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Salem
Work from Office
Azure Data Factory ETL Consultant (Pharma experience is a MUST)

Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
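One of the skills above, streaming ingestion from Event Hubs, can be sketched with Spark Structured Streaming against the Event Hubs Kafka-compatible endpoint; the namespace, topic, and paths are assumptions, and the SASL JAAS configuration is omitted for brevity:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("eventhub-ingest").getOrCreate()

# Event Hubs exposes a Kafka-compatible endpoint on port 9093.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers",
                  "mynamespace.servicebus.windows.net:9093")  # assumed namespace
          .option("subscribe", "telemetry")                   # assumed event hub
          .option("kafka.security.protocol", "SASL_SSL")
          .load())

# Land the raw payloads in the bronze layer for downstream curation.
query = (stream.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream.format("delta")
         .option("checkpointLocation", "/lake/_chk/telemetry")
         .start("/lake/bronze/telemetry"))
query.awaitTermination()
```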
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Kota
Work from Office
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Nasik
Work from Office
Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Agra
Work from Office
Job Requirement - Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Ludhiana
Work from Office
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
- Pharma experience is a MUST.
Posted 2 months ago
3 - 5 years
10 - 14 Lacs
Hyderabad
Work from Office
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What you will do
Let’s do this. Let’s change the world. In this vital role you will be at the forefront of innovation, using your skills to design and implement pioneering AI/Gen AI solutions. With an emphasis on creativity, collaboration, and technical excellence, this role provides a unique opportunity to work on ground-breaking projects that enhance operational efficiency at the Amgen Technology and Innovation Centre while ensuring the protection of critical systems and data.

Roles & Responsibilities:
- Design, develop, and deploy Gen AI solutions using advanced LLMs like the OpenAI API and open-source LLMs (Llama 2, Mistral, Mixtral), and frameworks like LangChain and Haystack.
- Design and implement AI & GenAI solutions that drive productivity across all roles in the software development lifecycle.
- Demonstrate the ability to rapidly learn the latest technologies and develop a vision for embedding the solution to improve operational efficiency within a product team.
- Collaborate with multi-functional teams (product, engineering, design) to set project goals, identify use cases, and ensure seamless integration of Gen AI solutions into current workflows.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 1 to 3 years of experience with programming languages such as Java and Python, OR
- Bachelor’s degree and 3 to 5 years of experience with programming languages such as Java and Python, OR
- Diploma and 7 to 9 years of experience with programming languages such as Java and Python.

Preferred Qualifications:
- Proficiency in programming languages such as Python and Java.
- Advanced knowledge of the Python open-source software stack, such as Django or Flask, Django REST or FastAPI, etc.
- Experience working with RAG technologies and LLM frameworks, LLM model registries (Hugging Face), LLM APIs, embedding models, and vector databases.
- Familiarity with cloud security (AWS/Azure/GCP).
- Expertise in integrating and demonstrating Gen AI LLMs to maximize operational efficiency.

Good-to-Have Skills:
- Experience with graph databases (Neo4j and Cypher would be a big plus).
- Experience with prompt engineering; familiarity with frameworks such as DSPy would be a big plus.

Professional Certifications: AWS / GCP / Databricks

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
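The RAG experience this posting asks about boils down to retrieving relevant context before calling an LLM. Here is a deliberately minimal sketch of the retrieval half, using sentence-transformers and brute-force cosine similarity in place of a real vector database; the documents are invented:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = [
    "Reset a workstation password via the self-service portal.",
    "Deployment pipelines run on merge to the main branch.",
    "Expense reports are approved within five business days.",
]
# Normalized embeddings make dot product equal cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]   # context to prepend to the LLM prompt

print(retrieve("How do I change my password?"))
```

In a production pipeline the list of documents would live in a vector store (Pinecone, FAISS, etc.) and the retrieved passages would be stitched into the prompt sent to the LLM.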
Posted 2 months ago
3 - 5 years
8 - 14 Lacs
Bengaluru
Work from Office
Artificial Intelligence/Machine Learning Engineer - Neo4j/Deep Learning

Role: AI/ML Engineer
Location: Bangalore
Experience: 3-4 Years
Joining: Immediate (Preferred)

Key Responsibilities:
- Design, develop, and deploy AI/ML models for data-driven applications.
- Work with Neo4j (graph database) to model and analyze complex relationships in datasets.
- Implement machine learning algorithms for recommendation systems, predictive analytics, and knowledge graphs.
- Develop and optimize graph-based data pipelines and algorithms using Neo4j Cypher queries.
- Process, clean, and analyze large datasets for feature engineering and model training.
- Integrate ML models into scalable applications using Python, TensorFlow, PyTorch, or Scikit-learn.
- Optimize model performance, accuracy, and scalability for production deployment.
- Collaborate with data scientists, backend engineers, and product teams to enhance AI capabilities.

Required Skills:
- Machine Learning & Deep Learning: hands-on with ML models, classification, clustering, NLP.
- Neo4j & Graph Databases: experience in graph data modeling, Cypher queries, and data relationships.
- Programming: proficiency in Python, TensorFlow, PyTorch, Scikit-learn.
- Data Engineering: experience with ETL pipelines, data preprocessing, and feature engineering.
- Big Data & Cloud: exposure to Spark, Kafka, AWS/GCP/Azure (good to have).
- MLOps & Deployment: working knowledge of Docker, Kubernetes, CI/CD pipelines.
- Strong analytical & problem-solving skills for AI-driven applications.

Good to Have:
- Experience with Graph Neural Networks (GNNs) and Graph Data Science.
- Knowledge of NLP, Computer Vision, or Time-Series Analysis.
- Exposure to automated ML pipelines and model monitoring.
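Bridging the two skill areas above, graph data and ML, often starts with extracting graph features for model training. A hedged sketch pulling node out-degrees from Neo4j into pandas follows; the User/FOLLOWS schema and connection details are assumptions:

```python
import pandas as pd
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # assumed credentials

with driver.session() as session:
    # Out-degree per user; OPTIONAL MATCH keeps users with zero edges.
    rows = session.run(
        "MATCH (u:User) "
        "OPTIONAL MATCH (u)-[r:FOLLOWS]->() "
        "RETURN u.id AS user_id, count(r) AS out_degree"
    ).data()
driver.close()

# Graph-derived features, ready to join with tabular data for model training.
features = pd.DataFrame(rows)
print(features.head())
```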
Posted 2 months ago
2 - 7 years
8 - 15 Lacs
Bengaluru
Work from Office
Responsibilities:
- 5+ years of working experience.
- Working knowledge, with project experience, of a graph DB (Neo4j, TigerDB or ArangoDB).
- Background in creating & maintaining data pipelines using Spark/PySpark.
- Working with staged environments (i.e. development, integration/testing and production).
- Usage of at least one NoSQL database engine and relational database technology.
- Domain knowledge of Bill of Materials (BOM) is a plus.

Technical and Professional Requirements:
- Mandatory: Graph DB (Neo4j, TigerDB or ArangoDB), 3-5 years of experience; Spark, 3-5 years of experience.
- Required: NoSQL, 2-5 years of experience; PySpark, 1 year of experience; Data Science, 1 year of experience.
- Nice to have: Oracle PL/SQL, Python, Bill of Materials (BOM) domain knowledge.

Preferred Skills:
- Technology -> Big Data - NoSQL -> Graph Databases
- Technology -> Big Data -> Neo4j

Educational Requirements: MBA, MSc, MTech, Bachelor of Engineering, BCA, BE, BSc, BTech
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements
Posted 2 months ago
8 - 11 years
27 - 30 Lacs
Hyderabad
Work from Office
Position Overview:
The job profile for this position is Software Engineering Associate Advisor. Excited to grow your career?

The Software Engineering Associate Advisor will be responsible for supporting software development across multiple teams utilizing appropriate design methodologies. Possesses knowledge of the activities, tasks, practices and deliverables for assessing and documenting business opportunities and the benefits, risks, and success factors of potential applications. Helps convert business requirements and logical models into technical application designs using application design activities, tools and techniques. Executes application testing strategies and tactics to ensure software quality throughout all stages of application development.

Responsibilities:
- Works within an agile team to develop, test, and maintain business applications built on Salesforce technologies.
- Reads user stories and develops solutions to simple design problems.
- Prepares necessary reports, manuals and other documentation as needed.
- Designs, develops, and unit tests applications in accordance with established standards.
- Participates in peer reviews of solution designs and related code and configurations.
- Supports packaging and deployment of releases.
- Develops, refines, and tunes integrations between applications.
- Analyzes and resolves technical and application problems.
- Adheres to high-quality development principles while delivering solutions on time and on budget.
- Provides third-level support to business users.
- Monitors the performance of internal systems.
- Attends scrum ceremony meetings and design sessions.

Qualifications - Required Skills:
- Proven knowledge of coding languages and frameworks such as Java, Spring Boot, and React JS.
- Graph DB: implementation experience with Neptune or Neo4j is required.
- AWS Practitioner certification is required.
- NoSQL DBs: MongoDB implementation experience is required.
- Solid understanding of object-oriented programming concepts.
- Solid understanding of relational database design and querying concepts.
- Detail-oriented and goal-oriented.
- Proficient in Microsoft Office.
- Able to adapt to a fast-paced work environment.
- Able to work independently and as part of a team.
- Familiarity with version control concepts.
- Ideally, exposure to Salesforce and Cloud technologies.
- Knowledge of agile development methodologies.
- Knowledge of unit testing.

Required Experience & Education:
- College or University degree in Computer Science or a related discipline.
- 8 to 11 years of work experience in software or web front-end and back-end service development.

Location & Hours of Work:
Full-time position, working 40 hours per week, with expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days WFO and 2 days WAH).
Posted 2 months ago
4 - 7 years
7 - 11 Lacs
Maharashtra
Work from Office
Sound Telecom Assurance domain experience for Mobile and Fixed technology. Must have hands-on experience with NAC or similar Assurance COTS products like Nokia NAC Assurance, ServiceNow, etc. 4+ years of experience in Java, Python, Apache NiFi, Grafana, cloud platforms, microservices, Neo4j database scripts, REST APIs, and SPOG dashboards. Strong experience with Assurance processes such as service impact analysis, closed-loop assurance, automation, incident creation, the incident process, anomaly detection, and the complete end-to-end Assurance process.
Posted 2 months ago
3 - 8 years
13 - 23 Lacs
Chennai, Pune, Bengaluru
Work from Office
Experience: 3+ years
Rate: 23 LPA
Location: Pune (Hinjewadi), Bangalore (Adugodi), Chennai (Sholinganallur)
Mandate skills: Database developer, Neo4j, MongoDB

We're seeking a passionate and innovative Neo4j Developer with MongoDB experience to join our dynamic team. You'll play an important role in building the future of our data-driven organization and make a significant contribution to realizing our ambitious goals.

What You'll Do:
1. Develop, deploy, and manage Neo4j graph database systems to power our tech-driven solutions.
2. Leverage your MongoDB experience to optimize our database structures, tracking, and functioning to ensure seamless business operations.
3. Participate in the iterative development of a well-defined data model, and design coherent, functional data constructs.
4. Maintain a strong focus on data normalization and node data relationships, ensuring top-notch data integrity and consistency.
5. Implement efficient coding methodologies and best practices; provide technical guidance and problem-solving expertise to the team.
6. Collaborate extensively with software engineers and data scientists to integrate Neo4j solutions within our broader tech stack and data ecosystems.
7. Contribute to data security and recovery measures, data transformation, and migration tasks.

What You'll Bring:
1. Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
2. Deep expertise in Neo4j and MongoDB databases.
3. Solid understanding of NoSQL database models and graph algorithms.
4. Exceptional programming skills in languages like Python, Java, and JavaScript.
5. Understanding of data constraints, including ACID versus BASE properties.
6. Excellent troubleshooting skills and a firm grasp of data management principles.
7. Familiarity with Agile and Scrum methodologies.
8. Strong communication skills, both written and verbal, with the ability to articulate complex concepts to non-technical team members.
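Because the role pairs MongoDB with Neo4j, a common task is projecting documents into the graph. Here is a hedged sketch using pymongo and the neo4j driver, with an UNWIND batch so re-runs stay idempotent; the database, collection, and label names are invented:

```python
from pymongo import MongoClient
from neo4j import GraphDatabase

# Assumed local instances and a hypothetical shop/orders collection.
mongo = MongoClient("mongodb://localhost:27017")
docs = list(mongo["shop"]["orders"].find({}, {"customer": 1, "sku": 1, "_id": 0}))

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # One batched statement: UNWIND iterates the rows server-side,
    # and MERGE keeps the load idempotent if the job is re-run.
    session.run(
        "UNWIND $rows AS row "
        "MERGE (c:Customer {name: row.customer}) "
        "MERGE (p:Product {sku: row.sku}) "
        "MERGE (c)-[:BOUGHT]->(p)",
        rows=docs,
    )
driver.close()
```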
Posted 2 months ago
2 - 5 years
8 - 15 Lacs
Chennai
Work from Office
Responsibilities:
- Lead the integration of Perplexity.ai and ChatGPT APIs for real-time chatbot interactions.
- Design and implement intent extraction and NLP models for conversational AI.
- Design and implement RAG (Retrieval-Augmented Generation) pipelines for context-aware applications.
- Guide junior developers and ensure best practices in LLM application development.
- Set up Python API security and performance monitoring.

Requirements:
- Deep knowledge of NLP and LLMs (ChatGPT, GPT-4, Perplexity.ai, LangChain).
- Experience with graph databases (Neo4j) and RAG techniques.
- Familiarity with vector databases (Pinecone, Weaviate, FAISS).
- Strong expertise in Python (FastAPI, Flask) and/or Node.js (Express, NestJS).
- Good to have: experience with containerization (Docker, Kubernetes).
- Excellent problem-solving skills and experience leading a small AI/ML team.
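Given the Python (FastAPI) expertise requested above, a minimal chat-endpoint skeleton might look like this; answer_with_rag() is a placeholder for whatever retrieval-plus-LLM pipeline the team builds:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

def answer_with_rag(question: str) -> str:
    # Placeholder: retrieve context, assemble the prompt, call the LLM.
    return f"(stub) You asked: {question}"

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # Request validation is handled by the pydantic model above.
    return {"answer": answer_with_rag(req.question)}

# Run locally with: uvicorn app:app --reload   (assuming this file is app.py)
```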
Posted 2 months ago
3 - 6 years
5 - 8 Lacs
Gurgaon
Work from Office
We are looking for Data Engineers who like to innovate and tackle complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Design is an iterative process, whether for UX, services or infrastructure. Our goal is to drive modernization and improve application capabilities.

Job Responsibilities:
As a Data Engineer, you will be joining our Data Engineering & Modernization team, transforming our global financial network and improving the data products and services we provide to our internal customers. This team will leverage cutting-edge data engineering and modernization techniques to develop scalable solutions for managing data and building data products. In this role, you are expected to be involved from the inception of projects to understand requirements, and to architect, develop, deploy, and maintain data. You will work in a multi-disciplinary, agile squad, which involves partnering with program and product managers to expand the product offering based on business demands. A focus on speed to market, getting data products and services into the hands of our stakeholders, and a passion for transforming the financial industry are key to the success of this role. You will maintain a positive and collaborative working relationship with teams within the NCR Atleos technology organization, as well as with the wider business. Creative and inventive problem-solving skills for reduced turnaround times are required, valued, and will be a major part of the job.

An ideal candidate would have:
- BA/BS in Computer Science or equivalent practical experience.
- Experience applying machine learning and AI techniques to modernizing data and reporting use cases.
- Overall 3+ years of experience on data analytics or data warehousing projects.
- At least 2+ years of cloud experience on AWS/Azure/GCP (preferably Azure: Microsoft Azure, ADF, Synapse).
- Programming in Python and PySpark, with experience using pandas, ML libraries, etc.
- Data streaming with Flink/Spark Structured Streaming.
- Open-source orchestration frameworks like DBT, ADF, Airflow.
- Open-source data ingestion frameworks like Airbyte, Debezium.
- Experience migrating from traditional on-prem OLTP/OLAP databases to cloud-native DBaaS and/or NoSQL databases like Cassandra, Neo4j, MongoDB, etc.
- Deep expertise operating in a cloud environment, and with cloud-native databases like Cosmos DB, Couchbase, etc.
- Proficiency in various data modelling techniques, such as ER, hierarchical, relational, or NoSQL modelling.
- Excellent design, development, and tuning experience with SQL (OLTP and OLAP) and NoSQL databases.
- Experience with modern database DevOps tools like Liquibase, Redgate Flyway or DBmaestro.
- Deep understanding of data security and compliance, and the related architecture.
- Deep understanding of, and strong administrative experience with, distributed data processing frameworks such as Hadoop, Spark, and others.
- Experience with programming languages like Python, Java, Scala, and machine learning libraries.
- Experience with DevOps tools like Git, Maven, Jenkins, GitHub Actions, Azure DevOps.
- Experience with Agile development concepts and related tools.
- Ability to tune and troubleshoot performance issues across the codebase and database queries.
- Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative data solutions.
- Excellent written and verbal communication skills, with the ability to effectively convey complex technical concepts to a diverse audience.
- Passion for learning with a proactive mindset, and the ability to work independently and collaboratively in a fast-paced, dynamic environment.

Additional Skills:
- Leverage machine learning and AI techniques for operationalizing data pipelines and building data products.
- Provide data services using APIs.
- Containerize data products and services using Kubernetes and/or Docker.
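For the orchestration frameworks listed above, a minimal Airflow DAG gives the flavor of the work; the task bodies are placeholders, and the schedule and DAG id are assumptions (Airflow 2.4+ syntax):

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull from source systems")      # placeholder for real ingestion

def transform():
    print("curate into reporting tables")  # placeholder for real transforms

with DAG(
    dag_id="daily_data_products",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs only after ingest succeeds
```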
Posted 2 months ago
6 - 11 years
15 - 25 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a data lake with technologies like Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB and Purview.
- Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc.
- Strong programming & debugging skills in Python and/or Scala/Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding of and experience using CI/CD with Git, Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with / exposure to NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E from a reputed institute preferred.
- Pharma experience is a MUST.
Posted 2 months ago
3 - 5 years
9 - 13 Lacs
Pune
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Neo4j
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Minimum 15 years of full-time education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, and utilizing Neo4j to develop and maintain the data platform.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain the data platform utilizing Neo4j.
- Ensure the data platform is scalable, reliable, and secure.
- Troubleshoot and resolve any issues related to the data platform.

Professional & Technical Skills:
- Must-have skills: experience with Neo4j.
- Strong understanding of data platform components and architecture.
- Experience with data modeling and database design.
- Experience with ETL processes and tools.
- Experience with cloud-based data platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience with Neo4j.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.

Qualification: Minimum 15 years of full-time education
Posted 2 months ago
3 - 5 years
9 - 13 Lacs
Pune
Work from Office
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Neo4j
Good-to-have skills: Database Management
Minimum 3 year(s) of experience is required
Educational Qualification: Minimum 15 years of full-time education

Roles & Responsibilities:
1. Implementation of new enhancements to the existing UI.
2. Migration of code to updated frameworks, away from unsupported frameworks planned for retirement.
3. Refactoring code for maintenance and stability.
4. Migration to GraphDB.
5. Bug fixes for defects.

Professional & Technical Skills:
1. 6+ years of IT experience.
2. Strong and proven development skills in RDF triple stores: Apache Jena TDB & Fuseki, querying (SPARQL, SPASQL), Ontotext GraphDB, knowledge graphs, dictionaries/ontologies (RDF, RDFS, OWL, SKOS, Schema.org), logical rules (SWRL, SPIN, R2RML, SHACL).
3. Excellent analytical and problem-solving skills.
4. Prior experience working in an Agile environment.
5. Good to have: experience with NoSQL databases like MongoDB, Neo4j, etc.
6. Strong communication skills.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Neo4j.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.

Qualification: Minimum 15 years of full-time education
Posted 2 months ago
2 - 3 years
4 - 5 Lacs
Bengaluru
Work from Office
We are looking for:
Experience: at least 2 to 3 years of experience in NodeJS, TypeScript, and React is required, with proven experience in building, deploying, maintaining & scaling APIs and microservices.

Job Responsibilities:
- Solid experience in NodeJS, TypeScript, React, Neo4j and Firestore (GCP).
- In-depth knowledge of software design & development practices.
- Design and develop scalable systems using advanced concepts in NodeJS, TypeScript, JavaScript, and React.
- Good understanding of deploying to and working with GKE.
- Ability to design for scale and performance, and to conduct peer code reviews.
- Architecture/platform development, API design, and data modelling at scale.
- Excellent working experience in Express, Knex, and serverless GC Functions.
- Solid experience in JavaScript frameworks (Angular / React.js), Redux, JavaScript, jQuery, CSS, HTML5, ES5, ES6 & ES7, in-memory databases (Redis / Hazelcast), and build tools (webpack).
- Good error- and exception-handling skills.
- Ability to work with Git repositories and remote code hosting services like GitHub and GitLab.
- Ability to deliver amazing results with minimal guidance and supervision.
- Passionate (especially about web development!), highly motivated, and fun to work with.

Keywords: Node.js, Fullstack, React, GCP, API, Firestore, Neo4j
Posted 2 months ago
Neo4j, a popular graph database management system, is seeing a growing demand in the job market in India. Companies are looking for professionals who are skilled in working with Neo4j to manage and analyze complex relationships in their data. If you are a job seeker interested in Neo4j roles, this article will provide you with valuable insights to help you navigate the job market in India.
The average salary range for Neo4j professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the Neo4j skill area, a typical career progression may look like:
- Junior Developer
- Developer
- Senior Developer
- Tech Lead
Apart from expertise in Neo4j, professionals in this field are often expected to have or develop skills in:
- Cypher Query Language
- Data modeling
- Database management
- Java or Python programming
As you explore Neo4j job opportunities in India, it's essential to not only possess the necessary technical skills but also be prepared to showcase your expertise during interviews. Stay updated with the latest trends in Neo4j and continuously enhance your skills to stand out in the competitive job market. Prepare thoroughly, demonstrate your knowledge confidently, and land your dream Neo4j job in India. Good luck!