Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 - 5.0 years
4 - 6 Lacs
Hyderabad
Work from Office
We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for the day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.

Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems.
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets.
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments.
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements.
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows.
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls.
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis.
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives.
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes.
- Mentor and develop a high-performing team of data operations analysts and leads.

Functional Skills:

Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies.
- Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP.
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions.
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation.
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization.
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX).
- Excellent leadership, communication, and stakeholder engagement skills.
- Well versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.

Good-to-Have Skills:
- Data Engineering Management experience in Biotech/Life Sciences/Pharma.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph.
Education and Professional Certifications:
- Doctorate degree with 3-5+ years of experience in Computer Science, IT, or related field OR
- Master's degree with 6-8+ years of experience in Computer Science, IT, or related field OR
- Bachelor's degree with 10-12+ years of experience in Computer Science, IT, or related field
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
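The SLA, throughput, and quality metrics this role tracks are simple to compute once job runs are recorded. A minimal illustrative sketch in Python; the job names, runtimes, and thresholds are all made up for the example:

```python
from datetime import timedelta

# Hypothetical job records: (job_name, runtime, sla_target, succeeded)
runs = [
    ("ingest_clinical", timedelta(minutes=42), timedelta(hours=1), True),
    ("publish_marts", timedelta(minutes=95), timedelta(hours=1), True),
    ("validate_quality", timedelta(minutes=12), timedelta(minutes=30), False),
]

def sla_report(runs):
    """Summarize SLA adherence and success rate for a batch of data jobs."""
    total = len(runs)
    within_sla = sum(1 for _, runtime, sla, _ in runs if runtime <= sla)
    succeeded = sum(1 for *_, ok in runs if ok)
    return {
        "sla_pct": round(100 * within_sla / total, 1),
        "success_pct": round(100 * succeeded / total, 1),
        "breaches": [name for name, runtime, sla, _ in runs if runtime > sla],
    }

print(sla_report(runs))
# {'sla_pct': 66.7, 'success_pct': 66.7, 'breaches': ['publish_marts']}
```

In practice the run records would come from an orchestrator's metadata store rather than a hard-coded list, but the aggregation logic is the same.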
Posted 6 days ago
2.0 - 5.0 years
4 - 7 Lacs
Hyderabad
Work from Office
We are seeking a skilled and creative RShiny Developer with hands-on experience in MarkLogic and graph databases. You will be responsible for designing and developing interactive web applications using RShiny, integrating complex datasets stored in MarkLogic, and leveraging graph capabilities for advanced analytics and knowledge representation.

Roles & Responsibilities:
- Develop interactive dashboards and web applications using RShiny.
- Connect and query data from MarkLogic, especially leveraging its graph and semantic features (e.g., RDF triples, SPARQL).
- Design and maintain backend data workflows and APIs.
- Collaborate with data scientists, analysts, and backend engineers to deliver integrated solutions.
- Optimize performance and usability of RShiny applications.

Functional Skills:

Must-Have Skills:
- Proven experience with R and RShiny in a production or research setting.
- Proficiency with MarkLogic, including use of its graph database features (triples, SPARQL queries, semantics).
- Familiarity with XQuery, XPath, or REST APIs for interfacing with MarkLogic.
- Strong understanding of data visualization principles and UI/UX best practices.
- Experience with data integration and wrangling.

Good-to-Have Skills:
- Experience with additional graph databases (e.g., Neo4j, Stardog).
- Background in knowledge graphs, linked data, or ontologies (e.g., OWL, RDF, SKOS).
- Familiarity with front-end frameworks (HTML/CSS/JavaScript) to enhance RShiny applications.
- Experience in regulated industries (e.g., pharma, finance) or with complex domain ontologies.
Professional Certifications (preferred):
- SAFe methodology
- Courses in R, RShiny, and data visualization from reputable institutions (e.g., the Johns Hopkins Data Science Specialization on Coursera)
- Other graph certifications (optional but beneficial): Neo4j Certified Professional (to demonstrate transferable graph database skills); Linked Data and Semantic Web training (via organizations like W3C or O'Reilly)

Soft Skills:
- Excellent written and verbal communication skills (English), translating technology content into business language at various levels
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong problem-solving and analytical skills
- Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across projects
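For readers new to the semantic features this role centers on: an RDF triple is a (subject, predicate, object) fact, and a SPARQL query matches variable patterns over a set of such triples. A toy Python illustration of that matching idea; the entity names are invented, and real work would query MarkLogic's SPARQL endpoint rather than an in-memory set:

```python
# Toy in-memory triple store illustrating RDF/SPARQL pattern matching.
# (Illustrative only; a real application would query MarkLogic itself.)
triples = {
    ("drugA", "treats", "diseaseX"),
    ("drugA", "interactsWith", "drugB"),
    ("drugB", "treats", "diseaseY"),
}

def match(pattern, store):
    """Match one triple pattern; None behaves like a SPARQL variable."""
    s, p, o = pattern
    return [
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "What treats what?" ~ SELECT ?s ?o WHERE { ?s :treats ?o }
print(sorted(match((None, "treats", None), triples)))
```

An RShiny front end would issue the equivalent SPARQL over HTTP and render the resulting bindings in a dashboard.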
Posted 1 week ago
6.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
What you will do

Let’s do this. Let’s change the world. In this vital role you are responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and building visualizations to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Lead and be hands-on in the technical design, development, testing, implementation, and support of data pipelines that load the data domains in the Enterprise Data Fabric and associated data services.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Translate data models (ontology, relational) into physical designs that are performant, maintainable, and easy to use.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help to improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical work.
- Collaborate with RunOps engineers to continuously increase our ability to push changes into production with as little manual overhead and as much speed as possible.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. The professional we seek will have these qualifications.

Basic Qualifications:
- Master’s degree and 4 to 6 years of Computer Science, IT, or related field experience OR
- Bachelor’s degree and 6 to 8 years of Computer Science, IT, or related field experience OR
- Diploma and 10 to 12 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Functional Skills:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning on big data processing.
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from both relational and graph data stores (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores).
- Experience with ETL tools such as Apache Spark and Prophecy, and with Python packages for data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Able to take user requirements and develop data models for data analytics use cases.
Good-to-Have Skills:
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph, and writing SPARQL queries.
- Experience working with agile development methodologies such as Scaled Agile.

Professional Certifications:
- AWS Certified Data Engineer preferred
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

Equal opportunity statement: Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.

What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com
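The extract-transform-load flow this posting describes can be sketched at its smallest with the standard-library sqlite3 module standing in for a real warehouse. Table and column names are invented for the example; production pipelines would use Spark or Databricks, as listed above:

```python
import sqlite3

# Minimal ETL sketch: extract raw rows, validate/cast in the transform
# step, and load only clean rows into the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (id INTEGER, amount TEXT)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [(1, "10.5"), (2, "bad"), (3, "7.25")])

conn.execute("CREATE TABLE clean_events (id INTEGER, amount REAL)")
for row_id, amount in conn.execute("SELECT id, amount FROM raw_events"):
    try:
        conn.execute("INSERT INTO clean_events VALUES (?, ?)",
                     (row_id, float(amount)))
    except ValueError:
        pass  # in production this row would go to a quarantine table

total = conn.execute("SELECT ROUND(SUM(amount), 2) FROM clean_events").fetchone()[0]
print(total)  # 17.75
```

The key design point, regardless of engine, is that validation happens in the transform step so downstream consumers only ever see typed, clean rows.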
Posted 2 weeks ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Advanced Application Engineer
Project Role Description: Utilize modular architectures, next-generation integration techniques and a cloud-first, mobile-first mindset to provide vision to Application Development Teams. Work with an Agile mindset to create value across projects of multiple scopes and scale.
Must have skills: MarkLogic
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Advanced Application Engineer, you will utilize modular architectures and next-generation integration techniques to provide vision to Application Development Teams. Your typical day will involve collaborating with cross-functional teams, engaging in Agile practices, and contributing to the development of innovative solutions that enhance project outcomes and drive value across various initiatives.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge sharing sessions to enhance team capabilities.
- Engage in continuous learning to stay updated with industry trends and technologies.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in MarkLogic.
- Strong understanding of modular architecture principles.
- Experience with cloud integration techniques.
- Familiarity with Agile methodologies and practices.
- Ability to work collaboratively in a team-oriented environment.

Additional Information:
- The candidate should have a minimum of 3 years of experience in MarkLogic.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
Posted 2 weeks ago
6.0 - 9.0 years
32 - 35 Lacs
Noida, Kolkata, Chennai
Work from Office
Dear Candidate,

We are hiring a Lua Developer to create lightweight scripting layers in games, embedded systems, or automation tools.

Key Responsibilities:
- Develop scripts and integrations using Lua
- Embed Lua in C/C++ applications for extensibility
- Write custom modules or bindings for game engines or IoT devices
- Optimize Lua code for memory and execution time
- Integrate with APIs, data sources, or hardware systems

Required Skills & Qualifications:
- Proficient in Lua and its integration with host languages
- Experience with Love2D, Corona SDK, or custom engines
- Familiarity with C/C++, embedded Linux, or IoT
- Bonus: game scripting or automation experience

Soft Skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager, Integra Technologies
Posted 2 weeks ago
4 - 6 years
11 - 15 Lacs
Hyderabad
Work from Office
Data Engineering Manager

What you will do

Let’s do this. Let’s change the world. In this vital role you will lead a team of data engineers to build, optimize, and maintain scalable data architectures, data pipelines, and operational frameworks that support real-time analytics, AI-driven insights, and enterprise-wide data solutions. As a strategic leader, the ideal candidate will drive best practices in data engineering, cloud technologies, and Agile development, ensuring robust governance, data quality, and efficiency. The role requires technical expertise, team leadership, and a deep understanding of cloud data solutions to optimize data-driven decision-making.

- Lead and mentor a team of data engineers, fostering a culture of innovation, collaboration, and continuous learning for solving complex problems of the R&D division.
- Oversee the development of data extraction, validation, and transformation techniques, ensuring ingested data is of high quality and compatible with downstream systems.
- Guide the team in writing and validating high-quality code for data ingestion, processing, and transformation, ensuring resiliency and fault tolerance.
- Drive the development of data tools and frameworks for running and accessing data efficiently across the organization.
- Oversee the implementation of performance monitoring protocols across data pipelines, ensuring real-time visibility, alerts, and automated recovery mechanisms.
- Coach engineers in building dashboards and aggregations to monitor pipeline health and detect inefficiencies, ensuring optimal performance and cost-effectiveness.
- Lead the implementation of self-healing solutions, reducing failure points and improving pipeline stability and efficiency across multiple product features.
- Oversee data governance strategies, ensuring compliance with security policies, regulations, and data accessibility best practices.
- Guide engineers in data modeling, metadata management, and access control, ensuring structured data handling across various business use cases.
- Collaborate with business leaders, product owners, and cross-functional teams to ensure alignment of data architecture with product requirements and business objectives.
- Prepare team members for key partner discussions by helping assess data costs, access requirements, dependencies, and availability for business scenarios.
- Drive Agile and Scaled Agile (SAFe) methodologies, handling sprint backlogs, prioritization, and iterative improvements to enhance team velocity and project delivery.
- Stay up-to-date with emerging data technologies, industry trends, and best practices, ensuring the organization uses the latest innovations in data engineering and architecture.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients. We are seeking a seasoned Engineering Manager (Data Engineering) to drive the development and implementation of our data strategy, with deep expertise in R&D in the biotech or pharma domain.

Basic Qualifications:
- Doctorate degree OR
- Master’s degree and 4 to 6 years of experience in Computer Science, IT, or related field OR
- Bachelor’s degree and 6 to 8 years of experience in Computer Science, IT, or related field OR
- Diploma and 10 to 12 years of experience in Computer Science, IT, or related field
- Experience leading a team of data engineers in the R&D domain of biotech/pharma companies.
- Experience architecting and building data and analytics solutions that extract, transform, and load data from multiple source systems.
- Data engineering experience in R&D for the biotechnology or pharma industry.
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions.
- Proficiency in Python, PySpark, and SQL.
- Experience with dimensional data modeling.
- Experience working with Apache Spark and Apache Airflow.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Experience with AWS, GCP, or Azure cloud services.
- Understanding of the end-to-end project/product life cycle.
- Well versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools.
- Strong analytical and problem-solving skills to address complex data challenges.
- Effective communication and interpersonal skills to collaborate with cross-functional teams.

Preferred Qualifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred
- Project management certifications preferred
- Data Engineering Management experience in Biotech/Pharma is a plus
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to handle multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team.
careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
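The "self-healing solutions" and automated recovery mechanisms this posting calls for commonly start with retry-with-backoff around individual pipeline steps. A minimal, illustrative Python sketch; the flaky step is a stand-in, not any real Amgen tooling:

```python
import time

def run_with_retries(step, max_attempts=3, base_delay=0.01):
    """Re-run a flaky pipeline step with exponential backoff,
    a common building block of self-healing pipelines."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except RuntimeError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure for escalation
            time.sleep(base_delay * 2 ** (attempt - 1))

# Demo step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

print(run_with_retries(flaky_step))  # loaded
```

Real orchestrators (Airflow, Databricks workflows) expose this as declarative retry policy; the value of sketching it is seeing where alerting and dead-lettering would hook in, at the final `raise`.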
Posted 1 month ago
3 - 7 years
4 - 7 Lacs
Hyderabad
Work from Office
ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do

Role Description: We are seeking a Senior Data Engineer with expertise in Graph Data technologies to join our data engineering team and contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a strong background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Own the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Strong understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning on big data processing.
- Strong understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience in writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Education and Professional Certifications:
- Master’s degree and 3 to 4+ years of Computer Science, IT, or related field experience OR
- Bachelor’s degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly; organized and detail oriented
- Strong presentation and public speaking skills

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation. Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients.
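The traversal algorithms listed among the must-have skills operate on exactly the structure described in this posting: a graph held as an adjacency list and walked breadth-first. A small illustrative Python sketch; the biotech-flavored node names are invented for the example:

```python
from collections import deque

# Tiny adjacency-list graph; edges point from source entity to related entity.
graph = {
    "gene": ["protein"],
    "protein": ["pathway", "drug"],
    "pathway": ["disease"],
    "drug": [],
    "disease": [],
}

def bfs_reachable(graph, start):
    """Breadth-first traversal: the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

print(sorted(bfs_reachable(graph, "gene")))
# ['disease', 'drug', 'gene', 'pathway', 'protein']
```

A Cypher or SPARQL property-path query (e.g. Cypher's variable-length `-[*]->` match) expresses the same reachability question declaratively; the engine performs a traversal like this one under the hood.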
Posted 2 months ago
4 - 6 years
4 - 7 Lacs
Hyderabad
Work from Office
Senior Data Engineer

What you will do

Let’s do this. Let’s change the world. In this vital role you will contribute to the development of scalable, high-performance data pipelines and advanced data models that power next-generation applications and analytics. This role combines core data engineering skills with specialized knowledge in graph data structures, graph databases, and relationship-centric data modeling, enabling the organization to leverage connected data for deep insights, pattern detection, and advanced analytics use cases. The ideal candidate will have a solid background in data architecture, big data processing, and graph technologies, and will work closely with data scientists, analysts, architects, and business stakeholders to design and deliver graph-based data engineering solutions.

Roles & Responsibilities:
- Design, build, and maintain robust data pipelines using Databricks (Spark, Delta Lake, PySpark) for complex graph data processing workflows.
- Lead the implementation of graph-based data models, capturing complex relationships and hierarchies across domains.
- Build and optimize graph databases such as Stardog, Neo4j, MarkLogic, or similar to support query performance, scalability, and reliability.
- Implement graph query logic using SPARQL, Cypher, Gremlin, or GSQL, depending on platform requirements.
- Collaborate with data architects to integrate graph data with existing data lakes, warehouses, and lakehouse architectures.
- Work closely with data scientists and analysts to enable graph analytics, link analysis, recommendation systems, and fraud detection use cases.
- Develop metadata-driven pipelines and lineage tracking for graph and relational data processing.
- Ensure data quality, governance, and security standards are met across all graph data initiatives.
- Mentor junior engineers and contribute to data engineering best practices, especially around graph-centric patterns and technologies.
- Stay up to date with the latest developments in graph technology, graph ML, and network analytics.

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master’s degree and 4 to 6 years of Computer Science, IT, or related field experience OR
- Bachelor’s degree and 6 to 8 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience in Databricks, including PySpark, Delta Lake, and notebook-based development.
- Hands-on experience with graph database platforms such as Stardog, Neo4j, or MarkLogic.
- Good understanding of graph theory, graph modeling, and traversal algorithms.
- Proficiency in workflow orchestration and performance tuning on big data processing.
- Good understanding of AWS services.
- Ability to quickly learn, adapt, and apply new technologies, with strong problem-solving and analytical skills.
- Excellent collaboration and communication skills, with experience working with Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices.
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience in writing APIs to make data available to consumers.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 months ago
7 - 10 years
10 - 18 Lacs
Pune
Work from Office
Role & responsibilities
Around 5-8 years of experience.
Proficient in MarkLogic, XQuery, Optic queries, and semantic technologies.
Hands-on experience with the MarkLogic Data Hub framework.
Hands-on experience with Apache NiFi.
Comfortable using cloud platforms, specifically Azure.
Strong experience in designing and developing REST APIs.
Knowledge of tools such as Git Bash, Gradle, and at least one IDE.
Experience working as part of a DevOps team.
Good communication skills.
Posted 3 months ago
7 - 10 years
10 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities
Around 5-8 years of experience.
Proficient in MarkLogic, XQuery, Optic queries, and semantic technologies.
Hands-on experience with the MarkLogic Data Hub framework.
Hands-on experience with Apache NiFi.
Comfortable using cloud platforms, specifically Azure.
Strong experience in designing and developing REST APIs.
Knowledge of tools such as Git Bash, Gradle, and at least one IDE.
Experience working as part of a DevOps team.
Good communication skills.
Posted 3 months ago
3 - 5 years
8 - 18 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
MarkLogic Developer
Mandatory Skills: MarkLogic (minimum 3 years of experience)
Location: Pan India (hybrid mode of working)
Only immediate joiners should apply.

Role & responsibilities
3-5 years of experience.
Proficient in MarkLogic, XQuery, Optic queries, and semantic technologies.
Hands-on experience with the MarkLogic Data Hub framework.
Hands-on experience with Apache NiFi.
Comfortable using cloud platforms, specifically Azure.
Strong experience in designing and developing REST APIs.
Knowledge of tools such as Git Bash, Gradle, and at least one IDE.
Experience working as part of a DevOps team.
Good communication skills.

Perks and benefits
Competitive salary and performance-based bonuses.
PF, medical insurance, and statutory benefits.
Professional development opportunities.
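The listing above pairs MarkLogic experience with REST API work. As an illustrative sketch only (the host, port, and document URI are assumptions), a client-side GET request against MarkLogic's documented /v1/documents endpoint can be assembled with the Python standard library; the request is built but not sent, since no server is assumed:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical host and port; a MarkLogic REST instance listens on 8000 by default.
BASE = "http://localhost:8000"

def build_doc_request(uri, fmt="json"):
    """Build (but do not send) a GET request for a single document
    via the MarkLogic REST API's /v1/documents endpoint."""
    query = urlencode({"uri": uri, "format": fmt})
    return Request(f"{BASE}/v1/documents?{query}", method="GET")

# Hypothetical document URI for illustration
req = build_doc_request("/people/alice.json")
print(req.full_url)
```

In practice the request would also carry digest or certificate-based authentication and would typically be sent with a client such as `urllib.request.urlopen` or `requests`.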
Posted 3 months ago
5 - 10 years
8 - 15 Lacs
Bengaluru
Work from Office
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-have skills: IBM DB2 Database Administration
Good-to-have skills: MySQL, MarkLogic
Minimum 5 year(s) of experience is required
Educational Qualification: B.Tech

Summary: As an Application Technical Support Practitioner, you will be responsible for providing ongoing support to clients and ensuring the smooth functioning of systems or applications. Your typical day will involve interfacing with clients, accurately defining and resolving issues, and utilizing your deep product knowledge to design effective solutions.

Roles & Responsibilities:
Act as the primary point of contact for clients, providing ongoing support and ensuring the smooth functioning of systems or applications.
Utilize exceptional communication skills to accurately define client issues and interpret and design effective solutions based on deep product knowledge.
Collaborate with cross-functional teams to ensure timely resolution of client issues and to continuously improve system or application performance.
Stay updated with the latest advancements in IBM DB2 Database Administration, MarkLogic, MySQL, and other relevant technologies to provide the best possible support to clients.

Professional & Technical Skills:
Must-Have Skills: Expertise in IBM DB2 Database Administration.
Good-To-Have Skills: Experience with MarkLogic and MySQL.
Strong understanding of system or application architecture and design principles.
Excellent problem-solving and analytical skills.
Exceptional communication and interpersonal skills.
Additional Information:
The candidate should have a minimum of 5 years of experience in IBM DB2 Database Administration.
The JOB FAMILY and PROJECT ROLE fields are descriptive and do not indicate the candidate's required experience.
This position is based at our Bengaluru office.
Qualification: B.Tech
Posted 3 months ago