
172 ELT Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4 - 6 years

30 - 34 Lacs

Bengaluru

Work from Office

Overview
Annalect is seeking a hands-on Data QA Manager to lead and elevate data quality assurance practices across our growing suite of software and data products. This is a technical leadership role embedded within our Technology teams, focused on establishing best-in-class data quality processes that enable trusted, scalable, and high-performance data solutions. As a Data QA Manager, you will drive the design, implementation, and continuous improvement of end-to-end data quality frameworks, with a strong focus on automation, validation, and governance. You will work closely with data engineering, product, and analytics teams to ensure data integrity, accuracy, and compliance across complex data pipelines, platforms, and architectures, including Data Mesh and modern cloud-based ecosystems. This role requires deep technical expertise in SQL, Python, data testing frameworks like Great Expectations, data orchestration tools (Airbyte, dbt, Trino, Starburst), and cloud platforms (AWS, Azure, GCP). You will lead a team of Data QA Engineers while remaining actively involved in solution design, tool selection, and hands-on QA execution.

Key Responsibilities
Develop and implement a comprehensive data quality strategy aligned with organizational goals and product development initiatives. Define and enforce data quality standards, frameworks, and best practices, including data validation, profiling, cleansing, and monitoring processes. Establish data quality checks and automated controls to ensure the accuracy, completeness, consistency, and timeliness of data across systems. Collaborate with Data Engineering, Product, and other teams to design and implement scalable data quality solutions integrated within data pipelines and platforms. Define and track key performance indicators (KPIs) to measure data quality and the effectiveness of QA processes, enabling actionable insights for continuous improvement. Generate and communicate regular reports on data quality metrics, issues, and trends to stakeholders, highlighting opportunities for improvement and mitigation plans. Maintain comprehensive documentation of data quality processes, procedures, standards, issues, resolutions, and improvements to support organizational knowledge-sharing. Provide training and guidance to cross-functional teams on data quality best practices, fostering a strong data quality mindset across the organization. Lead, mentor, and develop a team of Data QA Analysts/Engineers, promoting a high-performance, collaborative, and innovative culture. Provide thought leadership and subject matter expertise on data quality, influencing technical and business stakeholders toward quality-focused solutions. Continuously evaluate and adopt emerging tools, technologies, and methodologies to advance data quality assurance capabilities and automation. Stay current with industry trends, innovations, and evolving best practices in data quality, data engineering, and analytics to ensure cutting-edge solutions.

Required Skills
11+ years of hands-on experience in Data Quality Assurance, Data Test Automation, Data Comparison, and Validation across large-scale datasets and platforms. Strong proficiency in SQL for complex data querying, data validation, and data quality investigations across relational and distributed databases. Deep knowledge of data structures, relational and non-relational databases, stored procedures, packages, functions, and advanced data manipulation techniques. Practical experience with leading data quality tools such as Great Expectations, dbt tests, and data profiling and monitoring solutions. Experience with data mesh and distributed data architecture principles for enabling decentralized data quality frameworks. Hands-on experience with modern query engines and data platforms, including Trino/Presto, Starburst, and Snowflake. Experience working with data integration and ETL/ELT tools such as Airbyte, AWS Glue, and dbt for managing and validating data pipelines. Strong working knowledge of Python and related data libraries (e.g., Pandas, NumPy, SQLAlchemy) for building data quality tests and automation scripts.
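For illustration only (not part of the posting): a minimal sketch of the kind of automated data quality check this role describes, using Great Expectations' older pandas-style API. The dataset and column names are hypothetical.

```python
import pandas as pd
import great_expectations as ge

# Hypothetical orders extract to validate before it enters the pipeline.
df = ge.from_pandas(pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [120.0, 75.5, 310.25],
}))

# Declarative expectations: completeness and range checks.
df.expect_column_values_to_be_not_null("order_id")
df.expect_column_values_to_be_between("amount", min_value=0, max_value=100000)

results = df.validate()
print(results.success)  # False would fail the pipeline run
```

In practice, checks like these would run as a gate inside the orchestrated pipeline rather than as a standalone script.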

Posted 1 month ago

Apply

7 - 12 years

19 - 25 Lacs

Hyderabad

Hybrid

Data Engineer Consultant
Position Overview: We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in programming, data management, and cloud infrastructure, with a focus on designing and implementing efficient data solutions. This role requires a minimum of 5+ years of experience and a deep understanding of Azure services, infrastructure, and ETL/ELT solutions.

Key Responsibilities: Azure Infrastructure Management: Own and maintain all aspects of Azure infrastructure, recommending modifications to enhance reliability, availability, and scalability. Security Management: Manage security aspects of Azure infrastructure, including network, firewall, private endpoints, encryption, PIM, and permissions management using Azure RBAC and Databricks roles. Technical Troubleshooting: Diagnose and troubleshoot technical issues in a timely manner, identifying root causes and providing effective solutions. Infrastructure as Code: Create and maintain Azure Infrastructure as Code using Terraform and GitHub Actions. CI/CD Pipelines: Configure and maintain CI/CD pipelines using GitHub Actions for various Azure services such as ADF, Databricks, Storage, and Key Vault. Programming Expertise: Utilize your expertise in programming languages such as Python to develop and maintain data engineering solutions. Generative AI and Language Models: Knowledge of Large Language Models (LLMs) and Generative AI is a plus, enabling the integration of advanced AI capabilities into data workflows. Real-Time Data Streaming: Use Kafka for real-time data streaming and integration, ensuring efficient data flow and processing. Data Management: Proficiency in Snowflake for data wrangling and management, optimizing data structures for analysis. DBT Utilization: Build and maintain data marts and views using DBT, ensuring data is structured for optimal analysis. ETL/ELT Solutions: Design ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, leveraging methodologies to acquire data from various structured or semi-structured source systems. Communication: Strong communication skills to explain technical issues and solutions clearly to the Engineering Lead and key stakeholders (as required).

Qualifications: Minimum of 5+ years of experience in designing ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, or Snowflake. Expertise in programming languages such as Python. Experience with Kafka for real-time data streaming and integration. Proficiency in Snowflake for data wrangling and management. Proven ability to use DBT to build and maintain data marts and views. Experience in creating and maintaining Azure Infrastructure as Code using Terraform and GitHub Actions. Ability to configure, set up, and maintain GitHub for various code repositories. Experience in creating and configuring CI/CD pipelines using GitHub Actions for various Azure services. In-depth understanding of managing security aspects of Azure infrastructure. Strong problem-solving skills and ability to diagnose and troubleshoot technical issues. Excellent communication skills for explaining technical issues and solutions.

Posted 1 month ago

Apply

5 - 7 years

8 - 10 Lacs

Noida

Work from Office

What you need BS in an Engineering or Science discipline, or equivalent experience 5+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 3 years experience in a data and BI focused role Experience in data integration (ETL/ELT) development using multiple languages (e.g., Python, PySpark, SparkSQL) and data transformation (e.g., dbt) Experience building data pipelines supporting a variety of integration and information delivery methods as well as data modelling techniques and analytics Knowledge and experience with various relational databases and demonstrable proficiency in SQL and data analysis requiring complex queries, and optimization Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena, etc.) and Snowflake CDW, as well as BI tools (e.g., PowerBI) Willingness to experiment and learn new approaches and technology applications Knowledge of software engineering and agile development best practices Excellent written and verbal communication skills
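As a rough illustration of the PySpark/SparkSQL work listed above (not part of the posting; the bucket, table, and column names are invented), a simple ELT-style batch transform:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Hypothetical raw extract; in practice this might come from Glue, S3, or RDS.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Filter, derive a date column, and aggregate to a daily revenue mart.
daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("revenue"),
        F.count(F.lit(1)).alias("orders"),
    )
)

daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_revenue/")
```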

Posted 1 month ago

Apply

3 - 5 years

6 - 10 Lacs

Gurugram

Work from Office

Position Summary: A Data Engineer designs and maintains scalable data pipelines and storage systems, with a focus on integrating and processing knowledge graph data for semantic insights. They enable efficient data flow, ensure data quality, and support analytics and machine learning by leveraging advanced graph-based technologies.

How You'll Make an Impact (responsibilities of role)
Build and optimize ETL/ELT pipelines for knowledge graphs and other data sources. Design and manage graph databases (e.g., Neo4j, AWS Neptune, ArangoDB). Develop semantic data models using RDF, OWL, and SPARQL. Integrate structured, semi-structured, and unstructured data into knowledge graphs. Ensure data quality, security, and compliance with governance standards. Collaborate with data scientists and architects to support graph-based analytics.

What You Bring (required qualifications and skills)
Bachelor's/Master's in Computer Science, Data Science, or related fields. Experience: 3+ years in data engineering, with knowledge graph expertise. Proficiency in Python, SQL, and graph query languages (SPARQL, Cypher). Experience with graph databases and frameworks (Neo4j, GraphQL, RDF). Knowledge of cloud platforms (AWS, Azure). Strong problem-solving and data modeling skills. Excellent communication skills, with the ability to convey complex concepts to non-technical stakeholders. The ability to work collaboratively in a dynamic team environment across the globe.
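A minimal sketch of the SPARQL querying this role involves, using Python's rdflib (illustrative only; the graph file and vocabulary are hypothetical):

```python
from rdflib import Graph

g = Graph()
# Hypothetical knowledge graph serialized as Turtle.
g.parse("products.ttl", format="turtle")

# Find products and their categories via hypothetical predicates.
query = """
PREFIX ex: <http://example.org/schema#>
SELECT ?product ?category
WHERE {
    ?product a ex:Product ;
             ex:inCategory ?category .
}
"""
for row in g.query(query):
    print(row.product, row.category)
```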

Posted 1 month ago

Apply

4 - 9 years

14 - 18 Lacs

Noida

Work from Office

Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.

What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
* Experience of working in larger initiatives building and rationalizing large scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus
* Willingness to experiment and learn new approaches and technology applications
* Knowledge and experience with various relational databases and demonstrable proficiency in SQL and supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills

The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.

Posted 1 month ago

Apply

4 - 7 years

7 - 11 Lacs

Hyderabad

Work from Office

Responsibilities: Design, develop, and maintain scalable data pipelines using Python and AWS Redshift Optimize and tune Redshift queries, schemas, and performance for large-scale datasets Implement ETL/ELT processes to ensure accurate and timely data availability Collaborate with data analysts, engineers, and product teams to understand data requirements Ensure data quality, consistency, and integrity across systems Automate data workflows and improve pipeline efficiency using scripting and orchestration tools Monitor data pipeline performance and troubleshoot issues proactively Maintain documentation for data models, pipelines, and system configurations Ensure compliance with data governance and security standards
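A hedged sketch of the Python-plus-Redshift pattern described above, loading staged S3 data with a COPY command via psycopg2 (the cluster endpoint, credentials, table, and IAM role are all placeholders):

```python
import psycopg2

# Placeholder connection details for a Redshift cluster.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="...",
)

copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/exports/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)                    # bulk load into the staging schema
    cur.execute("ANALYZE staging.orders;")   # refresh planner statistics
conn.close()
```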

Posted 1 month ago

Apply

12 - 15 years

0 - 0 Lacs

Thiruvananthapuram

Work from Office

About the Role
We're looking for a sharp and experienced Consultant/Architect to guide our clients in strategically adopting Microsoft Fabric, with a specific focus on the design and implementation of the gold (semantic, reporting-ready) and silver (cleansed, integrated) data layers.

Key Responsibilities
Design end-to-end data architectures in Microsoft Fabric with a focus on the gold and silver layers. Collaborate with business units to understand data landscapes and translate requirements into actionable adoption strategies. Lead implementations involving Fabric components such as Data Factory, Synapse Data Warehouse, Data Lakehouse, and Power BI datasets. Develop and optimize data models for the silver and gold layers considering performance, scalability, and usability. Define and enforce data governance policies and standards within Microsoft Fabric. Identify and resolve performance issues in Fabric, especially in query performance and data processing. Mentor client and internal teams on best practices and architecture patterns for gold/silver layer development. Troubleshoot complex integration and transformation challenges in the Fabric ecosystem.

Required Qualifications
Deep understanding of modern data warehousing principles. Extensive experience designing and implementing data solutions on Microsoft Azure. Hands-on expertise with Microsoft Fabric components for building pipelines and analytical models. Proven ability to design dimensional and relational data models optimized for reporting (gold layer). Strong grasp of ETL/ELT patterns for building integrated data layers (silver layer). Experience with data quality checks and governance frameworks. Strong SQL skills for data manipulation and querying. Proficient in Power BI and its integration with Fabric datasets. Excellent communication and client engagement skills, with the ability to explain technical concepts to non-technical stakeholders.

Preferred Qualifications
Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent experience). Microsoft certifications related to Azure data services or Fabric. Experience with real-time data processing and streaming in Fabric. Familiarity with data science workflows and their integration into Fabric-based solutions. Experience with Azure AI Foundry is a plus.

Required Skills
Fabric, Microsoft Technologies, Research Analysis

Posted 1 month ago

Apply

8 - 13 years

25 - 30 Lacs

Bengaluru

Work from Office

Education: A Bachelor's degree in Computer Science, Engineering (B.Tech, BE), or a related field such as MCA (Master of Computer Applications) is required for this role. Experience: 8+ years in data engineering with a focus on building scalable and reliable data infrastructure.

Skills: Language: Proficiency in Java, Python, or Scala. Prior experience in Oil & Gas, Titles & Leases, or Financial Services is a must-have. Databases: Expertise in relational and NoSQL databases like PostgreSQL, MongoDB, Redis, and Elasticsearch. Data Pipelines: Strong experience in designing and implementing ETL/ELT pipelines for large datasets. Tools: Hands-on experience with Databricks, Spark, and cloud platforms. Data Lakehouse: Expertise in data modeling, designing Data Lakehouses, and building data pipelines. Modern Data Stack: Familiarity with the modern data stack and data governance practices. Data Orchestration: Proficient in data orchestration and workflow tools. Data Modeling: Proficient in modeling and building data architectures for high-throughput environments. Stream Processing: Extensive experience with stream processing technologies such as Apache Kafka. Distributed Systems: Strong understanding of distributed systems, scalability, and availability. DevOps: Familiarity with DevOps practices, continuous integration, and continuous deployment (CI/CD). Problem-Solving: Strong problem-solving skills with a focus on scalable data infrastructure.

Key Responsibilities: This is a role with high expectations of hands-on design and development. Design and development of systems for ingestion, persistence, consumption, ETL/ELT, and versioning for different data types (e.g. relational, document, geospatial, graph, timeseries, etc.) in transactional and analytical patterns. Drive the development of applications related to data extraction, especially from formats like TIFF, PDF, and others, including OCR and data classification/categorization. Analyze and improve the efficiency, scalability, and reliability of our data infrastructure. Assist in the design and implementation of robust ETL/ELT pipelines for processing large volumes of data. Collaborate with cross-functional scrum teams to respond quickly and effectively to business needs. Work closely with data scientists and analysts to define data requirements and develop comprehensive data solutions. Implement data quality checks and monitoring to ensure data integrity and reliability across all systems. Develop and maintain data models, schemas, and documentation to support data-driven decision-making. Manage and scale data infrastructure on cloud platforms, leveraging cloud-native tools and services.

Benefits: Salary: Competitive and aligned with local standards. Performance Bonus: According to company policy. Benefits: Includes medical insurance and group term life insurance. Continuous learning and development. 10 recognized public holidays. Parental leave.

Posted 1 month ago

Apply

8 - 13 years

6 - 11 Lacs

Gurugram

Work from Office

AHEAD is looking for a Senior Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Senior Data Engineer will be responsible for strategic planning and hands-on engineering of Big Data and cloud environments that support our clients' advanced analytics, data science, and other data platform initiatives. This consultant will design, build, and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. They will be expected to be hands-on technically, but also present to leadership and lead projects. The Senior Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern Data Engineering tools/architectures as well as design and integration of existing transactional processing systems. As a Senior Data Engineer, you will design and implement data pipelines to enable analytics and machine learning on rich datasets.

Roles and Responsibilities
A Data Engineer should be able to design, build, operationalize, secure, and monitor data processing systems. Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms that leverage the cloud-native toolset. Implement custom applications using tools such as Kinesis, Lambda, and other cloud-native tools as required to address streaming use cases. Engineer and support data structures including, but not limited to, SQL and NoSQL databases. Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop). Engineer APIs for returning data from these structures to the Enterprise Applications. Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions. Respond to customer/team inquiries and assist in troubleshooting and resolving challenges. Work with other scrum team members to estimate and deliver work inside of a sprint. Research data questions, identify root causes, and interact closely with business users and technical resources.

Qualifications
8+ years of professional technical experience. 4+ years of hands-on Data Architecture and Data Modelling. 4+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake. 4+ years of experience with programming languages such as Python. 2+ years of experience working in cloud environments (AWS and/or Azure). Strong client-facing communication and facilitation skills.

Key Skills
Python, Cloud, Linux, Windows, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Snowflake, SQL/RDBMS, OLAP, Data Engineering
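As a sketch of the cloud-native streaming ingestion this posting mentions (the stream name and event payload are hypothetical), a minimal boto3 producer writing to Kinesis:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

event = {"user_id": "u-42", "action": "page_view", "ts": "2024-01-01T00:00:00Z"}

# PartitionKey controls shard routing; keying by user preserves per-user ordering.
kinesis.put_record(
    StreamName="example-clickstream",        # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```

Downstream, a Lambda or consumer application would read these records from the stream and land them in the lake or warehouse.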

Posted 1 month ago

Apply

9 - 12 years

30 - 35 Lacs

Hyderabad

Work from Office

What you will do
Let's do this. Let's change the world. In this vital role, we are seeking a strategic and hands-on Specialist Software Engineer / AI Engineer - Search to lead the design, development, and deployment of AI-powered search and knowledge discovery solutions across our pharmaceutical enterprise. In this role, you'll manage a team of engineers and work closely with data scientists, oncologists, and domain experts to build intelligent systems that help users across R&D, medical, and commercial functions find relevant, actionable information quickly and accurately.

Architect and lead the development of scalable, intelligent search systems leveraging NLP, embeddings, LLMs, and vector search. Own the end-to-end lifecycle of search solutions, from ingestion and indexing to ranking, relevancy tuning, and UI integration. Build systems that surface scientific literature, clinical trial data, regulatory content, and real-world evidence using semantic and contextual search. Integrate AI models that improve search precision, query understanding, and result summarization (e.g., generative answers via LLMs). Partner with platform teams to deploy search solutions on scalable infrastructure (e.g., Kubernetes, cloud-native services, Databricks, Snowflake). Experience in Generative AI on search engines. Experience in integrating Generative AI capabilities and Vision Models to enrich content quality and user engagement. Building and owning the next generation of content knowledge platforms and other algorithms/systems that create high quality and unique experiences. Designing and implementing advanced AI models for entity matching and data deduplication. Experience with Generative AI tasks such as content summarization, deduping, and metadata quality. Researching and developing advanced AI algorithms, including Vision Models for visual content analysis. Implementing KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing Generative AI. Developing and maintaining Deep Learning models for data quality checks, visual similarity scoring, and content tagging. Continually researching current and emerging technologies and proposing changes where needed. Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements. Manage and mentor a cross-functional engineering team focused on AI, ML, and search infrastructure. Foster a collaborative, high-performance engineering culture with a focus on innovation and delivery. Work with domain experts, data stewards, oncologists, and product managers to align search capabilities with business and scientific needs.

Basic Qualifications:
Degree in computer science & engineering preferred, with 9-12 years of software development experience. Proficient in Spark, Kafka, Snowflake, Delta Lake, Hadoop, Databricks, MongoDB, S3 buckets, ELT & ETL, API integrations, and Java. Proven experience building search systems with technologies like Elasticsearch, Solr, OpenSearch, or vector DBs (e.g., Pinecone, FAISS). Hands-on experience with various AI models, GCP search engines, and GCP cloud services. Strong understanding of NLP, embeddings, transformers, and LLM-based search applications. Proficient in AI/ML programming, Python, GraphQL, Java crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, DynamoDB. Strong problem solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Experience deploying ML services and search infrastructure in cloud environments (AWS, Azure, or GCP).

Preferred Qualifications:
Experience in AI/ML, Java, REST APIs, Python, React, GraphQL, NLMS, full-stack applications, Solr search. Experienced with Python's FastAPI. Experience with design patterns, data structures, data modelling, and data algorithms. Knowledge of ontologies and taxonomies such as MeSH, SNOMED CT, UMLS, or MedDRA. Familiarity with MLOps, CI/CD for ML, and monitoring of AI models in production. Experienced with the AWS/Azure platforms, building and deploying code. Experience in PostgreSQL/MongoDB SQL databases, vector databases for large language models, Databricks or RDS, S3 buckets. Experience in Google Cloud Search and Google Cloud Storage. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis. Experience with generative AI or retrieval-augmented generation (RAG) frameworks in a pharma/biotech setting. Experience in Agile software development methodologies. Experience in end-to-end testing as part of test-driven development.

Good to Have Skills
Willingness to work on full-stack applications. Experience working with biomedical or scientific data (e.g., PubMed, clinical trial registries, internal regulatory databases).

Soft Skills:
Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, remote teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
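To illustrate the vector-search building block this posting names, a minimal FAISS sketch (the embeddings here are random stand-ins for real model output):

```python
import numpy as np
import faiss

dim = 384  # typical sentence-embedding size; a stand-in here
docs = np.random.rand(1000, dim).astype("float32")   # pretend document embeddings

index = faiss.IndexFlatL2(dim)   # exact L2 nearest-neighbour index
index.add(docs)

query = np.random.rand(1, dim).astype("float32")     # pretend query embedding
distances, ids = index.search(query, 5)
print(ids[0])   # indices of the 5 closest documents
```

In a real RAG or semantic-search system, the embeddings would come from an embedding model and the retrieved ids would map back to documents fed into ranking or an LLM.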

Posted 1 month ago

Apply

5 - 9 years

20 - 25 Lacs

Pune, Gurugram, Bengaluru

Hybrid

We're looking for a motivated and detail-oriented Senior Snowflake Developer with strong SQL querying skills and a willingness to learn and grow with our team. As a Senior Snowflake Developer, you will play a key role in developing and maintaining our Snowflake data platform, working closely with our data engineering and analytics teams. Responsibilities: Write ingestion pipelines that are optimized and performant Manage a team of Junior Software Developers Write efficient and scalable SQL queries to support data analytics and reporting Collaborate with data engineers, architects and analysts to design and implement data pipelines and workflows Troubleshoot and resolve data-related issues and errors Conduct code reviews and contribute to the improvement of our Snowflake development standards Stay up-to-date with the latest Snowflake features and best practices Requirements: 5+ years of experience with Snowflake Strong SQL querying skills, including data modeling, data warehousing, and ETL/ELT design Advanced understanding of data engineering principles and practices Familiarity with Informatica Intelligent Cloud Services (IICS) or similar data integration tools is a plus Excellent problem-solving skills, attention to detail, and analytical mindset Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams Nice to Have: Experience using Snowflake Streamlit, Cortex Knowledge of data governance, data quality, and data security best practices Familiarity with Agile development methodologies and version control systems like Git Certification in Snowflake or a related data platform is a plus

Posted 1 month ago

Apply

2 - 3 years

5 - 15 Lacs

Bengaluru

Work from Office

Role & responsibilities Design, develop, and maintain scalable ETL/ELT data pipelines Work with structured and unstructured data from various sources (APIs, databases, cloud storage, etc.) Optimize data workflows and ensure data quality, consistency, and reliability Collaborate with cross-functional teams to understand data requirements and deliver solutions Maintain and improve our data infrastructure and architecture Monitor pipeline performance and troubleshoot issues in real-time Preferred candidate profile 2-3 years of experience in data engineering or a similar role Proficiency in SQL and Python (or Scala/Java for data processing) Experience with ETL tools (e.g., Airflow, dbt, Luigi) Familiarity with cloud platforms like AWS, GCP, or Azure Hands-on experience with data warehouses (e.g., Redshift, BigQuery, Snowflake) Knowledge of distributed data processing frameworks like Spark or Hadoop Experience with version control systems (e.g., Git) Exposure to data modeling and schema design Experience working with CI/CD pipelines for data workflows Understanding of data privacy and security practices
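A minimal Airflow sketch of the pipeline orchestration this role describes (the DAG id and task logic are placeholders, not a prescribed implementation):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source API")   # placeholder extract step

def load():
    print("load into warehouse")    # placeholder load step

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task   # run extract before load
```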

Posted 1 month ago

Apply

13 - 20 years

45 - 50 Lacs

Pune

Work from Office

About The Role
Job Title: Solution Architect, VP
Location: Pune, India

Role Description
As a solution architect supporting the individual business aligned, strategic or regulatory book of work, you will be working closely with the projects' business stakeholders, business analysts, data solution architects, engineers, the lead solution architect, the principal architect and the Chief Architect to help projects break business requirements into implementable building blocks and design the solution's target architecture. It is the solution architect's responsibility to align the design against the general (enterprise) architecture principles, apply agreed best practices and patterns and help the engineering teams deliver against the architecture in an event driven, service oriented environment.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy: Best in class leave policy. Gender neutral parental leaves. 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry relevant certifications and education. Employee Assistance Program for you and your family members. Comprehensive hospitalization insurance for you and your dependents. Accident and term life insurance. Complimentary health screening for 35 yrs. and above.

Your key responsibilities
In projects, work with SMEs and stakeholders deriving the individual components of the solution. Design the target architecture for a new solution or when adding new capabilities to an existing solution. Assure proper documentation of the High-Level Design and Low-Level Design of the delivered solution. Quality assure the delivery against the agreed and approved architecture, i.e. provide delivery guidance and governance. Prepare the High-Level Design for review and approval by design authorities for projects to proceed into implementation. Support creation of the Low-Level Design as it is being delivered to support final go live.

Your skills and experience
Very proficient at designing / architecting solutions in an event driven environment leveraging service oriented principles. Proficient at Java and the delivery of Spring based services. Proficient at building systems in a decoupled, event driven environment leveraging messaging / streaming, i.e. Kafka. Very good experience designing and implementing data streaming and data ingest pipelines for heavy throughput while providing different guarantees (at least once, exactly once). Very good understanding of non-streaming ETL and ELT approaches for data ingest. Solid understanding of containerized, distributed systems and building auto-scalable, stateless services in a cluster (concepts of quorum, consensus). Solid understanding of standard RDBMS systems and proficiency at data engineering level in T-SQL and/or PL/SQL and/or PL/pgSQL, preferably all. Good understanding of how RDBMSs generally work; specific tuning experience on SQL Server, Oracle or PostgreSQL is also welcome. Understanding of modeling / implementing different data modelling approaches as well as understanding of respective pros and cons (e.g. Normalized, Denormalized, Star, Snowflake, DataVault 2.0, ...). A strong working experience on GCP architecture is a benefit (Apigee, BigQuery, Pub/Sub, Cloud SQL, DataFlow, ...); an appropriate GCP architecture-level certification even more so. Experience leveraging other languages is more than welcome (C#, Python).

How we'll support you
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
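For the at-least-once/exactly-once discussion above, a hedged sketch of an idempotent Kafka producer. The role's stack is Java/Spring, but the same idea is shown here with the Python confluent-kafka client for brevity; the broker address and topic are placeholders:

```python
from confluent_kafka import Producer

# enable.idempotence prevents duplicate writes on retry, giving
# exactly-once produce semantics per partition (with acks=all).
producer = Producer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "enable.idempotence": True,
    "acks": "all",
})

def on_delivery(err, msg):
    # A non-None err means the at-least-once guarantee kicked in upstream
    # and the send must be retried or alerted on.
    if err is not None:
        print(f"delivery failed: {err}")

producer.produce("trade-events", key="trade-1", value=b"...",
                 on_delivery=on_delivery)
producer.flush()
```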

Posted 1 month ago

Apply

3 - 5 years

11 - 16 Lacs

Bengaluru

Work from Office

Job Description
Analyze, design, develop, fix and debug software programs for commercial or end user applications. Write code, complete programming and perform testing and debugging of applications. Career Level - IC2

Responsibilities
Minimum 4-5 years hands-on, end to end DWH implementation experience using ODI. Should have experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc. Expertise in the Oracle ODI toolset and Oracle PL/SQL; knowledge of the ODI Master and Work repositories. Knowledge of data modelling and ETL design. Setting up topology, building objects in Designer, monitoring Operator, different types of KMs, Agents, etc. Packaging components, database operations like Aggregate, Pivot, Union, etc. using ODI mappings, error handling, automation using ODI, Load Plans, migration of objects. Design and develop complex mappings, Process Flows and ETL scripts. Must be well versed and hands-on in using and customizing Knowledge Modules (KMs). Experience of performance tuning of mappings. Ability to design ETL unit test cases and debug ETL mappings. Expertise in developing Load Plans and scheduling jobs. Ability to design data quality and reconciliation frameworks using ODI. Integrate ODI with multiple sources/targets. Experience of error recycling/management using ODI and PL/SQL. Expertise in database development (SQL/PL/SQL) for PL/SQL based applications. Experience of creating PL/SQL packages, procedures, functions, triggers, views, materialized views and exception handling for retrieving, manipulating, checking and migrating complex datasets in Oracle. Experience in data migration using SQL*Loader and import/export. Experience in SQL tuning and optimization using explain plans and SQL trace files. Strong knowledge of ELT/ETL concepts, design and coding. Partitioning and indexing strategies for optimal performance. Must have good verbal and written communication in English, and good interpersonal, analytical and problem-solving abilities. Should have experience of interacting with customers in understanding business requirement documents and translating them into ETL specifications and high- and low-level design documents. Experience in understanding complex source system data structures, preferably in the Financial Services industry. Ability to work with minimal guidance or supervision in a time critical environment. Work with Oracle's world class technology to develop, implement, and support Oracle's global infrastructure. As a member of the IT organization, assist with the design, development, modification, debugging, and evaluation of programs for use in internal systems within a specific function area. Duties and tasks are standard with some variation. Completes own role largely independently within defined policies and procedures. BS or equivalent experience in programming on enterprise or department servers or systems.

Life at Oracle and Equal Opportunity
An Oracle career can span industries, roles, countries and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as medical, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

Disclaimer: Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law. *Which includes being a United States Affirmative Action Employer. https://www.oracle.com/corporate/careers/diversity-inclusion/

Posted 1 month ago

Apply

5 - 10 years

13 - 18 Lacs

Mumbai

Work from Office

About The Role
Role: Product Manager. Duration: Full Time. Location: Edison, NJ / NYC, NY. Mode: Hybrid (3 days WFO & 2 days WFH). NEED Private Equity / Fund Administration / Hedge Fund experience. A New York City based Private Equity fund administration firm is looking for a product manager to assist in the implementation of client-facing technology products. This is a high visibility role, with access to senior management and founding members.

Primary Responsibilities Will Include: Work closely with SMEs and management to understand product scope. Document current & future state (e.g., functionality, data flow, reports, UX, etc.). Solicit and document requirements from business users. Design and document future enhancements to the client's Private Exchange platform. Own the platform transaction flow process, design, and development roadmap. Assist the client onboarding & implementation teams with technology adoption. Define automated and manual testing processes to ensure quality of data. Liaise with technologists to further understand and document processes. Define Success (aka Acceptance) Criteria to ensure all requirements are fulfilled. Build and execute comprehensive software test plans. User guide documentation. Manage technology deployments and conversions. Support technology solutions. Manage sub-projects.

Job Requirements, Skills, Education and Experience: Bachelor's degree required; a degree in Accounting, Finance, or Economics is a plus. 5+ years of business analysis or business systems analysis experience. 2+ years of financial services experience; Private Equity experience a plus. Extensive experience with Private Equity systems such as Allvue, Investran or eFront. Performance, Analytics or Business Intelligence experience a plus. Familiarity with Agile SDLC and/or Project Management methods a plus. Experience with data integration and mapping (e.g., ETL, ELT, etc.) a plus. Familiarity with development languages a plus (e.g., VBA, SQL, Python, Java, C++, etc.). Extensive Microsoft Office skills - Excel, Word, Visio, PowerPoint. Strong verbal and written communication skills. Strong attention to detail and accuracy. Ability to learn on-the-job quickly and apply learning to recommend solutions to issues. Ability to quickly adapt to changes in prioritization, processes, and procedures. Superior problem solving, judgement and decision-making skills. Ability to think independently, prioritize, multi-task and meet deadlines.

Posted 1 month ago

Apply

5 - 10 years

0 Lacs

Mysore, Bengaluru, Kochi

Hybrid

Open & Direct Walk-in Drive event | Hexaware Technologies Snowflake & Python Data Engineer/Architect in Bangalore, Karnataka on 12th April [Saturday] 2025 - Snowflake / Python / SQL & PySpark

Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Bangalore, Karnataka on 12th April [Saturday] 2025, and we believe your skills in Snowflake / SNOWPARK / Python / SQL & PySpark align perfectly with what we are seeking.

Details of the Walk-in Drive:
Date: 12th April [Saturday] 2025
Experience: 4 years to 12 years
Time: 9.00 AM to 5 PM
Venue: Hotel Grand Mercure Bangalore, 12th Main Rd, 3rd Block, Koramangala, Bengaluru, Karnataka 560034
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Work Location: Open (Hyderabad / Bangalore / Pune / Mumbai / Noida / Dehradun / Chennai / Coimbatore)

Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following: SNOWFLAKE, Python, Fivetran, SNOWPARK & SNOWPIPE, SQL, PySpark/Spark, DWH.

Roles and Responsibilities: As a part of our dynamic team, you will be responsible for: 4-15 years of total IT experience on any ETL/Snowflake cloud tool. Min 3 years of experience in Snowflake. Min 3 years of experience in querying and processing data using Python. Strong SQL, with experience in using analytical functions, materialized views, and stored procedures. Experience in data loading features of Snowflake like Stages, Streams, Tasks, and SNOWPIPE. Working knowledge of processing semi-structured data.

What to Bring: Updated resume; photo ID; passport size photo.

How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com / +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

********* Candidates with less than 4 years of total experience will not be screen-selected to attend the interview *********

Posted 2 months ago

Apply

5 - 10 years

0 Lacs

Pune, Nagpur, Mumbai (All Areas)

Hybrid

Open & Direct Walk-in Drive event | Hexaware Technologies SNOWFLAKE & SNOWPARK Data Engineer/Architect in Pune, Maharashtra on 5th April [Saturday] 2025 - Snowflake / Snowpark / SQL & PySpark

Dear Candidate, I hope this email finds you well. We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as a Data Engineer/Architect. We are hosting an Open Walk-in Drive in Pune, Maharashtra on 5th April [Saturday] 2025, and we believe your skills in Snowflake / SNOWPARK / Python / SQL & PySpark align perfectly with what we are seeking.

Details of the Walk-in Drive:
Date: 5th April [Saturday] 2025
Experience: 4 years to 12 years
Time: 9.00 AM to 5 PM
Venue: Hexaware Technologies Limited, Phase 3, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi, Pimpri-Chinchwad, Pune, Maharashtra 411057
Point of Contact: Azhagu Kumaran Mohan / +91-9789518386
Work Location: Open (Hyderabad / Bangalore / Pune / Mumbai / Noida / Dehradun / Chennai / Coimbatore)

Key Skills and Experience: As a Data Engineer, we are looking for candidates who possess expertise in the following: SNOWFLAKE, Python, Fivetran, SNOWPARK & SNOWPIPE, SQL, PySpark/Spark, DWH.

Roles and Responsibilities: As a part of our dynamic team, you will be responsible for: 4-15 years of total IT experience on any ETL/Snowflake cloud tool. Min 3 years of experience in Snowflake. Min 3 years of experience in querying and processing data using Python. Strong SQL, with experience in using analytical functions, materialized views, and stored procedures. Experience in data loading features of Snowflake like Stages, Streams, Tasks, and SNOWPIPE. Working knowledge of processing semi-structured data.

What to Bring: Updated resume; photo ID; passport size photo.

How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at AzhaguK@hexaware.com - +91-9789518386. We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

********* Candidates with less than 4 years of total experience will not be screen-selected to attend the interview *********

Posted 2 months ago

Apply

7 - 12 years

20 - 35 Lacs

Bengaluru

Remote

Hello Professionals! Hiring for an MNC. #Fulltime #Permanent
Location: Remote
Designation: Tableau Developer
Mandatory Skills: Tableau, SQL, ELT, creating marketing-related dashboards
Experience: 7+ Years
Shift: US shift (Flexible)

We are looking for a Tableau Developer with current experience creating marketing-related Tableau dashboards such as:
1. End to End Pipe Conversion Dash (Lead to ARR conversion by stage)
2. Marketing Campaign Effectiveness Dash (Pipe Gen ROI and effectiveness trending)
3. Pipeline Co-creation Dash (Impact of Marketing on Pipe Gen from other sources)

Note: Looking for candidates who are immediately available or currently serving a notice period of up to 15 days. Feel free to connect with me at arpita.subudhi@srsconsultinginc.com for any clarification you might need.

Posted 2 months ago

Apply

10 - 14 years

25 - 30 Lacs

Mumbai

Work from Office

Overview of the Role: We have an outstanding opportunity for an expert AI and Data Cloud Solutions Engineer to work with our trailblazing customers in crafting ground-breaking customer engagement roadmaps demonstrating the Salesforce applications and platform across the machine learning and LLM/GPT domains in India! The successful applicant will have a track record in driving business outcomes through technology solutions, with experience in engaging at the C-level with Business and Technology groups.

Responsibilities: Primary pre-sales technical authority for all aspects of AI usage within the Salesforce product portfolio - existing Einstein ML based capabilities and new (2023) generative AI. The majority of time (60%+) will be customer/external facing. Evangelisation of Salesforce AI capabilities. Assessing customer requirements and use cases and aligning to these capabilities. Solution proposals, working with Architects and wider Solution Engineer (SE) teams. Building reference models/ideas/approaches for inclusion of GPT based products within wider Salesforce solution architectures, especially involving Data Cloud. Alignment with customer security and privacy teams on trust capabilities and values of our solution(s). Presenting at multiple customer events, from single account sessions through to major strategic events (World Tour, Dreamforce). Representing Salesforce at other events (subject to PM approval). Sales and SE organisation education and enablement, e.g. roadmap - all roles across all product areas. Bridge/primary contact point to product management. Provide thought leadership in how large enterprise organisations can drive customer success through digital transformation. Ability to uncover the challenges and issues a business is facing by running successful and targeted discovery sessions and workshops. Be an innovator who can build new solutions using out-of-the-box thinking. Demonstrate business value of our AI solutions to business using solution presentations, demonstrations and prototypes. Build roadmaps that clearly articulate how partners can implement and accept solutions to move from current to future state. Deliver functional and technical responses to RFPs/RFIs. Work as an excellent teammate by chipping in, learning and sharing new knowledge. Demonstrate a conceptual knowledge of how to integrate cloud applications to existing business applications and technology. Lead multiple customer engagements concurrently. Be self-motivated, flexible, and take initiative.

Required Qualifications: Experience will be evaluated based on the core proficiencies of the role. 4+ years working directly in the commercial technology space with AI products and solutions. Data knowledge - data science, data lakes and warehouses, ETL, ELT, data quality. AI knowledge - application of algorithms and models to solve business problems (ML, LLMs, GPT). 10+ years working in a sales, pre-sales, consulting or related function in a commercial software company. Strong focus and experience in pre-sales or implementation is required. Experience in demonstrating customer engagement solutions; ability to understand and drive use cases and customer journeys, and to draw 'day in the life of' views across different LOBs. Business analysis / business case / return on investment construction. Demonstrable experience in presenting and communicating complex concepts to large audiences. A broad understanding of and ability to articulate the benefits of CRM, Sales, Service and Marketing Cloud offerings. Strong verbal and written communications skills with a focus on needs analysis, positioning, business justification, and closing techniques. Continuous learning demeanor with a demonstrated history of self enablement and advancement in both technology and behavioural areas.

Preferred Qualifications: Expertise in an AI related subject (ML, deep learning, NLP etc.). Familiar with technologies such as OpenAI, Google Vertex, Amazon Sagemaker, Snowflake, Databricks etc.

Posted 2 months ago

Apply

7 - 11 years

15 - 19 Lacs

Hyderabad

Work from Office

ABOUT AMGEN Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. What you will do Role Description: We are seeking a Data Solutions Architect to design, implement, and optimize scalable and high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies. Roles & Responsibilities: Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics. Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains. Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms. Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms. Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities. Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms. Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions. Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency. Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities. Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals. Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability. What we expect of you Must-Have Skills: Experience in data architecture, enterprise data management, and cloud-based analytics solutions. Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks. Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization. Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions. Deep understanding of data governance, security, metadata management, and access control frameworks. 
- Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
- Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
- Strong problem-solving, strategic thinking, and technical leadership skills.
- Experience with SQL/NoSQL databases and with vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with Apache Spark.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
- Deep expertise in the biotech and pharma industries.
- Experience with Data Mesh architectures and federated data governance models.
- Certification in cloud data platforms or enterprise architecture frameworks.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.
- Familiarity with BI and analytics platforms for enabling self-service analytics and enterprise reporting.

Education and Professional Certifications:
- Doctorate degree with 6-8+ years of experience in Computer Science, IT or a related field, OR
- Master’s degree with 8-10+ years of experience in Computer Science, IT or a related field, OR
- Bachelor’s degree with 10-12+ years of experience in Computer Science, IT or a related field.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly and stay organized and detail-oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
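As a rough illustration of the ETL/ELT pipeline work this posting describes, here is a minimal PySpark batch sketch; the bucket paths, column names, and transformations are invented for illustration, not taken from the posting:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: load raw CSV events, standardize them, write a partitioned table.
spark = SparkSession.builder.appName("elt-sketch").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-bucket/raw/events/"))  # placeholder path

clean = (raw
         .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalize timestamps
         .withColumn("event_date", F.to_date("event_ts"))     # derive a partition column
         .dropDuplicates(["event_id"])                        # basic deduplication
         .filter(F.col("event_id").isNotNull()))              # drop malformed rows

(clean.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-bucket/curated/events/"))  # placeholder path

Partitioning the curated output by date is one common way the storage-strategy and query-performance concerns listed above show up in practice.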

Posted 2 months ago

Apply

13 - 20 years

45 - 50 Lacs

Pune

Work from Office


About The Role
Job Title: Solution Architect, VP
Location: Pune, India

Role Description
As a solution architect supporting the individual business-aligned, strategic or regulatory book of work, you will work closely with the projects' business stakeholders, business analysts, data solution architects, engineers, the lead solution architect, the principal architect and the Chief Architect to help projects break business requirements into implementable building blocks and design the solution's target architecture. It is the solution architect's responsibility to align the design with the general (enterprise) architecture principles, apply agreed best practices and patterns, and help the engineering teams deliver against the architecture in an event-driven, service-oriented environment.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities
- Work with SMEs and project stakeholders to derive the individual components of the solution
- Design the target architecture for a new solution, or when adding new capabilities to an existing solution
- Ensure proper documentation of the High-Level Design and Low-Level Design of the delivered solution
- Quality-assure the delivery against the agreed and approved architecture, i.e. provide delivery guidance and governance
- Prepare the High-Level Design for review and approval by design authorities so projects can proceed into implementation
- Support creation of the Low-Level Design as it is being delivered, to support final go-live

Your skills and experience
- Very proficient at designing/architecting solutions in an event-driven environment leveraging service-oriented principles
- Proficient in Java and the delivery of Spring-based services
- Proficient at building systems in a decoupled, event-driven environment leveraging messaging/streaming, e.g. Kafka
- Very good experience designing and implementing data streaming and data ingest pipelines for heavy throughput while providing different delivery guarantees (at-least-once, exactly-once)
- Very good understanding of non-streaming ETL and ELT approaches for data ingest
- Solid understanding of containerized, distributed systems and of building auto-scalable, stateless services in a cluster (concepts of quorum and consensus)
- Solid understanding of standard RDBMS systems and data-engineering-level proficiency in T-SQL and/or PL/SQL and/or PL/pgSQL, preferably all
- Good understanding of how RDBMSs work in general; specific tuning experience on SQL Server, Oracle or PostgreSQL is welcome
- Understanding of different data modelling approaches and their respective pros and cons (e.g. normalized, denormalized, star, snowflake, Data Vault 2.0, ...)
- Strong working experience with GCP architecture is a benefit (Apigee, BigQuery, Pub/Sub, Cloud SQL, Dataflow, ...)
- An appropriate GCP architecture-level certification even more so
- Experience with other languages is more than welcome (C#, Python)

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
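To illustrate the at-least-once delivery guarantee this posting mentions, here is a minimal consumer sketch using the confluent-kafka Python client; the broker address, topic, group id, and processing step are placeholders, and the key point is that offsets are committed only after processing succeeds:

from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    # Hypothetical processing step; replace with real logic.
    print(payload)

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "example-group",             # placeholder group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,             # commit manually for at-least-once
})
consumer.subscribe(["example-topic"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        process(msg.value())
        # Committing after processing means a crash mid-batch causes redelivery,
        # never loss: at-least-once semantics.
        consumer.commit(message=msg, asynchronous=False)
finally:
    consumer.close()

Exactly-once delivery, by contrast, additionally requires idempotent or transactional writes on the producing and consuming side.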

Posted 2 months ago

Apply

3 - 5 years

4 - 9 Lacs

Gurgaon

Work from Office


AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.

At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Data Engineer (internally known as a Technical Consultant)
AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will work on a variety of data projects, including orchestrating pipelines using modern data engineering tools and architectures as well as designing and integrating existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets.

Responsibilities:
- Build, operationalize, and monitor data processing systems
- Create robust, automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and a cloud-native toolset
- Implement custom applications using tools such as Kinesis, Lambda, and other cloud-native tools as required to address streaming use cases
- Engineer and support data structures, including SQL and NoSQL databases
- Engineer and maintain ELT processes for loading data lakes (Snowflake, cloud storage, Hadoop)
- Leverage the right tools for the job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources

Qualifications:
- 3+ years of professional technical experience
- 3+ years of hands-on data warehousing experience
- 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake
- 2+ years of experience with programming languages such as Python
- 3+ years of experience working in cloud environments (Azure)
- 2+ years of experience with Redshift
- Strong client-facing communication and facilitation skills

Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP
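As a rough sketch of the Kinesis/Lambda streaming use cases this posting names, here is a minimal AWS Lambda handler that decodes a batch of Kinesis records; the assumption that each payload is a JSON document is mine, not the posting's:

import base64
import json

def handler(event, context):
    """Minimal sketch of a Lambda function consuming a Kinesis event batch."""
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)  # assumes JSON payloads
        print(doc)  # placeholder for real processing, e.g. writing to a data lake
    return {"processed": len(event["Records"])}

In a real deployment the function would be wired to a stream via an event source mapping, with batch size and retry behavior tuned for the workload.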

Posted 2 months ago

Apply

4 - 9 years

6 - 10 Lacs

Hyderabad

Work from Office


Data Engineer (Graph) – Research Data and Analytics

What you will do
Let’s do this. Let’s change the world. Research’s Semantic Graph Team is seeking a qualified individual to design, build, and maintain solutions for scientific data that drive business decisions for Research. The successful candidate will construct scalable, high-performance data engineering solutions for extensive scientific datasets and collaborate with Research partners to address their data requirements. The ideal candidate should have experience in the pharmaceutical or biotech industry, leveraging expertise in semantics, taxonomies, and linked data principles to ensure data harmonization and interoperability. Additionally, this individual should demonstrate robust technical skills, proficiency with data engineering technologies, and a thorough understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain semantic data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to standard processes for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Doctorate degree, OR
- Master’s degree with 2-4 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Bachelor’s degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Diploma with 7-9 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field

Preferred Qualifications and Experience:
- 4+ years of experience in designing and supporting biopharma scientific research data analytics (software platforms)

Functional Skills:
Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with data technologies and platforms such as Databricks, workflow orchestration, and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- System administration skills, such as managing Linux and Windows servers, configuring network infrastructure, and automating tasks with shell scripting; examples include setting up and maintaining virtual machines, troubleshooting server issues, and ensuring data security through regular updates and backups
- Solid understanding of data modeling, data warehousing, and data integration concepts
- Solid experience using RDBMSs (e.g. Oracle, MySQL, SQL Server, PostgreSQL)
- Knowledge of cloud data platforms (AWS preferred)
- Experience with data visualization tools (e.g. Dash, Plotly, Spotfire)
- Experience with diagramming and collaboration tools such as Miro or Lucidchart for process mapping and brainstorming
- Experience writing and maintaining user documentation in Confluence
- Understanding of data governance frameworks, tools, and standard processes

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
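A minimal sketch of the pytest-based data testing this posting alludes to; the sample DataFrame, column names, and checks are invented to illustrate the pattern, not drawn from the role:

import pandas as pd
import pytest

@pytest.fixture
def compounds() -> pd.DataFrame:
    # Invented sample standing in for a real scientific dataset.
    return pd.DataFrame({
        "compound_id": ["C001", "C002", "C003"],
        "molecular_weight": [180.16, 151.16, 194.19],
    })

def test_compound_ids_are_unique(compounds):
    # Duplicate identifiers would corrupt downstream joins and graph links.
    assert compounds["compound_id"].is_unique

def test_molecular_weight_is_positive(compounds):
    # A physical quantity that must be strictly positive.
    assert (compounds["molecular_weight"] > 0).all()

Checks like these typically run in CI against pipeline outputs, turning data-quality expectations into automated regression tests.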

Posted 2 months ago

Apply

3 - 5 years

3 - 6 Lacs

Hyderabad

Work from Office


Sr Data Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role you will design, build and maintain data lake solutions for scientific data that drive business decisions for Research. You will build scalable, high-performance data engineering solutions for large scientific datasets and collaborate with Research customers.

Roles & Responsibilities:
- Design, develop, and implement data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Develop and maintain data models for biopharma scientific data, data dictionaries, and other documentation to ensure data accuracy and consistency
- Optimize large datasets for query performance
- Collaborate with global multi-functional teams, including research scientists, to understand data requirements and design solutions that meet business needs
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate with Data Architects, Business SMEs, Software Engineers and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to standard methodologies for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
- Maintain comprehensive documentation of processes, systems, and solutions

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The ideal candidate possesses experience in the pharmaceutical or biotech industry, demonstrates deep technical skills, is proficient with big data technologies, and has a deep understanding of data architecture and ETL processes.

Basic Qualifications:
- Master’s degree with 4-6 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Bachelor’s degree with 6-8 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR
- Diploma with 10-12 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field

Preferred Qualifications:
- 3+ years of experience in designing and supporting biopharma scientific research data pipelines

Must-Have Skills:
- Proficiency in SQL and Python for data engineering, test automation frameworks (pytest), and scripting tasks
- Hands-on experience with big data technologies and platforms such as Databricks, Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning of big data processing
- Excellent problem-solving skills and the ability to work with large, complex datasets

Good-to-Have Skills:
- A passion for tackling complex challenges in drug discovery with technology and data
- Experience writing and maintaining technical documentation in Confluence

Professional Certifications:
- Databricks Certified Data Engineer Professional preferred

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way.
In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
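For a sense of the Spark performance tuning this posting lists among its must-have skills, a small PySpark sketch of a broadcast join and a date-partitioned write; the dataset paths and column names are made up:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
assays = spark.read.parquet("s3://example-bucket/assays/")    # large
targets = spark.read.parquet("s3://example-bucket/targets/")  # small

# Broadcasting the small table avoids shuffling the large one across the cluster.
joined = assays.join(F.broadcast(targets), on="target_id", how="left")

# Repartition by the partition column before writing to keep file counts manageable.
(joined
 .repartition("run_date")
 .write
 .mode("overwrite")
 .partitionBy("run_date")
 .parquet("s3://example-bucket/curated/assays_by_target/"))

Broadcast joins and sensible partitioning are two of the most common first levers when tuning big data workloads of the kind described above.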

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies