Home
Jobs

1885 Data Engineering Jobs - Page 26

JobPe aggregates results for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Strong proficiency in Tableau Desktop, Tableau Server, and SQL. Experience with SQL and data manipulation.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 4-7 years overall, with 3+ years of relevant experience. Good hands-on experience in Tableau; should be confident in Tableau visualization skills.
Preferred technical and professional experience: Effective communication and presentation skills. Industry expertise / specialization.
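The role above pairs Tableau visualization with SQL data manipulation. As a hedged illustration (the table and column names are invented, not from the posting), this is the shape of aggregate query a Tableau data source or custom SQL connection might sit on:

```python
import sqlite3

# Hypothetical sales table; all names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("South", 120.0), ("South", 80.0), ("North", 50.0)])

# Aggregate revenue per region -- the kind of query a dashboard
# visualization is typically built on.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM sales GROUP BY region ORDER BY revenue DESC"
).fetchall()
print(rows)  # [('South', 200.0), ('North', 50.0)]
```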

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers such as Kafka.
Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developer.
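The ingest, process, and transform responsibilities above follow a pipeline pattern that can be sketched in plain Python; a production version of this would run on Spark with Kafka or file sources, and every record field here is hypothetical:

```python
from collections import defaultdict

# Toy ingest -> transform -> aggregate pipeline.
def ingest(events):
    for raw in events:                     # e.g. lines from a file or stream
        device, reading = raw.split(",")
        yield {"device": device, "reading": float(reading)}

def transform(records):
    for rec in records:                    # cleanse: drop invalid readings
        if rec["reading"] >= 0:
            yield rec

def aggregate(records):
    totals = defaultdict(float)
    for rec in records:                    # reduce: sum readings per device
        totals[rec["device"]] += rec["reading"]
    return dict(totals)

events = ["a,1.5", "b,2.0", "a,-9.0", "a,0.5"]
result = aggregate(transform(ingest(events)))
print(result)  # {'a': 2.0, 'b': 2.0}
```

Spark's DataFrame API expresses the same three stages as reads, filters, and grouped aggregations distributed across a cluster.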

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office

As a Senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology (or equivalent) and its associated work products. You will work on projects that help clients integrate strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries. Your primary responsibilities include: Strategic SAP Solution Focus: working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs. Comprehensive Solution Delivery: involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 5-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering. Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on AWS. Exposure to streaming solutions and message brokers such as Kafka. Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills.
Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developer.

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking self-starters who can work in a diverse, fast-paced environment as part of our Enterprise Data team. The candidate will be responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.
Responsibilities: Develop solutions leveraging cloud big data technology to ingest, process, and analyze large, disparate data sets to exceed business requirements. Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to a modern technology platform. Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org. Develop systems that ingest, cleanse, and normalize diverse datasets; develop data pipelines from various internal and external sources; and build structure for previously unstructured data. Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling. Perform unit testing, system integration testing, and regression testing, and assist with user acceptance testing. Consult with the business to develop documentation and communication materials that ensure accurate usage and interpretation of JLL data. Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations; ensure data privacy, confidentiality, and integrity throughout the data engineering processes. Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
Experience & Education: Minimum of 2 years of experience as a data developer using Python, PySpark, Spark SQL, SQL Server, and ETL concepts. Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science. Experience with the Azure cloud platform, Databricks, and Azure Storage. Effective written and verbal communication skills, including technical writing. Excellent technical, analytical, and organizational skills.
Technical Skills & Competencies: Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming, and developing data pipelines driven by events/queues. Hands-on experience with real-time and near-real-time processing, and ready to code. Hands-on experience with PySpark, Databricks, and Spark SQL. Knowledge of JSON, Parquet, and other file formats, and the ability to work effectively with them. Knowledge of NoSQL databases such as HBase, MongoDB, Cosmos DB, etc. Preferred cloud experience on Azure or AWS: Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc. Team player; a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously in a fast-paced environment while working with cross-functional teams.
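The posting's point about building structure for previously unstructured data can be illustrated with a small hypothetical sketch: flattening semi-structured JSON records into tabular rows, the same shape of work a PySpark job (explode plus select) would do at scale. All field names here are invented:

```python
import json

# Hypothetical semi-structured records, as might land raw in a data lake.
raw = '''[
  {"id": 1, "meta": {"city": "Bengaluru"}, "readings": [10, 12]},
  {"id": 2, "meta": {"city": "Mumbai"},    "readings": [7]}
]'''

def flatten(doc):
    # Emit one tabular row per nested reading: structure for
    # previously unstructured data, as the posting describes.
    for item in doc:
        for r in item["readings"]:
            yield {"id": item["id"],
                   "city": item["meta"]["city"],
                   "reading": r}

rows = list(flatten(json.loads(raw)))
print(rows[0])  # {'id': 1, 'city': 'Bengaluru', 'reading': 10}
```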

Posted 1 week ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Very good experience with the Continuous Flow Graph tool used for point-based development. Design, develop, and maintain ETL processes using Ab Initio tools. Write, test, and deploy Ab Initio graphs, scripts, and other necessary components. Troubleshoot and resolve data processing issues and improve performance.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 8 years overall, with 5+ years relevant. Extract, transform, and load data from various sources into data warehouses, operational data stores, or other target systems. Work with different data formats, including structured, semi-structured, and unstructured data.
Preferred technical and professional experience: Effective communication and presentation skills. Industry expertise / specialization.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in developing data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies. Develop streaming pipelines. Work with Hadoop / Azure ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 5-8 years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering. Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on Azure. Experience with Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server. Good to excellent SQL skills.
Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developer. Knowledge or experience of Snowflake is an added advantage.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques; designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements; working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours; building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks. The ideal candidate has Databricks, Data Lake, and Python programming skills, experience deploying to Databricks, and familiarity with Azure Data Factory.
Preferred technical and professional experience: Good communication skills. 3+ years of experience with ADF/Databricks/Data Lake. Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer. Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems. Must-have skills: Databricks Unified Data Analytics Platform. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Mentor junior team members to enhance their skills and knowledge in data engineering. Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills: Must-have: proficiency in the Databricks Unified Data Analytics Platform. Experience with data integration and ETL tools. Strong understanding of data modeling and database design principles. Familiarity with cloud platforms and services related to data storage and processing. Knowledge of programming languages such as Python or Scala for data manipulation.
Additional Information: The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform. This position is based at our Bengaluru office. 15 years of full-time education is required.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 9 Lacs

Bengaluru

Work from Office

What You'll Do: Write complex SQL queries for data extraction; perform exploratory data analysis (EDA) to uncover insights. Apply strong proficiency in Python and PySpark for scalable data processing and analytics. Create, transform, and optimize features to enhance model performance. Train, evaluate, and maintain machine learning models in production. Write efficient, maintainable, version-controlled code that handles large datasets. Regularly update internal teams and clients on project progress, results, and insights. Conduct hypothesis testing and experiment analysis to drive data-driven decisions using A/B testing. Scale machine learning algorithms to work on massive data sets under strict SLAs. Automate operations pipelines that run at regular intervals to update required datasets.
What You'll Bring: A master's or bachelor's degree in computer science or a related field from a top university. 4+ years of hands-on experience in machine learning (ML) or data science with a focus on building scalable solutions. Strong programming expertise in Python and PySpark is a must. Proven ability to write highly optimized SQL queries for efficient data extraction and transformation. Experience in feature engineering, inferencing pipelines, and real-time model prediction deployment. Strong fundamentals in applied statistics, with expertise in A/B test design and hypothesis testing. Solid understanding of distributed computing systems and hands-on experience with at least one cloud platform (GCP, AWS, or Azure).
Additional Skills: Understanding of Git, DevOps, CI/CD, and data security; experience designing on a cloud platform. Experience automating operations using a job scheduler such as Airflow. Experience in data engineering on big data systems.
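The A/B testing and hypothesis-testing workflow mentioned above can be sketched with a standard two-proportion z-test; the conversion counts below are made up for illustration:

```python
from math import sqrt, erf

# Two-proportion z-test: did variant B convert better than variant A?
def ab_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 2.0% vs 2.6% conversion on 10k users each.
z, p = ab_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2), round(p, 4))
```

With a p-value below the usual 0.05 threshold, this toy experiment would reject the null hypothesis of equal conversion rates.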

Posted 1 week ago

Apply

6.0 - 8.0 years

8 - 11 Lacs

Hyderabad

Work from Office

What you will do: In this vital role, we are seeking a highly skilled and hands-on Senior Software Engineer (Search) to drive the development of intelligent, scalable search systems across our pharmaceutical organization. You'll work at the intersection of software engineering, AI, and life sciences to enable seamless access to structured and unstructured content spanning research papers, clinical trial data, regulatory documents, and internal scientific knowledge. This is a high-impact role where your code directly accelerates innovation and decision-making in drug development and healthcare delivery. Design, implement, and optimize search services using technologies such as Elasticsearch, OpenSearch, Solr, or vector search frameworks. Collaborate with data scientists and analysts to deliver data models and insights. Develop custom ranking algorithms, relevancy tuning, and semantic search capabilities tailored to scientific and medical content. Support the development of intelligent search features like query understanding, question answering, summarization, and entity recognition. Build and maintain robust, cloud-native APIs and backend services to support high-availability search infrastructure (e.g., AWS, GCP, Azure). Implement CI/CD pipelines, observability, and monitoring for production-grade search systems. Work closely with Product Owners and the Tech Architect. Enable indexing of both structured (e.g., clinical trial metadata) and unstructured (e.g., PDFs, research papers) content. Design and develop modern data management tools to curate our most important data sets, models, and processes, while identifying areas for process automation and further efficiencies. Expertise in programming languages such as Python, Java, React, TypeScript, or similar. Strong experience with data storage and processing technologies (e.g., Hadoop, Spark, Kafka, Airflow, SQL/NoSQL databases).
Demonstrate strong initiative and the ability to work with minimal supervision or direction. Strong experience with cloud infrastructure (AWS, Azure, or GCP) and infrastructure as code such as Terraform. In-depth knowledge of relational and columnar SQL databases, including database design. Expertise in data warehousing concepts (e.g., star schema, entitlement implementations, SQL vs. NoSQL modeling, milestoning, indexing, partitioning). Experience with REST and/or GraphQL. Experience creating Spark jobs for data transformation and aggregation. Experience with distributed, multi-tiered systems, algorithms, and relational databases. Strong rapid-prototyping skills and the ability to quickly translate concepts into working code. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Analyze and understand the functional and technical requirements of applications. Identify and resolve software bugs and performance issues. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes.
Basic Qualifications: Degree in computer science & engineering preferred, with 6-8 years of software development experience. Proficient in Databricks, data engineering, Python, search algorithms using NLP/AI models, GCP cloud services, and GraphQL. Hands-on experience with search technologies (Elasticsearch, Solr, OpenSearch, or Lucene). Hands-on experience with full-stack software development. Proficient in Java, Python, Fast Python, Databricks/RDS, data engineering, S3 buckets, ETL, Hadoop, Spark, Airflow, and AWS Lambda. Experience with data streaming frameworks (Apache Kafka, Flink). Experience with cloud platforms (AWS, Azure, Google Cloud) and related services (e.g., S3, Redshift, BigQuery, Databricks). Hands-on experience with various cloud services and an understanding of their pros and cons under well-architected cloud design principles. Working knowledge of open-source tools such as AWS Lambda. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
Preferred Qualifications: Experience in Python, Java, React, Fast Python, TypeScript, JavaScript, and CSS/HTML is desirable. Experienced with API integration, serverless, and microservices architecture. Experience in Databricks, PySpark, Spark, SQL, ETL, and Kafka. Solid understanding of data governance, data security, and data quality best practices. Experience with unit testing, building, and debugging code. Experienced with the AWS/Azure platform, building and deploying code. Experience with vector databases for large language models, Databricks, or RDS. Experience with DevOps CI/CD build and deployment pipelines. Experience in Agile software development methodologies. Experience in end-to-end testing. Experience with additional modern database terminologies.
Good to Have Skills: Willingness to work on AI applications. Experience in MLOps, React, JavaScript, Java, and GCP search engines. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis.
Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
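The indexing and relevancy-tuning responsibilities above rest on the inverted-index idea that Elasticsearch, Solr, and Lucene implement. A toy sketch with invented documents and simple term-frequency scoring (real engines add IDF weighting, field boosts, and semantic signals):

```python
from collections import defaultdict

# Invented documents standing in for scientific content.
docs = {
    1: "clinical trial results for oncology drug",
    2: "regulatory submission for oncology compound",
    3: "internal memo on office relocation",
}

# Build the inverted index: term -> [(doc_id, term_frequency)].
index = defaultdict(list)
for doc_id, text in docs.items():
    counts = defaultdict(int)
    for tok in text.lower().split():
        counts[tok] += 1
    for tok, tf in counts.items():
        index[tok].append((doc_id, tf))

def search(query):
    # Score each matching doc by summed term frequency over query terms.
    scores = defaultdict(int)
    for term in query.lower().split():
        for doc_id, tf in index.get(term, []):
            scores[doc_id] += tf
    return sorted(scores, key=scores.get, reverse=True)

print(search("oncology trial"))  # [1, 2]: doc 1 matches both terms
```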

Posted 1 week ago

Apply

8.0 - 10.0 years

10 - 15 Lacs

Hyderabad

Work from Office

What you will do In this vital role you will manage and oversee the development of robust Data Architectures, Frameworks, Data product Solutions, while mentoring and guiding a small team of data engineers. You will be responsible for leading the development, implementation, and management of enterprise-level data data engineering frameworks and solutions that support the organization's data-driven strategic initiatives. You will continuously strive for innovation in the technologies and practices used for data engineering and build enterprise scale data frameworks and expert data engineers. This role will closely collaborate with counterparts in US and EU. You will collaborate with cross-functional teams, including platform, functional IT, and business stakeholders, to ensure that the solutions that are built align with business goals and are scalable, secure, and efficient. Roles & Responsibilities: Architect & Implement of scalable, high-performance Modern Data Engineering solutions (applications) that include data analysis, data ingestion, storage, data transformation (data pipelines), and analytics. Evaluate the new trends in data engineering area and build rapid prototypes Build Data Solution Architectures and Frameworks to accelerate the Data Engineering processes Build frameworks to improve the re-usability, reduce the development time and cost of data management & governance Integrate AI into data engineering practices to bring efficiency through automation Build best practices in Data Engineering capability and ensure their adoption across the product teams Build and nurture strong relationships with stakeholders, emphasizing value-focused engagement and partnership to align data initiatives with broader business goals. Lead and motivate a high-performing data engineering team to deliver exceptional results. Provide expert guidance and mentorship to the data engineering team, fostering a culture of innovation and best practices. 
Collaborate with counterparts in the US and EU, and work with business functions, functional IT teams, and others to understand their data needs and ensure the solutions meet the requirements. Engage with business stakeholders to understand their needs and priorities, ensuring that the data and analytics solutions built deliver real value and meet business objectives. Drive adoption of the data and analytics solutions by partnering with business stakeholders and functional IT teams in rolling out change management, training, communications, etc. Talent Growth & People Leadership: Lead, mentor, and manage a high-performing team of engineers, fostering an environment that encourages learning, collaboration, and innovation. Focus on nurturing future leaders and providing growth opportunities through coaching, training, and mentorship. Recruitment & Team Expansion: Develop a comprehensive talent strategy that includes recruitment, retention, onboarding, and career development, and build a diverse and inclusive team that drives innovation, aligns with Amgen's culture and values, and delivers business priorities. Organizational Leadership: Work closely with senior leaders within the function and across the Amgen India site to align engineering goals with broader organizational objectives, and demonstrate leadership by contributing to strategic discussions. What we expect of you We are all different, yet we all use our unique contributions to serve patients. The professional we seek is someone with these qualifications. 
Basic Qualifications: Master's degree and 8 to 10 years in computer science and engineering (preferred; other engineering fields will be considered), OR Bachelor's degree and 12 to 14 years in computer science and engineering (preferred; other engineering fields will be considered), OR Diploma and 16 to 18 years in computer science and engineering (preferred; other engineering fields will be considered). 10+ years of experience in Data Engineering, working in COE development or product building. 5+ years of experience leading enterprise-scale data engineering solution development. Experience building enterprise-scale data lake and data fabric solutions on cloud, leveraging modern approaches like Data Mesh. Demonstrated proficiency in leveraging cloud platforms (AWS, Azure, GCP) for data engineering solutions. Strong understanding of cloud architecture principles and cost optimization strategies. Hands-on experience using Databricks, Snowflake, PySpark, Python, and SQL. Proven ability to lead and develop high-performing data engineering teams. Strong problem-solving, analytical, and critical thinking skills to address complex data challenges. Preferred Qualifications: Experience integrating AI with Data Engineering and building AI-ready data lakes. Prior experience in data modeling, especially star-schema modeling concepts. Familiarity with ontologies, information modeling, and graph databases. Experience working with agile development methodologies such as Scaled Agile. Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps. Education and Professional Certifications: SAFe for Teams certification (preferred); Databricks certifications; AWS cloud certification. Soft Skills: Excellent analytical and troubleshooting skills. 
Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
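As an illustration of the reusable data engineering frameworks this listing describes, here is a minimal sketch of a composable pipeline in plain Python. All names here (`Pipeline`, `drop_missing`, `uppercase_site`) are hypothetical, invented for the example, and not part of any stack named above:

```python
from typing import Callable, Iterable, List

Record = dict
Step = Callable[[Iterable[Record]], Iterable[Record]]

class Pipeline:
    """Compose named transform steps into one reusable data pipeline."""

    def __init__(self) -> None:
        self.steps: List[Step] = []

    def add(self, step: Step) -> "Pipeline":
        # Return self so steps can be chained fluently.
        self.steps.append(step)
        return self

    def run(self, records: Iterable[Record]) -> List[Record]:
        data: Iterable[Record] = records
        for step in self.steps:
            data = step(data)
        return list(data)

# Example steps: drop incomplete rows, then normalize a field.
def drop_missing(rows):
    return (r for r in rows if r.get("id") is not None)

def uppercase_site(rows):
    return ({**r, "site": r["site"].upper()} for r in rows)

pipeline = Pipeline().add(drop_missing).add(uppercase_site)
result = pipeline.run([{"id": 1, "site": "blr"}, {"id": None, "site": "hyd"}])
# result == [{"id": 1, "site": "BLR"}]
```

Because each step is just a function over an iterable, steps can be shared across product teams, which is the re-usability goal the listing emphasizes.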

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 27 Lacs

Pune

Hybrid


6+ years of experience as a Data Engineer; expertise in the Azure platform (Azure SQL DB, ADF, and Azure Synapse); 5+ years of experience in database development using SQL; knowledge of data modeling, ETL processes, and data warehouse design principles.
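The data modeling and ETL concepts this listing asks for can be sketched in a few lines of plain Python: a dimension table assigning surrogate keys, and fact rows that reference them, as in a star schema. The function names are invented for the example:

```python
def load_dimension(values, dim=None):
    """Assign a stable surrogate key to each distinct natural key."""
    dim = {} if dim is None else dim
    for v in values:
        dim.setdefault(v, len(dim) + 1)  # new keys get the next integer
    return dim

def build_fact_rows(sales, product_dim):
    """Replace natural keys in fact rows with dimension surrogate keys."""
    return [{"product_sk": product_dim[s["product"]], "amount": s["amount"]}
            for s in sales]

sales = [{"product": "A12", "amount": 250},
         {"product": "B07", "amount": 90},
         {"product": "A12", "amount": 310}]
product_dim = load_dimension(s["product"] for s in sales)
facts = build_fact_rows(sales, product_dim)
# product_dim == {"A12": 1, "B07": 2}; facts carry only surrogate keys
```

In a real warehouse the dimension would live in a table and the key assignment in the ETL tool, but the separation of dimension load from fact load is the same.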

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Chennai

Hybrid


Senior Data Architect GCP | 8 - 12 Yrs | Chennai Location: Chennai Notice Period: Immediate / Serving Notice / 30 Days Max Experience: 8 - 12 Years Employment Type: Full Time About the Role: Join our dynamic Materials Management Platform (MMP). This platform is redefining how we plan and manage inventory across Product Development, Manufacturing, Finance, Purchasing, and N-Tier Supply Chain systems. We are looking for a highly experienced Data Architect who excels in designing and deploying data-centric architectures on GCP. You'll work across modern and legacy ecosystems, building highly scalable, secure, and efficient data solutions that power real-time and batch operations. Must-Have Skills (Top Priority): GCP (Google Cloud Platform): core expertise. Data Architecture & Engineering: strong foundations in large-scale design. BigQuery, GCP Pub/Sub, Airflow. Java / Python / Spark / SQL / Scala. Streaming & Batch Pipelines. Microservices, REST APIs. DevOps Tools: Terraform, GitHub Actions, Tekton, Docker. RDBMS: MySQL, PostgreSQL, SQL Server. Good to Have: Cloud Solution Architecture certification. Automotive domain experience. Onshore-offshore collaboration experience. Agile methodology exposure (JIRA). Education & Background: Bachelor's or equivalent in Computer Science / IT / Engineering. 8+ years in data engineering / cloud software development. Proven experience in launching data products at scale.

Posted 1 week ago

Apply

5.0 - 9.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid


Salary: 20 to 35 LPA Exp: 5 to 8 years Location: Gurgaon (Hybrid) Notice: Immediate to 30 days Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases. Troubleshoot issues related to data processing workflows and provide timely resolutions. Desired Candidate Profile: 5-9 years of experience in data engineering with expertise in GCP and BigQuery. Strong understanding of GCP platform administration, including Compute Engine (Dataproc), Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, etc. Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.
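The ingest-transform-sink pattern described above (Pub/Sub feeding a processing pipeline) can be simulated with nothing but the Python standard library. This is an illustrative sketch only: `topic`, `enrich`, and the message fields are invented, and this is not the actual GCP client API:

```python
import json
import queue

# Stand-in for a Pub/Sub topic: messages arrive as JSON-encoded bytes.
topic = queue.Queue()
for msg in ({"order_id": 1, "qty": 2}, {"order_id": 2, "qty": 5}):
    topic.put(json.dumps(msg).encode("utf-8"))

def enrich(raw: bytes) -> dict:
    """Decode one message and attach a derived field (the 'transform' step)."""
    event = json.loads(raw)
    event["priority"] = "high" if event["qty"] >= 5 else "normal"
    return event

# Drain the topic into a sink, as a Dataflow-style pipeline would.
sink = []
while not topic.empty():
    sink.append(enrich(topic.get()))
# sink[1]["priority"] == "high"
```

In production the queue would be a Pub/Sub subscription and the sink a BigQuery table or Cloud Storage object, but the decode, transform, and write stages map one-to-one.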

Posted 1 week ago

Apply

5.0 - 7.0 years

6 - 8 Lacs

Hyderabad

Work from Office


Role Overview We are seeking an experienced .NET Backend Developer with strong Azure Data Engineering skills to join our growing team in Hyderabad. You will work closely with cross-functional teams to build scalable backend systems, modern APIs, and data pipelines using cutting-edge tools like Azure Databricks and MS Fabric. Technical Skills (Must-Have) Strong hands-on experience in C#, SQL Server, and OOP concepts Proficiency with .NET Core, ASP.NET Core, Web API, Entity Framework (v6 or above) Strong understanding of Microservices Architecture Experience with Azure Cloud technologies including Data Engineering, Azure Databricks, MS Fabric, Azure SQL, Blob Storage, etc. Experience with Snowflake or similar cloud data platforms Experience working with NoSQL databases Skilled in database performance tuning and design patterns Working knowledge of Agile methodologies Ability to write reusable libraries and modular, maintainable code Excellent verbal and written communication skills (especially with US counterparts) Strong troubleshooting and debugging skills Nice to Have Skills Experience with Angular, MongoDB, NPM Familiarity with Azure DevOps CI/CD pipelines for build and release configuration Self-starter attitude with strong analytical and problem-solving abilities Willingness to work extra hours when needed to meet tight deadlines Why Join Us: Work with a passionate, high-performing team Opportunity to grow your technical and leadership skills in a dynamic environment Be part of global digital transformation initiatives with top-tier clients Exposure to real-world enterprise data systems Opportunity to work on cutting-edge Azure and cloud technologies Performance-based growth & internal mobility opportunities

Posted 1 week ago

Apply

5.0 - 9.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid


Salary: 20 to 35 LPA Exp: 5 to 8 years Location: Gurgaon (Hybrid) Notice: Immediate to 30 days Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases. Troubleshoot issues related to data processing workflows and provide timely resolutions. Desired Candidate Profile: 5-9 years of experience in data engineering with expertise in GCP and BigQuery. Strong understanding of GCP platform administration, including Compute Engine (Dataproc), Kubernetes Engine (GKE), Cloud Storage, Cloud SQL, etc. Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.

Posted 1 week ago

Apply

4.0 - 8.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid


Salary: 20 to 35 LPA Exp: 4 to 8 years Location: Gurgaon Notice: Immediate to 30 days Key Skills: GCP, Cloud, Pub/Sub, Data Engineer

Posted 1 week ago

Apply

10.0 - 15.0 years

10 - 15 Lacs

Mumbai, Mumbai (All Areas)

Work from Office


The incumbent should hold a Bachelor's or Master's (graduate) degree in Business Administration (MBA), sales, marketing, or an equivalent combination of education and experience (pharma qualifications will be an added advantage), with a minimum of 10+ years of experience in Sales and Marketing (Client Relationship Management) in Healthcare / Pharma / IT Services / Software / other industries. Job Title: Sales Manager (Client Relationship Management) Reporting To: Managing Director, India Location: Mumbai Product: SaaS product and data management services for the life science and pharma industries. Sales Experience: B2B Client Relationship Management (especially pharma and life science companies) Job Description: A challenging and growing position as Sales Manager, India, with the opportunity to work in one of the emerging global organizations with services in the healthcare, life sciences, and energy industries, and the chance to professionalize the organization via innovation. The position would focus on client relations with our existing clients in terms of contract renewals and rate confirmations, and on building new business with existing and new clients. Key Responsibilities: The primary responsibilities of this position are: Developing, churning, and converting a healthy sales funnel with a weighted average value as per your set target in solutions and services. Scheduling appointments, meeting prospects and clients, identifying and qualifying potential new opportunities, and presenting current services and capability offers. Developing and managing your sales funnel for GVW's India markets (Healthcare, Life Sciences & Energy), as well as other connected services. Driving each sales opportunity with GVW's sales enabling team and related service line(s), both externally and internally within GVW, including GVW value partners. Maintaining and building strong, sustainable, and lasting client relations. 
Supervise GVW's sales opportunities through GVW's value chain, from client order through to client invoicing and payment collections. Planning and conducting sales presentations, physically and virtually, on a timely basis. Liaise with clients for up-to-date pricing, services, and latest service introductions. Identifying and qualifying prospective clients through market and prospect research, networking, events participation and organization, webinars, and other lead-generating opportunities, e.g. cold calling, mail campaigns, and GVW's lead-gen channel(s). Continually updating all prospects and clients on GVW service development changes, improvements, and performance. Maintaining professionalism, diplomacy, sensitivity, and tact to portray GVW in a positive manner. Effectively attending conferences and events, where applicable. Using market data to maximize sales effectiveness and efficiency, using relevant sales management tools and leveraging GVW's digital marketing capability. Preparing timely reports for management and maintaining accurate expense accounts. Updating and maintaining customer account records in the GVW CRM in an effective way. QUALIFICATIONS: Bachelor's or Master's (graduate) degree in Business Administration (MBA), sales, marketing, or an equivalent combination of education and experience. Pharma qualifications will be an added advantage. At least 10+ years of full-time experience in a relevant role. Other Preferred Requirements: Strong track record of achievement and ability, setting targets and achieving them on time and ethically. Strong negotiation skills through strategic price and fee structuring, ensuring continuation and enhancement of services for a strong win-win relationship with clients. Highly preferred: Life Science / Pharma sales and marketing background. Proven digital marketing experience will be given preference. Must have excellent English communication. Marathi language skills preferred. 
Knowledge, Skills & Abilities (KSAs): Proficient in Microsoft Office Suite with detailed knowledge of PowerPoint and CRM tools. Customer-oriented, well organized, with excellent time management skills. Excellent attitude; a team player with good conflict resolution skills. Strong interpersonal, analytical, and communication skills. Note: Interested candidates meeting the above JD requirements are welcome to apply. Our official email id: info@pyramidhrc.com. Our contact no: 8903817147. Note: Please go through the complete job description and do a self-assessment before contacting us. Perks and Benefits: 20 - 40% hike from current CTC

Posted 1 week ago

Apply

10.0 - 15.0 years

10 - 20 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office


The incumbent should hold a Bachelor's or Master's (graduate) degree in Business Administration (MBA), sales, marketing, or an equivalent combination of education and experience (pharma qualifications will be an added advantage), with a minimum of 10+ years of experience in Sales and Marketing in Healthcare / Pharma / IT Services / Software / other industries. Job Title: Senior Sales Manager Reporting To: Managing Director, India Location: 1. Chennai 2. Bangalore 3. Hyderabad 4. Mumbai Product: SaaS product and data management services for the life science and pharma industries. Sales Experience: B2B (especially pharma and life science companies) Note: The chosen candidate will have to travel within India extensively. Job Description: The Sales Team drives the business of the client company and, with the assistance of the Sales Support team, also manages the external communication and marketing of the company's message. The Sales Manager will be responsible for leading the Sales Support team by providing guidance, training, and mentorship in its pre-sales activities. 
Key Responsibilities: The primary responsibilities of this position are: Sales & Networking: Identifying and qualifying prospective customers through research, networking, events / conferences / trade show participation, and other sales activities. Contacting potential clients and organizing meetings within India, giving professional sales presentations during the meetings. Developing and managing the local and overseas sales funnels for GVW's markets and services using CRM. Providing timely service quotes to customers through proposals, as needed, and following up with contracts on proposal acceptance and finalization, ensuring deal closure. Handling existing client accounts and relationships: working closely with existing clients and looking for cross-selling opportunities to increase sales volume. Liaising between GVW and the customers for up-to-date pricing, services, and latest product-release launches. Planning for and renewing existing contracts before expiry. Marketing Activities: Planning and overseeing new marketing initiatives, proactively researching organizations and individuals to find new opportunities. Using sales management tools and relevant data to maximize sales effectiveness and efficiency. Strategic: Helping shape new business strategies (end-to-end), participating in the sales growth initiatives of the company. Preparing sales / ad-hoc reports for management. Managerial: Overseeing the Sales Support Team. 
Some of the team's key responsibilities are, but are not limited to: Supporting the Sales Team in its marketing, sales, and brand building activities. Conducting research on leads generated during sales efforts, or any other market research for new service development. Overseeing the production of all promotional materials and marketing campaigns in support of events attended by the Sales Managers and Managing Director. Generating ideas and creating polished PowerPoint presentations. Ensuring completion of sponsorship reports through coordination with technical and management teams. Should have closed deals with clients and created meaningful partnerships and/or alliances during his/her prior role in Business Development (preferably sale of services). Strong research and deal origination skills are a must, with an ability to handle end-to-end execution. Must be persistent, detail- and process-oriented for leads and closure of deals. Must be a great team player. Great communication skills. Should be able to grasp the crux of discussions and issues and propose relevant solutions. Excellent command over English (spoken and written). Should be able to work closely with Technical Leaders to shape their thoughts / services in order to address potential client requirements. Highly preferred: Life Science / Pharma sales and marketing background. Proven digital marketing experience will be given preference. Knowledge, Skills & Abilities (KSAs): Proficient in Microsoft Office Suite with detailed knowledge of PowerPoint. Work towards the organization of a better Sales Support Team. Customer-oriented, well organized, with excellent time management skills. Excellent attitude with good conflict resolution skills. Education & Qualifications: Bachelor's or Master's (graduate) degree in Business Administration (MBA), sales, marketing, or an equivalent combination of education and experience. Pharma qualifications will be an added advantage. At least 15+ years of full-time experience in a relevant role. Note: Interested candidates meeting the above JD requirements are welcome to apply. Our official email id: info@pyramidhrc.com. Our contact no: 8903817147. Note: Please go through the complete job description and do a self-assessment before contacting us. Perks and Benefits: 20 - 40% hike from current CTC

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote


Hiring for a US-based multinational company (MNC). We are seeking a skilled and detail-oriented Data Engineer to join our team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. You will work closely with data scientists, analysts, and software engineers to ensure that high-quality data is readily available and usable. Design and implement scalable, reliable, and efficient data pipelines for processing and transforming large volumes of structured and unstructured data. Build and maintain data architectures including databases, data warehouses, and data lakes. Collaborate with data analysts and scientists to support their data needs and ensure data integrity and consistency. Optimize data systems for performance, cost, and scalability. Implement data quality checks, validation, and monitoring processes. Develop ETL/ELT workflows using modern tools and platforms. Ensure data security and compliance with relevant data protection regulations. Monitor and troubleshoot production data systems and pipelines. Proven experience as a Data Engineer or in a similar role. Strong proficiency in SQL and at least one programming language such as Python, Scala, or Java. Experience with data pipeline tools such as Apache Airflow, Luigi, or similar. Familiarity with modern data platforms and tools: Big Data: Hadoop, Spark; Data Warehousing: Snowflake, Redshift, BigQuery, Azure Synapse; Databases: PostgreSQL, MySQL, MongoDB. Experience with cloud platforms (AWS, Azure, or GCP). Knowledge of data modeling, schema design, and ETL best practices. Strong analytical and problem-solving skills.
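The "data quality checks, validation, and monitoring" responsibility above can be illustrated with a minimal, standard-library-only sketch. The field names and rules (`id`, `ts`, `amount`) are invented for the example, not taken from any specific system:

```python
def validate(rows, required=("id", "ts"), non_negative=("amount",)):
    """Split rows into valid and invalid, recording per-row failure reasons."""
    valid, invalid = [], []
    for row in rows:
        reasons = [f"missing:{f}" for f in required if row.get(f) is None]
        reasons += [f"negative:{f}" for f in non_negative
                    if row.get(f, 0) < 0]
        # A row with any failure reason is quarantined, not silently dropped.
        (invalid if reasons else valid).append((row, reasons))
    return [r for r, _ in valid], invalid

rows = [{"id": 1, "ts": "2024-01-01", "amount": 10.0},
        {"id": None, "ts": "2024-01-01", "amount": -5.0}]
good, bad = validate(rows)
# good keeps the clean row; bad[0][1] lists why the other row failed
```

Keeping the failure reasons alongside quarantined rows is what makes downstream monitoring and alerting possible, which is the point of the check.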

Posted 1 week ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office


Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets. Key Responsibilities: Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and Streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. 
Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Skills and Experience Bachelor’s or master’s degree in computer science, Engineering, or a related field Over 8 years of experience in data engineering More than 3 years of experience with the AWS data ecosystem Strong experience with Pyspark, SQL, and Python Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions Familiarity with data modelling concepts, dimensional models, and data lake architectures Experience with CI/CD, GitHub Actions, CloudFormation/Terraform Understanding of data governance, privacy, and security best practices Strong problem-solving and communication skills Preferred Skills and Experience Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. 
Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
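One small, concrete example of the data lake work this listing describes is building Hive-style date-partitioned object keys, a layout commonly used on S3 so that query engines like Athena can prune partitions. The function and names here are illustrative, not an AWS API:

```python
from datetime import date

def partition_key(dataset: str, day: date, filename: str) -> str:
    """Build a Hive-style date-partitioned object key for a data lake."""
    # year=/month=/day= prefixes let engines skip irrelevant partitions.
    return (f"{dataset}/year={day.year:04d}/month={day.month:02d}/"
            f"day={day.day:02d}/{filename}")

key = partition_key("orders", date(2024, 3, 7), "part-0001.parquet")
# key == "orders/year=2024/month=03/day=07/part-0001.parquet"
```

A daily batch job would prepend the bucket name and write each day's output under its own partition, so reprocessing one day never touches another.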

Posted 1 week ago

Apply