2.0 - 4.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Data Ops Support Engineer Responsibilities: Implement and manage continuous integration and deployment pipelines for various applications and services. Proactively monitor data pipelines and system performance, and troubleshoot any issues to maintain high availability and reliability of the infrastructure. Collaborate with development and business teams to design and implement scalable and resilient solutions. Automate routine tasks and processes to streamline operations and improve efficiency. Conduct in-depth reviews of code and debug issues in data pipelines across multiple applications in production environments, with the ability to perform detailed code analysis and troubleshoot complex issu...
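As a loose illustration of the automation and monitoring duties above, here is a minimal Python sketch of a pipeline health check that could be run on a schedule; the status endpoint, response fields, and alerting behaviour are hypothetical and not part of the posting.

```python
# Minimal sketch of an automated pipeline health check (illustrative only).
# STATUS_URL and the response fields ("name", "state") are hypothetical.
import requests

STATUS_URL = "https://example.internal/api/pipelines/status"


def failing_pipelines() -> list[str]:
    """Return the names of pipelines whose last run was not successful."""
    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    return [p["name"] for p in resp.json() if p.get("state") != "SUCCESS"]


if __name__ == "__main__":
    failing = failing_pipelines()
    if failing:
        # In practice this would page an on-call rotation or post to a chat channel.
        print(f"ALERT: unhealthy pipelines: {', '.join(failing)}")
    else:
        print("All pipelines healthy.")
```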
Posted 3 weeks ago
0.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Title: Senior Consultant Snowflake Career Level: D1 Introduction To Role Are you ready to drive innovation and optimize services in the realm of Enterprise Data Management? At AstraZeneca, we are on a mission to pioneer scientific breakthroughs and transform patient outcomes. As a Senior Consultant, you will play a pivotal role in enhancing our Enterprise Data Platform and engineering Enterprise Data Assets & Products. Join us in our journey to leverage cutting-edge technologies, scale AI solutions, and unlock the true potential of data. Accountabilities Ensure the service is in Always ON mode and maintain at least 99% service availability for users. Optimize the service for improved pipe...
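As a quick back-of-the-envelope illustration of the 99% availability target in the accountabilities, the sketch below converts an availability figure into an allowed downtime budget; the calculation is generic arithmetic, not something taken from the posting.

```python
# Downtime budget implied by an availability target (illustrative arithmetic).
HOURS_PER_YEAR = 365 * 24  # 8760


def downtime_budget_hours(availability: float) -> tuple[float, float]:
    """Return (hours per year, hours per month) of allowed downtime."""
    unavailable = 1.0 - availability
    return unavailable * HOURS_PER_YEAR, unavailable * HOURS_PER_YEAR / 12


yearly, monthly = downtime_budget_hours(0.99)
print(f"99% availability allows ~{yearly:.1f} h/year (~{monthly:.1f} h/month) of downtime")
```

At 99%, that is roughly 87.6 hours of downtime per year, which is why the posting pairs the figure with an "Always ON" expectation.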
Posted 3 weeks ago
8.0 - 12.0 years
0 - 2 Lacs
pune
Hybrid
Technology Lead ETL & Data Engineering (Fandango) Location: Pune / Hybrid Experience: 8 to 12 Years Employment Type: Full-Time Job Summary We are seeking a highly skilled and experienced Technology Lead with 10+ years of expertise in ETL, Data Warehousing, and AWS cloud services. The ideal candidate should have strong hands-on experience with Talend (Data Integration, Big Data, Admin), AWS Glue, PySpark, Airflow (MWAA preferred), and AWS Redshift. You will play a critical role in designing, developing, and leading data integration pipelines, ensuring scalability, performance, and business impact. Key Responsibilities Lead the design, development, and deployment of end-to-end ETL and Data War...
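To make the stack above concrete, here is a minimal PySpark sketch of the kind of S3-to-S3 batch job an AWS Glue or EMR pipeline in this role might run; the bucket paths, column names, and aggregation are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark batch ETL sketch (illustrative paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw CSV files landed in a hypothetical S3 bucket.
orders = spark.read.option("header", True).csv("s3://example-raw/orders/")

# Transform: keep completed orders and compute a daily revenue aggregate.
daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: partitioned Parquet that Redshift Spectrum or a COPY job could pick up.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/daily_revenue/"
)
```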
Posted 1 month ago
1.0 - 3.0 years
0 Lacs
india
On-site
Description Fulfillment by Amazon (FBA) enables sellers to scale their businesses globally by leveraging Amazon's world-class fulfillment network. Sellers using FBA benefit from fast, reliable shipping, Prime delivery eligibility, and hassle-free returns, allowing them to focus on growth while we handle operations. The WW FBA Central Analytics team builds and operates scalable, enterprise-grade data infrastructure, tools, and analytics solutions that power the WW FBA business. We partner across global product, program, and operations teams to unify diverse datasets, deliver self-service analytics, and develop next-generation capabilities using LLMs to unlock insights. Our charter includes buildin...
Posted 1 month ago
1.0 - 3.0 years
0 Lacs
bengaluru, karnataka, india
On-site
This job is with Amazon, an inclusive employer and a member of myGwork, the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. Description Fulfillment by Amazon (FBA) enables sellers to scale their businesses globally by leveraging Amazon's world-class fulfillment network. Sellers using FBA benefit from fast, reliable shipping, Prime delivery eligibility, and hassle-free returns, allowing them to focus on growth while we handle operations. The WW FBA Central Analytics team builds and operates scalable, enterprise-grade data infrastructure, tools, and analytics solutions that power the WW FBA business. We partner across global product, progr...
Posted 1 month ago
2.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
Experian is a global data and technology company that powers opportunities for individuals and businesses worldwide. Our unique combination of data, analytics, and software allows us to redefine lending practices, prevent fraud, simplify healthcare, create marketing solutions, and provide deeper insights into the automotive market. With a presence in various markets such as financial services, healthcare, automotive, agribusiness, and insurance, we assist millions of individuals in achieving their financial goals while saving them time and money. As part of our commitment to unlocking the power of data, we are looking for a skilled professional to join our team as a Business Intelligence Dev...
Posted 2 months ago
8.0 - 13.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
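The HiveQL-to-SparkSQL and Iceberg items above can be pictured with a short sketch like the one below; it assumes a Spark session with the Iceberg runtime jar on the classpath, a Hadoop-type catalog on S3, and a hypothetical Hive source table, none of which come from the posting itself.

```python
# Sketch: converting a HiveQL aggregation to SparkSQL and writing the result
# into a partitioned Iceberg table. Catalog, warehouse path, and table/column
# names are illustrative assumptions; requires the iceberg-spark runtime jar.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_to_iceberg")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lakehouse/warehouse")
    .enableHiveSupport()
    .getOrCreate()
)

# Target Iceberg table, partitioned by transaction date.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.analytics.daily_txns (
        txn_date DATE,
        total_amount DECIMAL(18,2)
    ) USING iceberg
    PARTITIONED BY (txn_date)
""")

# Original HiveQL:  SELECT txn_date, SUM(amount) FROM txns GROUP BY txn_date;
# SparkSQL equivalent, reading from a hypothetical Hive source table.
spark.sql("""
    INSERT OVERWRITE lake.analytics.daily_txns
    SELECT txn_date, CAST(SUM(amount) AS DECIMAL(18,2)) AS total_amount
    FROM source_db.txns
    GROUP BY txn_date
""")
```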
Posted 4 months ago
8.0 - 13.0 years
6 - 11 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 4 months ago
8.0 - 13.0 years
6 - 11 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 4 months ago
7.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Develop client-facing standardized reports with Business Intelligence tools such as Tableau. Perform data profiling on source data with minimal documentation. Independently troubleshoot data, perform detailed data analyses, and develop complex SQL code. Write secure, stable, testable, and maintainable Python code with minimal defects. Perform root cause analysis, propose solutions, and take ownership of the next steps for their resolution. Create and maintain report specifications and process documentation as part of the required data deliverables. Write queries to pull/summarize/analyze data from various data sets and platforms. Collaborate with Engineering teams to discover an...
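For a flavour of the query-and-summarize work described above, the self-contained sketch below uses an in-memory SQLite database as a stand-in for the real source platforms; the table, columns, and figures are invented purely for illustration.

```python
# Self-contained sketch of pulling and summarizing data with SQL from Python.
# SQLite stands in for the real data platforms; schema and values are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (region TEXT, status TEXT, amount REAL);
    INSERT INTO claims VALUES
        ('south', 'PAID', 120.0),
        ('south', 'DENIED', 45.0),
        ('north', 'PAID', 300.0);
""")

# A summary query of the kind that might feed a Tableau extract or report.
summary_sql = """
    SELECT region,
           COUNT(*) AS claim_count,
           SUM(CASE WHEN status = 'PAID' THEN amount ELSE 0 END) AS paid_amount
    FROM claims
    GROUP BY region
    ORDER BY region
"""

for region, claim_count, paid_amount in conn.execute(summary_sql):
    print(f"{region}: {claim_count} claims, {paid_amount:.2f} paid")
```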
Posted 4 months ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad, Bengaluru
Work from Office
About our team DEX is the central data org for Kotak Bank and manages the entire data experience of the bank. DEX stands for Kotak's Data Exchange. The org comprises the Data Platform, Data Engineering, and Data Governance charters and sits closely with the Analytics org. DEX is primarily working on a greenfield project to revamp the entire data platform from on-premise solutions to a scalable AWS cloud-based platform. The team is being built from the ground up, which gives technologists a great opportunity to build things from scratch and deliver a best-in-class data lakehouse solution. The primary skills this team should encompass are software development skills, preferably Python, for pla...
Posted 4 months ago
5.0 - 10.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Job Summary: AWS Developer (Offshore Role) Role Overview: Responsible for migrating and transforming data pipelines from legacy Cloudera/Hadoop systems to AWS-native solutions, using tools like PySpark, MWAA, and EMR. Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Kn...
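The MWAA orchestration piece of this role can be sketched as a small Airflow DAG; the DAG id, schedule, and stand-in bash tasks below are illustrative assumptions (a real pipeline would submit EMR/Spark steps), and the code assumes a recent Airflow 2.x environment such as current MWAA.

```python
# Minimal Airflow 2.x DAG sketch for the ingest -> transform -> validate flow.
# Task bodies are placeholders; real tasks would trigger EMR/Spark jobs.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="legacy_hive_to_s3_migration",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_to_s3",
        bash_command="echo 'run batch ingestion to S3'",
    )
    transform = BashOperator(
        task_id="sparksql_transform",
        bash_command="echo 'run converted SparkSQL job on EMR'",
    )
    validate = BashOperator(
        task_id="validate_output",
        bash_command="echo 'run job validation checks'",
    )

    ingest >> transform >> validate
```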
Posted 4 months ago