
147 Apache Airflow Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Duration: 8 Months | Job Type: Contract | Work Type: Onsite

Top Responsibilities:
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
- Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs.
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

Leadership Principles: Ownership, Customer Obsession, Dive Deep, and Deliver Results.

Mandatory Requirements:
- 3+ years of data engineering experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with SQL and SQL tuning.
- Basic to mid-level proficiency in scripting with Python.

Education or Certification Requirements: Any graduation.
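By way of illustration, a minimal Airflow DAG of the kind this role describes — triggering a Redshift COPY from S3 via the boto3 Redshift Data API — might look like the sketch below. The cluster, database, bucket, table, and IAM role names are hypothetical placeholders, not part of the posting.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def copy_to_redshift(**context):
    # Issue a Redshift COPY through the Data API; all identifiers
    # below are illustrative placeholders.
    client = boto3.client("redshift-data", region_name="us-east-1")
    client.execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical cluster
        Database="analytics",                   # hypothetical database
        DbUser="etl_user",                      # hypothetical user
        Sql=(
            "COPY sales.orders FROM 's3://example-bucket/orders/' "
            "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "
            "FORMAT AS PARQUET;"
        ),
    )


with DAG(
    dag_id="s3_to_redshift_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="copy_orders_to_redshift",
        python_callable=copy_to_redshift,
    )
```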

Posted 2 months ago

Apply

8.0 - 12.0 years

13 - 20 Lacs

Chennai

Work from Office

We are looking for an experienced Python ETL Developer to design, develop, and optimize data pipelines. The ideal candidate should have expertise in Python, PySpark, Airflow, and data processing frameworks, along with the ability to work independently and communicate effectively in English.

Roles & Responsibilities:
- Develop and maintain ETL pipelines using Python, NumPy, Pandas, PySpark, and Apache Airflow.
- Work with large-scale data processing and transformation workflows.
- Optimize and enhance ETL performance and scalability.
- Collaborate with data engineers and business teams to ensure efficient data flow.
- Troubleshoot and debug ETL-related issues to ensure data integrity and reliability.

Qualifications & Skills:
- 8+ years of Python experience, with 5+ years dedicated to Python ETL development.
- Proficiency in PySpark, Apache Airflow, NumPy, and Pandas.
- Experience with SQLAlchemy and FastAPI (an added advantage).
- Strong problem-solving skills and the ability to work independently.
- Good English communication skills to collaborate with global teams.

Preferred Qualifications:
- Experience with cloud-based ETL solutions (AWS, GCP, Azure).
- Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
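As a rough sketch of the PySpark transformation work this posting describes — with made-up paths and column names — a single extract-transform-load step might look like:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: the input path is an illustrative placeholder.
orders = spark.read.parquet("s3a://example-bucket/raw/orders/")

# Transform: deduplicate and aggregate daily revenue.
daily = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Load: write partitioned output for downstream consumers.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_revenue/"
)
```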

Posted 2 months ago

Apply

10.0 - 15.0 years

16 - 25 Lacs

Bengaluru

Hybrid

Experience: 10+ years
Location: Bangalore, Hybrid
Notice Period: Immediate joiners to 30 days
Mode of Interview: 2-3 rounds
Key Skills: Snowflake or Databricks, Python or Java, Cloud, Data Modeling

Primary Skills:
- Erwin tool
- Data modeling (logical, physical) - 2+ years
- Snowflake - 2+ years
- Warehousing concepts
- Any cloud

Secondary Skills:
- dbt Cloud
- Airflow
- AWS

Job Description: This is a hands-on technology position for a data technology leader with specialized business knowledge in the middle/front-office areas. The candidate has a proven record of executing cloud data projects, can get hands-on in analysis, design, and development, and has the creativity and self-motivation to deliver on mission-critical projects.

These skills will help you succeed in this role:
- 10+ years of experience on an application development team, with hands-on architecture, design, development, and deployment skills.
- Demonstrated ability to translate business requirements into a technical design and through to implementation.
- Experienced subject-matter expert in designing and architecting big data platforms, services, and systems using Java/Python, SQL, Databricks, Snowflake, and cloud-native tools on Azure and AWS.
- Experience with event-driven architectures, message hubs, MQ, and Kafka.
- Experience with Kubernetes, ETL tools, Data as a Service, star schemas, dimensional modeling, OLTP, ACID, and data structures is desired.
- Proven experience with cloud and big data platforms, building data processing applications using Spark, Airflow, object storage, etc.
- Ability to work in an onshore/offshore model with development teams across continents.
- Use of coding standards, secure application development, documentation, release and configuration management, and expertise in CI/CD.
- Well versed in the SDLC using Agile Scrum.
- Plan and execute the deployment of releases.
- Ability to work with application development, SQA, and infrastructure teams.
- Strong leadership and analytical problem-solving skills, with the ability to learn and adapt quickly.
- Self-motivated, quick-learning, creative problem solver; organized and able to manage a team of development engineers.

Education & Preferred Qualifications:
- Bachelor's degree and 6 or more years of experience in information technology.
- Strong team ethics; a team player.
- Cloud, Databricks, or Snowflake certification is a plus.
- Experience evaluating software estimates, cost and delivery timelines, and managing financials.
- Experience leading agile delivery and adhering to SDLC processes is required.
- Works closely with business and IT stakeholders to manage delivery.

Additional Requirements:
- Ability to lead delivery, manage team members as required, and provide feedback.
- Ability to make effective decisions and manage change.
- Communicates effectively and professionally, both in writing and orally.
- Team player with a positive attitude, enthusiasm, initiative, and self-motivation.
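For illustration only, a minimal Python automation of the Snowflake warehousing work described above might use the Snowflake Python connector; the account, credentials, warehouse, and table names here are hypothetical assumptions:

```python
import snowflake.connector  # pip install snowflake-connector-python

# Connection parameters are illustrative placeholders.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="MARTS",
)

try:
    cur = conn.cursor()
    # A simple dimensional-model load: upsert a staged feed
    # into a hypothetical customer dimension table.
    cur.execute(
        """
        MERGE INTO dim_customer d
        USING stg_customer s ON d.customer_id = s.customer_id
        WHEN MATCHED THEN UPDATE SET d.email = s.email
        WHEN NOT MATCHED THEN INSERT (customer_id, email)
            VALUES (s.customer_id, s.email)
        """
    )
finally:
    conn.close()
```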

Posted 2 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
- Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
- Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
- Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt.
- Develop complex SQL queries, stored procedures, and materialized views for data transformations.
- Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
- Implement data partitioning, clustering, and query tuning strategies for optimal performance.
- Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
- Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
- Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
- Automate data ingestion, transformations, and warehouse maintenance tasks.

Required Skills & Qualifications:
- 6+ years of experience in data warehousing, ETL, and data engineering.
- Strong hands-on experience with Amazon Redshift and AWS data services.
- Expertise in SQL performance tuning, indexing, and query optimization.
- Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
- Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
- Familiarity with data lake architectures and the modern data stack.
- Proficiency in Python, shell scripting, or PySpark for automation.
- Experience working in Agile/DevOps environments with CI/CD pipelines.
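As a hedged sketch of the distribution-style and sort-key tuning the responsibilities mention, the DDL below (issued from Python over Redshift's PostgreSQL-compatible protocol) shows one plausible table layout; the endpoint, credentials, schema, and key choices are illustrative assumptions, not a prescribed design:

```python
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

# Endpoint and credentials are illustrative placeholders.
conn = psycopg2.connect(
    host="analytics-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

ddl = """
CREATE TABLE IF NOT EXISTS sales.orders (
    order_id    BIGINT,
    customer_id BIGINT,
    order_date  DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows joined on customer_id
SORTKEY (order_date)    -- prune blocks on date-range filters
"""

with conn, conn.cursor() as cur:
    cur.execute(ddl)
conn.close()
```

Choosing the join column as the distribution key and the filter column as the sort key is the usual starting point; real tuning would be driven by the workload's query patterns.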

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Patna

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.
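To make the Oracle-to-BigQuery migration step concrete, a minimal load of Oracle data staged in GCS as Parquet into BigQuery might look like the sketch below, using the google-cloud-bigquery client; the project, bucket, and table names are placeholders:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")  # placeholder project

# Load a Parquet export (e.g., staged from Oracle via GCS) into BigQuery.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/oracle_export/orders/*.parquet",  # placeholder path
    "example-project.warehouse.orders",                    # placeholder table
    job_config=job_config,
)
load_job.result()  # block until the load completes

table = client.get_table("example-project.warehouse.orders")
print(f"Loaded {table.num_rows} rows")
```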

Posted 2 months ago

Apply

5 - 10 years

25 - 30 Lacs

Bengaluru

Hybrid

Job Title: Senior Data Engineer

Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our data team. This role is pivotal in designing, building, and maintaining robust data infrastructure to support analytics, reporting, and machine learning initiatives. The ideal candidate will have strong technical expertise in data engineering tools and best practices, and a passion for transforming raw data into actionable insights.

Responsibilities:
- Design, develop, and optimize scalable data pipelines to process large volumes of structured and unstructured data.
- Build and maintain efficient ETL (Extract, Transform, Load) workflows and automate data integration from various sources.
- Manage and optimize data warehousing solutions to ensure high availability and performance.
- Collaborate with machine learning engineers to deploy and maintain ML models in production environments.
- Ensure high standards of data quality, governance, and consistency across all data systems.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and enhance data models.
- Monitor, troubleshoot, and improve data infrastructure for reliability and scalability.

Qualifications:
- Proven experience in data engineering with a strong grasp of SQL, Python, and Apache Spark.
- Hands-on experience designing and implementing ETL pipelines and data models.
- Proficiency with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure.
- Deep understanding of big data technologies, including Hadoop, Kafka, Hive, and related tools.
- Strong problem-solving skills and the ability to work collaboratively in a team-oriented environment.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Preferred Qualifications (Optional):
- Experience with orchestration tools such as Apache Airflow or Prefect.
- Familiarity with containerization (Docker, Kubernetes).
- Knowledge of data security and compliance standards (e.g., GDPR, HIPAA).

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Pune

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Lucknow

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Bengaluru

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Mumbai

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Surat

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Kanpur

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Hyderabad

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Nagpur

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Chennai

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Ahmedabad

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Kolkata

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

7 - 10 years

8 - 14 Lacs

Jaipur

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or related fields.

Posted 2 months ago

Apply

3 - 5 years

25 - 35 Lacs

Bengaluru

Remote

Data Engineer

Experience: 3-5 years
Salary: Up to INR 35 Lacs per annum
Preferred Notice Period: Within 30 days
Shift: 10:30 AM to 7:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Apache Airflow, Spark, AWS, Kafka, SQL
Good-to-have skills: Apache Hudi, Flink, Iceberg, Azure, GCP

Nomupay (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview:
- Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently.
- Use Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions.
- Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
- Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
- Enhance query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
- Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
- Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Requirements:
- Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
- Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
- Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Strong understanding of data warehousing concepts and data modeling principles.
- Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Proficiency with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Expertise in Git for version control and collaborative coding.
- Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.

About NomuPay: NomuPay is a newly established company that, through its subsidiaries, provides state-of-the-art unified payment solutions to help clients accelerate growth in large, high-growth markets in Asia, Turkey, and the Middle East. NomuPay is funded by Finch Capital, a leading European and Southeast Asian financial technology investor. NomuPay acquired Wirecard Turkey on April 21, 2021 for an undisclosed amount.

How to apply (easy 3-step process):
1. Click Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of being shortlisted and meet the client for the interview!

About Our Client: At Nomupay, we're all about making global payments simple. Since 2021, we've been on a mission to remove complexity and help businesses expand without limits.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering opportunities and progress in their careers. (Note: There are many more opportunities on the portal.) If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
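As an illustrative sketch of the Kafka-plus-Spark streaming work in the must-have list — with a hypothetical broker, topic, schema, and output paths — a Structured Streaming job might look like:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

# Hypothetical schema for the JSON payloads on the topic.
schema = StructType([
    StructField("txn_id", StringType()),
    StructField("merchant", StringType()),
    StructField("amount", StringType()),
])

# Broker address and topic are illustrative placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed stream as Parquet with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/streams/payments/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/payments/")
    .start()
)
query.awaitTermination()
```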

Posted 2 months ago

Apply

4 - 7 years

6 - 9 Lacs

Noida

Work from Office

Role Objective: We are seeking a Software Data Engineer with 4-7 years of experience to join our Data Platform team. This role reports to the Manager of Data Engineering and is involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Qualifications:
- Deep knowledge of and experience working with Python/Scala and Apache Spark.
- Experienced with Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Data Lake, and Delta Lake.
- Experienced with the orchestration tool Apache Airflow.
- Experience working with SQL and NoSQL database systems such as MongoDB, and with Apache Parquet.
- Experience with Azure cloud environments.
- Experience acquiring and preparing data from primary and secondary disparate data sources.
- Experience working on large-scale data product implementations.
- Experience working with agile methodology preferred.
- Healthcare industry experience preferred.

Responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
- Work with teams with deep experience in ETL processes and data science domains to understand how to centralize their data.
- Share your passion for experimenting with and learning new technologies.
- Perform thorough data analysis, uncover opportunities, and address business problems.
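A minimal sketch of the Airflow-orchestrated Azure Databricks work described above might use the Databricks provider's submit-run operator; the cluster spec, notebook path, and connection ID below are assumptions for illustration, not details from the posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

# Cluster spec is an illustrative placeholder.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
}

with DAG(
    dag_id="adls_to_delta_nightly",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit a one-off job run that executes a Databricks notebook
    # (path is hypothetical) on an ephemeral cluster.
    run_notebook = DatabricksSubmitRunOperator(
        task_id="transform_to_delta",
        databricks_conn_id="databricks_default",
        new_cluster=new_cluster,
        notebook_task={"notebook_path": "/ETL/transform_to_delta"},
    )
```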

Posted 2 months ago

Apply

3 - 6 years

6 - 15 Lacs

Hyderabad

Work from Office

Dear Tech Aspirants,

Greetings of the day. We are conducting a walk-in drive for AWS Data Engineers (3-6 years) on:

Walk-In Date: Saturday, 17-May-2025
Time: 9:30 AM - 3:30 PM
Kindly fill in this form to confirm your presence: https://forms.office.com/r/CEcuYGsFPS
Walk-In Venue: Ground Floor, Sri Sai Towers, Plot No. 91A & 91B, Vittal Rao Nagar, Madhapur, Hyderabad 500081.
Google Maps Location: https://maps.app.goo.gl/dKkAm4EgF1q1CKqc8
*Carry your updated CV*

Job Title: AWS Data Engineer (SQL Mandatory)
Location: Hyderabad, India
Experience: 3 to 6 years

We are seeking a skilled AWS Data Engineer with 3 to 6 years of experience to join our team. The ideal candidate will be responsible for implementing and maintaining AWS data services, processing and transforming raw data, and optimizing data workflows using AWS Glue to ensure seamless integration with business processes. This role requires a deep understanding of AWS cloud technologies, Apache Spark, and data engineering best practices. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, scalable, and efficient.

Roles & Responsibilities:
- Implement & Maintain AWS Data Services: Deploy, configure, and manage AWS Glue and associated workflows.
- Data Processing & Transformation: Clean, process, and transform raw data into structured, usable formats for analytics and machine learning.
- Develop ETL Pipelines: Design and build ETL/ELT pipelines using Apache Spark, AWS Glue, and AWS Data Pipeline.
- Data Governance & Security: Ensure data quality, integrity, and security in compliance with organizational standards.
- Performance Optimization: Continuously improve data processing pipelines for efficiency and scalability.
- Collaboration: Work closely with data scientists, analysts, and software engineers to enable seamless data accessibility.
- Documentation & Best Practices: Maintain technical documentation and enforce best practices in data engineering.
- Modern Data Transformation: Develop and manage data transformation workflows using dbt (Data Build Tool) to ensure modular, testable, and version-controlled data pipelines.
- Data Mesh & Governance: Contribute to the implementation of data mesh architecture to promote domain-oriented data ownership, decentralized data management, and enhanced data governance.
- Workflow Orchestration: Design and implement data orchestration pipelines using tools like Apache Airflow for managing complex workflows and dependencies.
- Good hands-on experience with SQL programming.

Technical Requirements & Skills:
- Proficiency in AWS: Strong hands-on experience with AWS cloud services, including Amazon S3, AWS Glue, Amazon Redshift, and Amazon RDS.
- Expertise in AWS Glue: Deep understanding of AWS Glue, Apache Spark, and AWS Lake Formation.
- Programming Skills: Proficiency in Python and Scala for data engineering and processing.
- SQL Expertise: Strong knowledge of SQL for querying and managing structured data.
- ETL & Data Pipelines: Experience designing and maintaining ETL/ELT workflows.
- Big Data Technologies: Knowledge of Hadoop, Spark, and distributed computing frameworks.
- Orchestration Tools: Experience with Apache Airflow or similar tools for scheduling and monitoring data workflows.
- Data Transformation Frameworks: Familiarity with dbt (Data Build Tool) for building reliable, version-controlled data transformations.
- Data Mesh Concepts: Understanding of data mesh architecture and its role in scaling data across decentralized domains.
- Version Control & CI/CD: Experience with Git, AWS CodeCommit, and CI/CD pipelines for automated data deployment.

Nice to Have:
- AWS Certified Data Analytics - Specialty.
- Machine Learning Familiarity: Understanding of machine learning concepts and integration with AWS SageMaker.
- Streaming Data Processing: Experience with Amazon Kinesis or Spark Streaming.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data engineering, cloud technologies, and AWS Glue.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

If interested, please attend and bring your updated CV.
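To illustrate the AWS Glue workflow management this role centres on, a small boto3 snippet that starts a Glue job run and checks its state might look like this; the region, job name, and arguments are hypothetical:

```python
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Job name and arguments are illustrative placeholders.
run = glue.start_job_run(
    JobName="orders_raw_to_curated",
    Arguments={"--target_path": "s3://example-bucket/curated/orders/"},
)

# Check the run state; in practice you would poll until a
# terminal state (SUCCEEDED, FAILED, ...) is reported.
status = glue.get_job_run(
    JobName="orders_raw_to_curated",
    RunId=run["JobRunId"],
)
print(status["JobRun"]["JobRunState"])
```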

Posted 2 months ago

Apply

7 - 9 years

10 - 15 Lacs

Mumbai

Work from Office

We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Design and develop scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Lead the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
- Write efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implement automation scripts and tools using Python to streamline data workflows and improve efficiency.
- Collaborate with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Tune and optimize Snowflake databases and queries to ensure optimal performance and scalability.
- Implement best practices for data governance, security, and compliance within Snowflake environments.
- Mentor junior team members and provide technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with the Snowflake data warehouse.
- Strong proficiency in SQL, with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- Strong Python coding experience required (minimum 2 years).
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Relevant certifications in Snowflake or related technologies are a plus.

Posted 2 months ago

Apply