15348 Airflow Jobs - Page 35

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

6 - 10 Lacs

bengaluru

Work from Office

JR REQ: GCP Data Engineer | 4 to 8 years | Bangalore, Hyderabad | Chahat Parveen | TCS C2H | 900000

Posted 1 week ago

0 years

0 Lacs

pune, maharashtra, india

On-site

About Springer Nature Group Springer Nature opens the doors to discovery for researchers, educators, clinicians and other professionals. Every day, around the globe, our imprints, books, journals, platforms and technology solutions reach millions of people. For over 180 years our brands and imprints have been a trusted source of knowledge to these communities and today, more than ever, we see it as our responsibility to ensure that fundamental knowledge can be found, verified, understood and used by our communities – enabling them to improve outcomes, make progress, and benefit the generations that follow. Visit group.springernature.com and follow @SpringerNature / @SpringerNatureGroup Job T...

Posted 1 week ago

5.0 - 9.0 years

3 - 7 Lacs

thane

Work from Office

Description **Tableau Admin** Job Overview: The Tableau Server/Site administrator monitors overall server health, including server usage patterns, process status (up/down/failover), job status (success/failure), disk drive space, stale content, license provisioning, Tableau Bridge activity, and space usage. You will interact with technologies such as Snowflake, Oracle, SQL Database, Airflow, Python, and AWS to ensure Tableau runs smoothly and efficiently. **Key Responsibilities:** Monitor Tableau Server infrastructure and resource utilization (processor, memory, disk) or Tableau Bridge pool availability and activity. Monitor Tableau Cloud application-level metrics, measure content metrics in Sta...
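For context, a minimal sketch of what such monitoring can look like in practice (not this employer's actual tooling): a scheduled Python check for disk headroom plus a ping of Tableau's unauthenticated REST serverinfo endpoint. The host URL, alert threshold, and JSON field names are assumptions.

```python
# Illustrative health-check sketch; hypothetical host and paths.
import shutil
import requests  # third-party: pip install requests

TABLEAU_HOST = "https://tableau.example.com"  # hypothetical server URL
DISK_ALERT_THRESHOLD = 0.90  # warn when a drive is 90% full

def check_disk(path: str = "/") -> None:
    """Warn when the drive holding Tableau data is nearly full."""
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction > DISK_ALERT_THRESHOLD:
        print(f"ALERT: {path} is {used_fraction:.0%} full")

def check_server() -> None:
    """Hit the REST serverinfo endpoint to confirm the server responds."""
    resp = requests.get(f"{TABLEAU_HOST}/api/3.19/serverinfo",
                        headers={"Accept": "application/json"}, timeout=10)
    resp.raise_for_status()
    print("Server version:", resp.json()["serverInfo"]["productVersion"]["value"])

if __name__ == "__main__":
    check_disk("/")
    check_server()
```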

Posted 1 week ago

2.0 years

0 Lacs

mumbai metropolitan region

On-site

About Adsremedy: At Adsremedy, we are revolutionizing the digital advertising landscape with cutting-edge AdTech solutions. We offer services like Programmatic Advertising, Campaign Optimization, Advanced Analytics, and Cross-Platform Campaigns, and specialize in In-App, CTV, and DOOH ads. Our mission is to empower businesses to connect with audiences and drive measurable results. Join us to be part of a fast-growing company that's transforming how brands engage in the digital world. Job Overview: We're looking for a talented Data Engineer to join our dynamic team. In this role, you'll design and maintain scalable, high-performance ETL pipelines, work with large-scale data processing framew...

Posted 1 week ago

2.0 - 6.0 years

5 - 9 Lacs

uttar pradesh

Work from Office

Skill: Big Data. Need to Have: Python, PySpark, Trino, Hive. Good to Have: Snowflake, SQL, Airflow, OpenShift, Kubernetes. Location: Hyderabad, Pune, Bangalore. Job Description: Develop and maintain data pipelines, ELT processes, and workflow orchestration using Apache Airflow, Python, and PySpark to ensure the efficient and reliable delivery of data. Design and implement custom connectors to facilitate the ingestion of diverse data sources into our platform, including structured and unstructured data from various document formats. Collaborate closely with cross-functional teams to gather requirements, understand data needs, and translate them into technical solutions. Implement DataOps pr...
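As a rough illustration of the orchestration piece described here (not the employer's actual code), a minimal Airflow 2.4+ DAG wiring an ingestion step to a transformation step might look like the following; the dag_id, schedule, and task callables are placeholders.

```python
# Minimal Airflow DAG sketch: ingest, then transform.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_documents(**context):
    # Placeholder for a custom connector pulling structured/unstructured documents.
    print("ingesting source documents...")

def transform_with_pyspark(**context):
    # In practice this step might submit a PySpark job instead.
    print("running PySpark transformation...")

with DAG(
    dag_id="document_ingestion_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_documents)
    transform = PythonOperator(task_id="transform", python_callable=transform_with_pyspark)
    ingest >> transform
```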

Posted 1 week ago

2.0 - 5.0 years

3 - 7 Lacs

lakshadweep, chandigarh

Work from Office

Data Engineer. Skills Required: Strong proficiency in ADF, Snowflake, and SQL (all 3 are mandatory). Experience Required: Minimum 5 years of relevant experience. Location: Available across all UST locations. Notice Period: Immediate joiners (candidates available to join by 31st January 2025). SO - 48778470. Roles Open: 5 positions available. Budget: 25 LPA to 28 LPA. We are looking for profiles that meet the above requirements. Kindly share profiles of suitable candidates at your earliest convenience. For any urgent queries or additional support, feel free to reach out to Vidyalakshmi Murali (UST, IN) - 9605562549 directly. JD FYR: We are seeking a highly skilled Data Engineer to join our team. The idea...

Posted 1 week ago

1.0 - 5.0 years

6 - 10 Lacs

pune

Work from Office

Job Title: Engineer, Associate. Location: Pune, India. Role Description: An Engineer is responsible for designing, developing, and supporting a strategic US Finance Regulatory Reporting platform for the Bank. Join us and you will be on the front lines of transformation for this domain - leveraging cutting-edge open-source technologies and highly collaborative agile processes to deliver valuable business solutions that you can be proud of. You will share your passion for technical excellence and engage the full depth of your engineering skills in a diverse cross-functional team that partners with business and technology experts globally to develop innovative solutions to complex business problems. Y...

Posted 1 week ago

3.0 - 5.0 years

5 - 10 Lacs

chennai

Work from Office

About the Role: We are looking for a skilled Snowflake Developer to join our data engineering team. The ideal candidate will have hands-on experience in designing, developing, and optimizing data pipelines and data warehousing solutions using Snowflake and related technologies. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes using Snowflake, PySpark, and Airflow. Integrate and manage data from various sources using AWS services. Work with DataHub and Great Expectations to ensure data quality, observability, and governance. Optimize performance and ensure the reliability of data pipelines and workflows. Collaborate with data analysts, architects, ...
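A minimal sketch of the kind of Snowflake load-plus-quality-check step this role describes, assuming the snowflake-connector-python package; credentials, stage, and table names are placeholders, and the row-count assertion stands in for a fuller Great Expectations suite.

```python
# Sketch: load staged files into Snowflake, then run a basic data-quality check.
import snowflake.connector

conn = snowflake.connector.connect(
    user="PIPELINE_USER",          # hypothetical credentials
    password="...",
    account="myorg-myaccount",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    # Load staged files into the target table (stage/table names are illustrative).
    cur.execute("COPY INTO STAGING.ORDERS FROM @ORDERS_STAGE FILE_FORMAT = (TYPE = CSV)")
    # Lightweight quality gate: fail the pipeline on an empty load.
    cur.execute("SELECT COUNT(*) FROM STAGING.ORDERS")
    count = cur.fetchone()[0]
    assert count > 0, "data-quality check failed: ORDERS is empty after load"
    print(f"loaded {count} rows")
finally:
    conn.close()
```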

Posted 1 week ago

2.0 - 5.0 years

3 - 6 Lacs

karnataka

Work from Office

SQL. Budget: $13/hour (18 LPA). Location: Bangalore. 3-5 years. Primary skill is SQL: we are looking for someone with expertise in SQL and advanced SQL, basic working knowledge of Python and shell scripting, and (good to have) Airflow and MDM concepts. Need to work from the customer's Bangalore office a few days a week and remotely the rest.
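As an illustration of "advanced SQL" in the sense usually meant here (window functions, per-group ranking), the following self-contained Python/SQLite snippet ranks reps within each region; the table and data are invented for the demo, and it needs SQLite 3.25+ for window-function support.

```python
# Demo: RANK() window function partitioned by group, via the stdlib sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('south', 'asha', 120), ('south', 'ravi', 90),
        ('north', 'meena', 200), ('north', 'kiran', 150);
""")
query = """
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
"""
for row in conn.execute(query):
    print(row)
```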

Posted 1 week ago

5.0 - 10.0 years

18 - 25 Lacs

kochi

Work from Office

We are looking for a skilled Data Engineer with a strong background in PostgreSQL, SQL, and Python to join our dynamic team in Kochi. The ideal candidate will be responsible for building and optimising data pipelines, ensuring seamless data flow across systems, and supporting our analytics and product teams with reliable data solutions. Job Title: Data Engineer. Company: Pumex Infotech Pvt. Ltd. Experience: 5 to 10 Years. Educational Qualification: B.Tech/M.Tech/BCA/MCA (Computer Science) or a related field. Location: InfoPark, Kochi. Notice Period: Immediate joiners. Responsibilities: Design, develop, and maintain robust ETL/ELT data pipelines using tools such as Airflow or dbt. Optimize Post...

Posted 1 week ago

10.0 years

0 Lacs

hyderabad, telangana, india

On-site

Job Specification: Technical Lead. Team: Investment Risk & Research (Engineering Solutions). Location: Hyderabad & Chennai. Role Overview: The Technical Lead will oversee architecture and design for the Risk & Research Technology organization, facilitating migration to cloud-native technologies and integration with enterprise platforms such as Aladdin. The role ensures consistent engineering practices across squads, accelerates shared component development, and mentors technical teams across the offshore organization. Responsibilities: • Lead architecture and design for cloud-native migration of Risk & Research applications. • Collaborate with horizontal enterprise engineering teams to define an...

Posted 1 week ago

2.0 - 5.0 years

3 - 7 Lacs

karnataka

Work from Office

Experience: 4 to 6 years. Location: Any PSL location. Rate: below $14. JD - DBT/AWS Glue/Python/PySpark: Hands-on experience in data engineering, with expertise in DBT/AWS Glue/Python/PySpark. Strong knowledge of data engineering concepts, data pipelines, ETL/ELT processes, and cloud data environments (AWS). Technology: DBT, AWS Glue, Athena, SQL, Spark, PySpark. Good understanding of Spark internals and how it works. Good skills in PySpark. Good understanding of DBT - should understand DBT limitations and when it will end up in model explosion. Good hands-on experience in AWS Glue. AWS expertise: should know the different services, how to configure them, and have infrastructure-as-code experience. Basic ...
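For a flavour of the hands-on AWS Glue work mentioned, here is a hedged boto3 sketch that starts a Glue job and polls it to completion; the job name and region are hypothetical, and AWS credentials are assumed to be configured.

```python
# Sketch: trigger an AWS Glue job run and wait for a terminal state.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="nightly-etl")  # hypothetical Glue job
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="nightly-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("final state:", state)
        break
    time.sleep(30)  # poll every 30 seconds
```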

Posted 1 week ago

10.0 - 15.0 years

12 - 17 Lacs

pune

Work from Office

R&D 60 Architect: Count: 1 position. Total Experience: 10+ years; Relevant Experience: 3+ years. Technology: Full-stack, polyglot professionals with experience in innovative product development, research, tool selection, integrations, and hands-on development. Mandatory Skillset: 10+ years of hands-on experience in the design, development, and implementation of scalable cloud-based software. Strong software architecture, technical design, and programming skills. Experience in application security, scalability, and performance. Knowledge of and skill in various tools, languages, frameworks, and cloud technologies, with the ability to be hands-on where needed. Expertise on Kubernetes and Docker...

Posted 1 week ago

5.0 - 10.0 years

27 - 30 Lacs

chandigarh, dadra & nagar haveli, jammu

Work from Office

Availability: Immediate to 1 week. Overlap: 4 hours with PST (Pacific Standard Time). Candidate proficiency level: Senior (8+). Requirements: Bachelor's degree or higher in Computer Science, Engineering, or a related field. 5+ years of experience in data engineering, with a strong focus on designing and building data pipelines and infrastructure. Proficient in SQL and Python, with the ability to translate complexity into efficient code. Experience with data workflow development and management tools (dbt, Airflow). Solid understanding of distributed computing principles and experience with cloud-based data platforms such as AWS, GCP, or Azure. Strong analytical and problem-solving skills, with the abi...

Posted 1 week ago

4.0 - 9.0 years

4 - 8 Lacs

bengaluru

Work from Office

Summary: In this role, you would get hands-on experience in data engineering functions including schema design, data movement, data transformation, encryption, and monitoring: all the activities needed to build, sustain, and govern big data pipelines. Responsibilities: Own development of a large-scale data platform including an operational data store, real-time metrics store, attribution platform, and data warehouses and data marts for advertising planning, operation, reporting, and optimization. Wider team collaboration and system documentation. Maintain next-gen cloud-based big data infrastructure for batch and streaming data applications, and continuously improve performance, scalability, and av...

Posted 1 week ago

3.0 - 7.0 years

5 - 9 Lacs

pune

Work from Office

JD GCP Data Engineer Responsibilities: Design, develop, and maintain data pipelines using GCP services such as Dataflow, BigQuery, and Pub/Sub. Implement and manage data warehouse solutions using BigQuery, ensuring data is stored securely and efficiently. Use Pub/Sub for real-time data ingestion and streaming analytics. Provision and manage GCP infrastructure using Terraform, ensuring best practices in IaC are followed. Optimize data storage and retrieval processes to enhance performance and reduce costs. Monitor and troubleshoot data pipeline issues, ensuring high availability and reliability of data services. Ensure data quality and integrity through robust testing and validation processes...
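As a small illustration of the Pub/Sub and BigQuery pieces (not this team's actual pipeline), the sketch below publishes one event and runs an aggregate query; the project, topic, and table names are placeholders, application-default credentials are assumed, and it needs the google-cloud-pubsub and google-cloud-bigquery packages.

```python
# Sketch: publish a JSON event to Pub/Sub, then aggregate events from BigQuery.
import json

from google.cloud import bigquery, pubsub_v1

PROJECT = "my-project"        # hypothetical project id
TOPIC = "clickstream-events"  # hypothetical topic

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, TOPIC)
future = publisher.publish(topic_path, json.dumps({"user": "u1", "action": "view"}).encode())
print("published message id:", future.result())

bq = bigquery.Client(project=PROJECT)
rows = bq.query(
    "SELECT action, COUNT(*) AS n FROM `my-project.analytics.events` "
    "GROUP BY action ORDER BY n DESC LIMIT 10"
).result()
for row in rows:
    print(row.action, row.n)
```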

Posted 1 week ago

3.0 - 5.0 years

3 - 7 Lacs

bengaluru

Work from Office

Azure Databricks:
1. Mounting files
2. PAT tokens
3. Cluster auto-scaling
4. Cluster types
5. Nesting notebooks
6. Reading a file using default and custom schemas
7. Types of secrets and scopes in Databricks
8. Delta tables
9. Basic file-loading scenario with its steps
10. Optimization
11. User-defined functions
12. Transformations

ADF:
1. Integration runtimes: which ones you are familiar with and where they are used
2. Components in creating pipelines
3. Scenario-based question on creating a basic pipeline for loading data into an Azure SQL table
4. Basic steps to improve the loading of a very large file (e.g., 1 TB)
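A brief sketch covering two of the Databricks topics above (reading with a custom schema, writing a Delta table), assuming a Databricks or Delta-enabled Spark environment where `spark` is predefined; the paths are placeholders.

```python
# Sketch: explicit-schema read plus Delta write, for a Databricks notebook.
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

custom_schema = StructType([
    StructField("id", IntegerType(), nullable=False),
    StructField("name", StringType(), nullable=True),
])

# Reading with an explicit schema avoids a slow or incorrect inference pass.
df = spark.read.schema(custom_schema).option("header", "true").csv("/mnt/raw/people.csv")

# Persist as Delta so updates, time travel, and OPTIMIZE become available.
df.write.format("delta").mode("overwrite").save("/mnt/curated/people")
```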

Posted 1 week ago

5.0 - 8.0 years

3 - 9 Lacs

gurgaon

On-site

ROLES & RESPONSIBILITIES Key Responsibilities: Analyze existing Hadoop, Pig, and Spark scripts from Dataproc and refactor them into Databricks-native PySpark. Implement data ingestion and transformation pipelines using Delta Lake best practices. Apply conversion rules and templates for automated code migration and testing. Conduct data validation between legacy and migrated environments (schema, count, and data-level checks). Collaborate on developing AI-driven tools for code conversion, dependency extraction, and error remediation. Ensure best practices for code versioning, error handling, and performance optimization. Participate in UAT, troubleshooting, and post-migration validation activi...
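The validation step described above (schema, count, and data-level checks) might be sketched in PySpark as follows; the table names are hypothetical and an active `spark` session is assumed.

```python
# Sketch: compare a legacy table with its migrated Delta copy.
legacy = spark.table("legacy_db.orders")     # hypothetical Dataproc-era table
migrated = spark.table("lakehouse.orders")   # hypothetical Databricks Delta table

# 1. Schema check: column names and types must match exactly.
assert legacy.schema == migrated.schema, "schema mismatch"

# 2. Count check: no rows lost or duplicated during migration.
assert legacy.count() == migrated.count(), "row-count mismatch"

# 3. Data-level check: the symmetric difference of the two datasets should be empty.
diff = legacy.exceptAll(migrated).union(migrated.exceptAll(legacy))
n_diff = diff.count()
assert n_diff == 0, f"{n_diff} differing rows found"
print("validation passed")
```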

Posted 1 week ago

10.0 - 14.0 years

3 - 9 Lacs

gurgaon

On-site

ROLES & RESPONSIBILITIES Key Responsibilities: Lead design and execution of the Dataproc-to-Databricks PySpark migration roadmap. Define the modernization strategy, including data ingestion, transformation, orchestration, and governance. Architect scalable Delta Lake and Unity Catalog-based solutions. Manage and guide teams on code conversion, dependency mapping, and data validation. Collaborate with platform, infra, and DevOps teams to optimize compute costs and performance. Own the automation & GenAI acceleration layer, integrating code parsers, lineage tools, and validation utilities. Conduct performance benchmarking, cost optimization, and platform tuning (Photon, Auto-scaling, Delta Caching). ...

Posted 1 week ago

0 years

0 Lacs

gurgaon

On-site

Strong proficiency in Python and FastAPI. Experience with MLOps tools and ML experimentation platforms such as MLflow (Runs & Experiments). Expertise in model versioning and lifecycle management (model training). Knowledge of ML frameworks/libraries (e.g., TensorFlow, PyTorch) and orchestration tools such as MLflow, Ray.io, Kubeflow, and Airflow. Familiarity with Docker, Kubernetes, and microservice-based architectures. Experience with tools such as MLflow, LakeFS, MinIO, and NATS for asynchronous communication. Knowledge of database design and best practices. Understanding of RBAC and authentication mechanisms.
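A minimal example of the MLflow Runs & Experiments usage this role references, with an invented experiment name, tracking URI, and metrics; it requires the mlflow package.

```python
# Sketch: log one experiment run to an MLflow tracking server.
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # hypothetical tracking server
mlflow.set_experiment("churn-model")               # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_metric("val_auc", 0.87)
    # In a real pipeline the trained model would also be logged and registered
    # here (e.g. via mlflow.sklearn.log_model) to support versioning and
    # lifecycle management.
```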

Posted 1 week ago

5.0 years

7 - 9 Lacs

gurgaon

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success. Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development...

Posted 1 week ago

5.0 years

5 - 6 Lacs

hyderābād

On-site

JOB DESCRIPTION You enjoy shaping the future of product innovation as a core leader, driving value for customers, guiding successful product launches, and exceeding expectations. Join our dynamic team and make a meaningful impact by delivering high-quality products that resonate with clients. As a Product Manager in the Consumer and Community Banking - Data Publishing and Processing (DPP) team, you are an integral part of the group that innovates in building data publishing and processing products and leads the end-to-end product life cycle. As a core leader, you are responsible for acting as the voice of the customer and developing DPP products that provide customer value. Utilizing your de...

Posted 1 week ago

0 years

0 Lacs

bengaluru, karnataka, india

On-site

Join our Team About this opportunity: Ericsson Enterprise Wireless Solutions (BEWS) is responsible for driving Ericsson’s Enterprise Networking and Security business. Our expanding product portfolio covers wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN enterprise connectivity and are rapidly growing in enterprise Private 5G networks and Secure Access Services Edge (SASE) solutions. What Will You Do: Design, implement, and maintain data pipelines and ETL workflows for structured and unstructured data using scalable frameworks Build and automate ML pipelines for model training, deployment, monitoring, and retraining using ML...

Posted 1 week ago

4.0 - 6.0 years

0 Lacs

hyderābād

On-site

Overview: In this role, you will play a key role in automating and streamlining the data processing pipeline for AMESA markets. Responsibilities: Own data pipeline development end-to-end, spanning data modeling, testing, scalability, operability and ongoing metrics. Ensure that we build high quality software by reviewing peer code check-ins Define best practices for product development, engineering, and coding as part of a world class engineering team in Hyderabad Focus on delivering high quality data pipelines and tools through careful analysis of system capabilities and feature requests, peer reviews, test automation, and collaboration with QA engineers Develop software in short iterations...

Posted 1 week ago

1.0 - 3.0 years

10 - 15 Lacs

noida

Work from Office

Roles and Responsibilities : Design, develop, test, and deploy large-scale data pipelines using Airflow to extract, transform, and load (ETL) data from various sources into AWS cloud storage solutions. Collaborate with cross-functional teams to identify business requirements and design scalable data architectures that meet those needs. Develop high-quality code in Scala or Java to implement ETL processes and integrate them with Kafka messaging systems for real-time event processing. Troubleshoot complex issues related to pipeline failures and optimize system performance for improved efficiency. Job Requirements : 1-3 years of experience in Data Engineering with expertise in Airflow, Scala/Ja...
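For the Kafka integration mentioned, here is a hedged sketch of a producer/consumer round-trip. The listing asks for Scala or Java, but Python (kafka-python) is used here purely to keep the examples in one language; the broker address and topic are placeholders.

```python
# Sketch: publish an ETL event to Kafka and read it back.
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("etl-events", {"table": "orders", "op": "load_complete"})
producer.flush()

consumer = KafkaConsumer(
    "etl-events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating once the topic is drained
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print("received:", message.value)
```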

Posted 1 week ago
