
340 Apache Airflow Jobs - Page 13

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Senior Software Engineer

Location: Bengaluru

As a member of this team, the data engineer will be responsible for designing and expanding our existing data infrastructure, enabling easy access to data, supporting complex data analyses, and automating optimization workflows for business and marketing operations.

Essential Responsibilities:
- Build, refine, tune, and maintain our real-time and batch data infrastructure
- Use technologies such as Python, Spark, Airflow, Snowflake, Hive, and FastAPI daily
- Maintain data quality and accuracy across production data systems
- Work with Data Analysts to develop ETL processes for analysis and reporting
- Work with Product Managers to design and build data products
- Work with our DevOps team to scale and optimize our data infrastructure
- Participate in architecture discussions, influence the roadmap, and take ownership of new projects
- Participate in the on-call rotation in your time zone (be available by phone or email in case something goes wrong)

Desired Characteristics:
- Minimum 8 years of software engineering experience
- An undergraduate degree in Computer Science (or a related field) from a university where the primary language of instruction is English is strongly desired
- 2+ years of experience and fluency in Python
- Proficiency with relational databases and advanced SQL
- Expertise with services such as Spark and Hive; experience with container-based solutions is a plus
- Experience with a workflow scheduler such as Apache Airflow, Luigi, or Chronos
- Experience using cloud services (AWS) at scale
- Proven long-term experience with, and enthusiasm for, distributed data processing at scale; eagerness to learn new things
- Expertise in designing and architecting distributed, low-latency, scalable solutions in cloud or on-premises environments
- Exposure to the whole software development lifecycle, from inception to production and monitoring
- Experience in the advertising attribution domain is a plus
- Experience with agile software development processes
- Excellent interpersonal and communication skills

Posted 3 months ago

Apply

2.0 - 4.0 years

4 - 6 Lacs

Hyderabad

Work from Office

The CDP ETL & Database Engineer will specialize in architecting, designing, and implementing solutions that are sustainable and scalable. The ideal candidate will understand CRM methodologies, have an analytical mindset, and bring a background in relational modeling in a hybrid architecture. The candidate will help drive the business towards specific technical initiatives and will work closely with the Solutions Management, Delivery, and Product Engineering teams, joining a team of developers across the US, India, and Costa Rica.

Responsibilities:
- ETL Development: Build pipelines to feed downstream data processes; analyze data, interpret business requirements, and establish relationships between data sets. Familiarity with encoding formats and file layouts such as JSON and XML is expected.
- Implementations & Onboarding: Work with the team to onboard new clients onto the ZMP/CDP+ platform; solidify business requirements, perform ETL file validation, establish users, perform complex aggregations, and syndicate data across platforms. Take a test-driven approach to development and document processes and workflows.
- Incremental Change Requests: Analyze change requests and determine the best approach to implementation and execution, which requires a deep understanding of the platform's overall architecture. Implement and test change requests in a development environment to ensure their introduction will not negatively impact downstream processes.
- Change Data Management: Adhere to change data management procedures and actively participate in CAB meetings where change requests are presented and approved. Before introducing change, ensure processes are running in a development environment; perform peer-to-peer code reviews and solution reviews before production code deployment.
- Collaboration & Process Improvement: Participate in knowledge-share sessions to discuss solutions, best practices, overall approach, and process with peers. Look for opportunities to streamline processes, with an eye towards building a repeatable model that reduces implementation duration.

Job Requirements:
- Well versed in relational data modeling, ETL and FTP concepts, advanced analytics using SQL functions, and cloud technologies (AWS, Snowflake)
- Able to decipher requirements, provide recommendations, and implement solutions within predefined timeframes
- Able to work independently while also contributing in a team setting
- Able to confidently communicate status, raise exceptions, and voice concerns to their direct manager
- Participates in internal client project status meetings with the Solution/Delivery management teams, and collaborates with the Business Solutions Analyst (BSA) to solidify requirements when required
- Able to work in a fast-paced, agile environment, with a sense of urgency when escalated issues arise
- Strong communication and interpersonal skills; able to multitask and prioritize workload based on client demand
- Familiarity with Jira for workflow management and time allocation
- Familiarity with the Scrum framework: backlog, planning, sprints, story points, retrospectives, etc.

Required Skills:
- ETL: Talend (preferred, not required); DMExpress and Informatica are nice to have
- Databases: hands-on experience with Snowflake (required); MySQL/PostgreSQL nice to have; familiarity with NoSQL methodologies nice to have
- Programming languages: PL/SQL; JavaScript and Python are strong pluses; Scala is nice to have
- AWS: knowledge of S3, EMR (concepts), EC2 (concepts), and Systems Manager / Parameter Store
- Understands JSON data structures and key-value pairs
- Working knowledge of code repositories such as Git, WinCVS, and SVN
- Workflow management tools such as Apache Airflow, Kafka, and Automic/Appworx
- Jira

Minimum Qualifications:
- Bachelor's degree or equivalent
- 2-4 years' experience
- Excellent verbal and written communication skills
- Self-starter, highly motivated, with an analytical mindset

Posted 3 months ago

Apply

4.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Experience: 5+ years
Location: Bengaluru

Role Overview

We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.

What you'll be doing:
- Design, develop, and optimise DBT models to support scalable data transformations
- Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect (see the sketch after this list)
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks
- Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance

What you'll have:
- 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning
- 5+ years of strong SQL expertise, with experience in analytical query optimization and database performance tuning
- 5+ years of programming experience, especially building custom DBT macros, scripts, and APIs, and working with AWS services using boto3
- 3+ years of experience with orchestration tools like Apache Airflow and Prefect for scheduling DBT jobs
- Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch)
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform)
- Ability to communicate timelines effectively and deliver the MVPs set for the sprint
- Strong analytical and problem-solving skills, with the ability to work across cross-functional teams

Nice to have:
- Experience in hardware manufacturing data processing
- Contributions to open-source data engineering tools
- Knowledge of Tableau or other BI tools for data visualization
- Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization
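
As a flavor of the orchestration work described above, here is a minimal sketch of an Airflow DAG that schedules dbt runs, assuming Airflow 2.4+ and the dbt CLI available on the worker; the project path, DAG name, and schedule are illustrative assumptions, not details from this posting:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/dbt/analytics"  # hypothetical dbt project location

with DAG(
    dag_id="dbt_daily_transformations",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build all models, then run the project's tests against the results.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test",
    )
    dbt_run >> dbt_test
```

Production setups of this shape typically add retries, alerting, and CI/CD-driven deployment of the DAG itself.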

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Are you a seasoned data engineer with a passion for hands-on technical work? Do you thrive in an environment that values innovation, collaboration, and cutting-edge technologies? We are looking for a seasoned Integration Engineer to join our team, someone who is passionate about building and maintaining scalable data pipelines and integrations. The ideal candidate will have a strong foundation in Python programming, experience with Snowflake for data warehousing, proficiency in AWS and Kubernetes (EKS) for cloud services management, and expertise in CI/CD practices, Apache Airflow, DBT, and API development. This role is critical to enhancing our data integration capabilities and supporting our data-driven initiatives.

Role and Responsibilities:

As the Technical Data Integration Engineer, you will play a pivotal role in shaping the future of our data integration engineering initiatives. You will be part of a team of talented data integration engineers while remaining actively involved in the technical aspects of the projects. Your responsibilities will include:
- Hands-On Contribution: Continue to be hands-on with data integration engineering tasks, including data pipeline development, EL processes, and data integration. Be the go-to expert for complex technical challenges.
- Integration Architecture: Design and implement scalable and efficient data integration architectures that meet business requirements. Ensure data integrity, quality, scalability, and security throughout the pipeline.
- Tool Proficiency: Leverage your expertise in Snowflake, SQL, Apache Airflow, AWS, APIs, and Python to architect, develop, and optimize data solutions. Stay current with emerging technologies and industry best practices.
- Data Quality: Monitor data quality and integrity, implementing data governance policies as needed.
- Cross-Functional Collaboration: Collaborate with data science, data warehousing, analytics, and other cross-functional teams to understand data requirements and deliver actionable insights.
- Performance Optimization: Identify and address performance bottlenecks within the data infrastructure. Optimize data pipelines for speed, reliability, and efficiency.

Qualifications:
- Minimum Bachelor's degree in Computer Science, Engineering, or a related field; an advanced degree is a plus
- 5 years of hands-on experience in data engineering
- Familiarity with cloud platforms such as AWS or Azure
- Expertise in Apache Airflow, Snowflake, SQL, Python, shell scripting, API gateways, and web services setup
- Strong experience in full-stack development, AWS, Linux administration, data lake construction, data quality assurance, and integration metrics
- Excellent analytical, problem-solving, and decision-making abilities
- Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders
- A collaborative mindset, with a focus on team success

If you are a results-oriented Data Integration Engineer with a strong background in Apache Airflow, Snowflake, SQL, Python, and APIs, we encourage you to apply. Join us in building data solutions that drive business success and innovation.
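
For context on the Python-plus-Snowflake integration work this role centers on, here is a minimal sketch of loading staged files into Snowflake with the snowflake-connector-python package; the account identifier, credentials, and object names are illustrative assumptions:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",  # hypothetical account identifier
    user="etl_user",
    password="...",                # elided; use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # COPY INTO ingests files already landed in an external stage.
    cur.execute(
        "COPY INTO raw.orders FROM @raw.s3_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
finally:
    cur.close()
    conn.close()
```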

Posted 3 months ago

Apply

5.0 - 7.0 years

30 - 40 Lacs

Bengaluru

Hybrid

Senior Software Developer (Python)

Experience: 5 - 7 Years
Salary: Up to USD 40,000 / year
Preferred Notice Period: Within 60 Days
Shift: 11:00 AM to 8:00 PM IST
Opportunity Type: Hybrid (Bengaluru)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Apache Airflow, Astronomer, Pandas/PySpark/Dask, RESTful APIs, Snowflake, Docker, Python, SQL
Good-to-have skills: CI/CD, Data Visualization, Matplotlib, Prometheus, AWS, Kubernetes

A single platform for loans/securities and finance (one of Uplers' clients) is looking for a Senior Software Developer (Python) who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Job Summary

We are seeking a highly skilled Senior Python Developer with expertise in large-scale data processing and Apache Airflow. The ideal candidate will be responsible for designing, developing, and maintaining scalable data applications and optimizing data pipelines. You will be an integral part of our R&D and Technical Operations team, focusing on data engineering, workflow automation, and advanced analytics.

Key Responsibilities:
- Design and develop sophisticated Python applications for processing and analyzing large datasets
- Implement efficient and scalable data pipelines using Apache Airflow and Astronomer
- Create, optimize, and maintain Airflow DAGs for complex workflow orchestration
- Work with data scientists to implement and scale machine learning models
- Develop robust APIs and integrate various data sources and systems
- Optimize application performance for handling petabyte-scale data operations
- Debug, troubleshoot, and enhance existing Python applications
- Write clean, maintainable, and well-tested code following best practices
- Participate in code reviews and mentor junior developers
- Collaborate with cross-functional teams to translate business requirements into technical solutions

Required Skills & Qualifications:
- Strong programming skills in Python with 5+ years of hands-on experience
- Proven experience with large-scale data processing frameworks (e.g., Pandas, PySpark, Dask)
- Extensive hands-on experience with Apache Airflow for workflow orchestration
- Experience with the Astronomer platform for Airflow deployment and management
- Proficiency in SQL and experience with the Snowflake database
- Expertise in designing and implementing RESTful APIs
- Basic knowledge of Java programming
- Experience with containerization technologies (Docker)
- Strong problem-solving skills and the ability to work independently

Preferred Skills:
- Experience with cloud platforms (AWS)
- Knowledge of CI/CD pipelines and DevOps practices
- Familiarity with Kubernetes for container orchestration
- Experience with data visualization libraries (Matplotlib, Seaborn, Plotly)
- Background in financial services or experience with financial data
- Proficiency in monitoring tools like Prometheus, Grafana, and the ELK stack

Engagement Type: Full-time, direct hire on RiskSpan payroll
Job Type: Permanent
Location: Hybrid (Bengaluru)
Working hours: 11:00 AM to 8:00 PM IST
Interview Process: 3-4 rounds

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload an updated resume and complete the screening form
3. Increase your chances of being shortlisted and meet the client for the interview

About Our Client: RiskSpan uncovers insights and mitigates risk for mortgage loans and structured products. The Edge Platform provides data and predictive models to run forecasts under a range of scenarios and analyze Agency and non-Agency MBS, loans, and MSRs. Leverage our bleeding-edge cloud, machine learning, and AI capabilities to scale faster, optimize model builds, and manage information more efficiently.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal besides this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
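
To illustrate the Airflow-plus-Pandas workflow orchestration this posting emphasizes, here is a minimal TaskFlow-style DAG sketch, assuming Airflow 2.4+ with Pandas and a Parquet engine installed; the pipeline name, file paths, and column names are illustrative assumptions, not details from the posting:

```python
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def loan_metrics_pipeline():
    @task
    def extract() -> str:
        # Stand-in for pulling a raw extract from an upstream system.
        return "/tmp/loans_raw.csv"

    @task
    def transform(path: str) -> str:
        df = pd.read_csv(path)
        # Aggregate outstanding balance per product as a stand-in transform.
        summary = df.groupby("product")["balance"].sum().reset_index()
        out = "/tmp/loan_summary.parquet"
        summary.to_parquet(out, index=False)
        return out

    transform(extract())


loan_metrics_pipeline()
```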

Posted 3 months ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Duration: 8 months
Job Type: Contract
Work Type: Onsite

Top Responsibilities:
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, Glue, Apache Airflow, etc.
- Build and deliver high-quality data architecture and pipelines to support business analysts, data scientists, and customer reporting needs
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Leadership Principles: Ownership, Customer Obsession, Dive Deep, and Deliver Results

Mandatory Requirements:
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL and SQL tuning
- Basic to mid-level proficiency in scripting with Python

Education or Certification Requirements: Any graduation
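
As a small example of scripting against the AWS services this contract lists, here is a sketch of starting and polling an AWS Glue job with boto3; the job name and region are illustrative assumptions:

```python
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a run of a pre-existing Glue ETL job, then check its state.
run = glue.start_job_run(JobName="daily_orders_etl")
status = glue.get_job_run(JobName="daily_orders_etl", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED
```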

Posted 3 months ago

Apply

8.0 - 12.0 years

13 - 20 Lacs

Chennai

Work from Office

We are looking for an experienced Python ETL Developer to design, develop, and optimize data pipelines. The ideal candidate should have expertise in Python, PySpark, Airflow, and data processing frameworks, along with the ability to work independently and communicate effectively in English.

Roles & Responsibilities:
- Develop and maintain ETL pipelines using Python, NumPy, Pandas, PySpark, and Apache Airflow
- Work with large-scale data processing and transformation workflows
- Optimize and enhance ETL performance and scalability
- Collaborate with data engineers and business teams to ensure efficient data flow
- Troubleshoot and debug ETL-related issues to ensure data integrity and reliability

Qualifications & Skills:
- 8+ years of Python experience, with 5+ years dedicated to Python ETL development
- Proficiency in PySpark, Apache Airflow, NumPy, and Pandas
- Experience working with SQLAlchemy and FastAPI (an added advantage)
- Strong problem-solving skills and the ability to work independently
- Good English communication skills for collaborating with global teams

Preferred Qualifications:
- Experience with cloud-based ETL solutions (AWS, GCP, Azure)
- Knowledge of big data technologies like Hadoop, Spark, or Kafka
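
For a sense of the PySpark ETL work described, here is a minimal batch pipeline sketch; the bucket paths, column names, and filter logic are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from object storage.
orders = spark.read.json("s3a://example-raw/orders/")

# Transform: keep completed orders, derive revenue, drop duplicates.
cleaned = (
    orders.filter(F.col("status") == "COMPLETED")
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .dropDuplicates(["order_id"])
)

# Load: write date-partitioned Parquet for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-curated/orders/"
)
```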

Posted 3 months ago

Apply

10.0 - 15.0 years

16 - 25 Lacs

Bengaluru

Hybrid

Experience: 10+ Years
Location: Bengaluru, Hybrid
Notice Period: Immediate joiners to 30 days
Mode of Interview: 2-3 rounds
Key Skills: Snowflake or Databricks, Python or Java, Cloud, Data Modeling

Primary Skills:
- Erwin tool
- Data Modeling (logical, physical) - 2+ years
- Snowflake - 2+ years
- Warehousing concepts
- Any cloud

Secondary Skills:
- dbt Cloud
- Airflow
- AWS

Job Description:

This is a hands-on technology position for a data technology leader with specialized business knowledge in the middle/front office areas. The candidate has a proven record of technology project execution for data on the cloud, is able to get hands-on when it comes to analysis, design, and development, and has the creativity and self-motivation to deliver on mission-critical projects.

These skills will help you succeed in this role:
- 10+ years of experience in an application development team, with hands-on architecting, designing, developing, and deployment skills
- Demonstrated ability to translate business requirements into a technical design and through to implementation
- Experienced subject matter expert in designing and architecting Big Data platforms, services, and systems using Java/Python, SQL, Databricks, Snowflake, and cloud-native tools on Azure and AWS
- Experience with event-driven architectures, message hubs, MQ, and Kafka
- Experience with Kubernetes, ETL tools, Data as a Service, star schemas, dimensional modeling, OLTP, ACID, and data structures is desired
- Proven experience with cloud and Big Data platforms, building data processing applications using Spark, Airflow, object storage, etc.
- Ability to work in an on-shore/off-shore model with development teams across continents
- Use of coding standards, secure application development, documentation, release and configuration management, and expertise in CI/CD
- Well versed in SDLC using Agile Scrum
- Plan and execute the deployment of releases
- Ability to work with application development, SQA, and infrastructure teams
- Strong leadership and analytical problem-solving skills, along with the ability to learn and adapt quickly
- Self-motivated, a quick learner, a creative problem solver, organized, and responsible for managing a team of development engineers

Education & Preferred Qualifications:
- Bachelor's degree and 6 or more years of experience in Information Technology
- Strong team ethics and a team player
- Cloud, Databricks, or Snowflake certification is a plus
- Experience in estimating software cost and delivery timelines and managing financials
- Experience leading agile delivery and adhering to SDLC processes is required
- Work closely with business and IT stakeholders to manage delivery

Additional Requirements:
- Ability to lead delivery, manage team members if required, and provide feedback
- Ability to make effective decisions and manage change
- Communicates effectively and professionally, both in writing and orally
- Team player with a positive attitude, enthusiasm, initiative, and self-motivation

Posted 3 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
- Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift
- Optimize Redshift performance using distribution styles, sort keys, and compression techniques (see the sketch after this list)
- Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt
- Develop complex SQL queries, stored procedures, and materialized views for data transformations
- Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB
- Implement data partitioning, clustering, and query tuning strategies for optimal performance
- Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.)
- Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI
- Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies
- Automate data ingestion, transformations, and warehouse maintenance tasks

Required Skills & Qualifications:
- 6+ years of experience in data warehousing, ETL, and data engineering
- Strong hands-on experience with Amazon Redshift and AWS data services
- Expertise in SQL performance tuning, indexing, and query optimization
- Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend
- Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena)
- Familiarity with data lake architectures and the modern data stack
- Proficiency in Python, shell scripting, or PySpark for automation
- Experience working in Agile/DevOps environments with CI/CD pipelines
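
To make the distribution-style and sort-key tuning concrete, here is a sketch of Redshift table DDL applied from Python via psycopg2; the cluster endpoint, credentials, and table design are illustrative assumptions:

```python
import psycopg2

# DISTKEY co-locates rows that join on customer_id on the same slice;
# SORTKEY lets Redshift skip blocks on date-range predicates.
DDL = """
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)
SORTKEY (sale_date);
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # elided; prefer IAM auth or a secrets manager
)
with conn, conn.cursor() as cur:
    cur.execute(DDL)
```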

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Patna

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

5 - 10 years

25 - 30 Lacs

Bengaluru

Hybrid

Job Title: Senior Data Engineer

Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our data team. This role is pivotal in designing, building, and maintaining robust data infrastructure to support analytics, reporting, and machine learning initiatives. The ideal candidate will have strong technical expertise in data engineering tools and best practices, and a passion for transforming raw data into actionable insights.

Responsibilities:
- Design, develop, and optimize scalable data pipelines to process large volumes of structured and unstructured data.
- Build and maintain efficient ETL (Extract, Transform, Load) workflows and automate data integration from various sources.
- Manage and optimize data warehousing solutions to ensure high availability and performance.
- Collaborate with machine learning engineers to deploy and maintain ML models in production environments.
- Ensure high standards of data quality, governance, and consistency across all data systems.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and enhance data models.
- Monitor, troubleshoot, and improve data infrastructure for reliability and scalability.

Qualifications:
- Proven experience in data engineering with a strong grasp of SQL, Python, and Apache Spark.
- Hands-on experience in designing and implementing ETL pipelines and data models.
- Proficiency with cloud platforms such as AWS, Google Cloud Platform (GCP), or Microsoft Azure.
- Deep understanding of big data technologies, including Hadoop, Kafka, Hive, and related tools.
- Strong problem-solving skills and the ability to work collaboratively in a team-oriented environment.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.

Preferred Qualifications (Optional):
- Experience with orchestration tools such as Apache Airflow or Prefect.
- Familiarity with containerization (Docker, Kubernetes).
- Knowledge of data security and compliance standards (e.g., GDPR, HIPAA).

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Pune

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Lucknow

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Bengaluru

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Mumbai

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Surat

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Kanpur

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Hyderabad

Work from Office

Role: Data Engineer

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Nagpur

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Chennai

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Ahmedabad

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Kolkata

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

7 - 10 years

8 - 14 Lacs

Jaipur

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.

Key Responsibilities:
- Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration.
- Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows.
- Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms.
- SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs.
- Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations.

Skills and Qualifications:
- Proven experience as a Data Engineer or in a similar role.
- Strong knowledge of Oracle to BigQuery data warehouse migration and modernization.
- Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem.
- In-depth knowledge of Oracle DB and PL/SQL.
- Excellent SQL writing skills.
- Strong analytical and problem-solving abilities.
- Ability to work collaboratively with cross-functional teams.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience with other data management tools and technologies.
- Knowledge of cloud-based data solutions.
- Certification in data engineering or a related field.

Posted 4 months ago

Apply

3 - 5 years

25 - 35 Lacs

Bengaluru

Remote

Data Engineer

Experience: 3 - 5 Years
Salary: Up to INR 35 Lacs per annum
Preferred Notice Period: Within 30 Days
Shift: 10:30 AM to 7:30 PM IST
Opportunity Type: Remote
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients.)

Must-have skills: Apache Airflow, Spark, AWS, Kafka, SQL
Good-to-have skills: Apache Hudi, Flink, Iceberg, Azure, GCP

Nomupay (one of Uplers' clients) is looking for a Data Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview:
- Design, build, and optimize scalable ETL pipelines using Apache Airflow or similar frameworks to process and transform large datasets efficiently.
- Utilize Spark (PySpark), Kafka, Flink, or similar tools to enable distributed data processing and real-time streaming solutions (see the sketch after this listing).
- Deploy, manage, and optimize data infrastructure on cloud platforms such as AWS, GCP, or Azure, ensuring security, scalability, and cost-effectiveness.
- Design and implement robust data models, ensuring data consistency, integrity, and performance across warehouses and lakes.
- Enhance query performance through indexing, partitioning, and tuning techniques for large-scale datasets.
- Manage cloud-based storage solutions (Amazon S3, Google Cloud Storage, Azure Blob Storage) and ensure data governance, security, and compliance.
- Work closely with data scientists, analysts, and software engineers to support data-driven decision-making, while maintaining thorough documentation of data processes.

Requirements:
- Strong proficiency in Python and SQL, with additional experience in languages such as Java or Scala.
- Hands-on experience with frameworks like Spark (PySpark), Kafka, Apache Hudi, Iceberg, Apache Flink, or similar tools for distributed data processing and real-time streaming.
- Familiarity with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure for building and managing data infrastructure.
- Strong understanding of data warehousing concepts and data modeling principles.
- Experience with ETL tools such as Apache Airflow or comparable data transformation frameworks.
- Proficiency in working with data lakes and cloud-based storage solutions like Amazon S3, Google Cloud Storage, or Azure Blob Storage.
- Expertise in Git for version control and collaborative coding.
- Expertise in performance tuning for large-scale data processing, including partitioning, indexing, and query optimization.

About Nomupay: NomuPay is a newly established company whose subsidiaries provide state-of-the-art unified payment solutions to help clients accelerate growth in large, high-growth countries in Asia, Turkey, and the Middle East region. NomuPay is funded by Finch Capital, a leading European and South East Asian financial technology investor. Nomu Pay acquired Wirecard Turkey on April 21, 2021 for an undisclosed amount.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal
2. Upload an updated resume and complete the screening form
3. Increase your chances of being shortlisted and meet the client for the interview

About Our Client: At Nomupay, we're all about making global payments simple. Since 2021, we've been on a mission to remove complexity and help businesses expand without limits.

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help all our talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities on the portal besides this one.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
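
As an illustration of the Kafka-to-lake streaming work in scope here, a minimal Spark Structured Streaming sketch, assuming the spark-sql-kafka connector is on the classpath; the broker, topic, schema, and storage paths are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructType, TimestampType

spark = SparkSession.builder.appName("payments_stream").getOrCreate()

schema = (
    StructType()
    .add("txn_id", StringType())
    .add("amount", DoubleType())
    .add("event_time", TimestampType())
)

# Read payment events from Kafka and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "payments")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the stream as date-partitioned Parquet on object storage.
query = (
    events.withColumn("dt", F.to_date("event_time"))
    .writeStream.format("parquet")
    .option("path", "s3a://example-lake/payments/")
    .option("checkpointLocation", "s3a://example-lake/_chk/payments/")
    .partitionBy("dt")
    .start()
)
query.awaitTermination()
```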

Posted 4 months ago

Apply

4 - 7 years

6 - 9 Lacs

Noida

Work from Office

Role Objective: We are seeking a Software Data Engineer with 4-7 years of experience to join our Data Platform team. This role reports to the Manager of Data Engineering and is involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Qualifications:
- Deep knowledge of and experience working with Python/Scala and Apache Spark
- Experienced with Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Data Lake, and Delta Lake
- Experienced with the orchestration tool Apache Airflow
- Experience working with SQL and NoSQL database systems such as MongoDB, and with Apache Parquet
- Experience with Azure cloud environments
- Experience acquiring and preparing data from primary and secondary disparate data sources
- Experience working on large-scale data product implementations
- Experience working with agile methodology preferred
- Healthcare industry experience preferred

Responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions
- Work with teams that have deep experience in ETL processes and data science domains to understand how to centralize their data
- Share your passion for experimenting with and learning new technologies
- Perform thorough data analysis, uncover opportunities, and address business problems
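
For a flavor of orchestrating Azure Databricks from Airflow as this role describes, here is a minimal sketch using the Databricks provider's submit-run operator, assuming apache-airflow-providers-databricks is installed and a databricks_default connection is configured; the cluster spec and notebook path are illustrative assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

# Hypothetical ephemeral job cluster for the run.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
}

with DAG(
    dag_id="adb_nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_delta_etl",
        databricks_conn_id="databricks_default",
        new_cluster=new_cluster,
        notebook_task={"notebook_path": "/Repos/etl/ingest_to_delta"},
    )
```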

Posted 4 months ago

Apply
