
4 Enterprise Reporting Jobs

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the job portal itself.

4.0 - 5.0 years

18 - 20 Lacs

Hyderabad

Work from Office

Key Skills: Azure Synapse Pipelines (SQL/Apache Spark), Data Lake, and EDW. Good-to-have skills: Power BI, Azure Data Factory (ADF), and Azure Databricks; knowledge of CI/CD practices using GitHub, Azure DevOps, or similar tools.

Summary: Skilled Azure Data Engineer with hands-on experience in Azure Synapse Pipelines (SQL/Apache Spark), Data Lake, and Enterprise Data Warehouse (EDW) architecture. The role involves designing, building, and maintaining scalable data pipelines and analytical solutions in a modern Azure data ecosystem.

Key Responsibilities:
- Design and develop data ingestion and transformation pipelines using Azure Synapse Pipelines, Apache Spark, and SQL.
- Integrate data from diverse sources into Azure Data Lake Storage (ADLS Gen2) and EDW environments.
- Develop and optimize data models and dataflows for enterprise reporting and analytics.
- Ensure scalable, high-performance ETL/ELT processing using Synapse and Spark pools.
- Collaborate with business analysts, data scientists, and BI developers to deliver clean, curated datasets.
- Monitor pipeline execution, troubleshoot failures, and manage performance tuning.
- Implement data governance, quality checks, and security controls across data pipelines.
- Maintain documentation, version control, and CI/CD processes for data workflows.

Required Skills & Experience:
- Hands-on experience in data engineering or ETL development.
- Strong hands-on experience with Azure Synapse Analytics (Pipelines, SQL Pools, Apache Spark Pools).
- Solid understanding of Azure Data Lake (Gen2) architecture and folder structures.
- Proficiency in SQL, T-SQL, and Apache Spark (PySpark or Scala) for data transformation.
- Experience working with Enterprise Data Warehouses (EDW) and dimensional modeling.
- Familiarity with Delta Lake, Parquet, JSON, and other big data file formats.
- Experience integrating data from relational databases, APIs, and flat files into Azure environments.

Preferred Skills:
- Experience with Azure Data Factory (ADF) and Azure Databricks.
- Knowledge of CI/CD practices using GitHub, Azure DevOps, or similar tools.
- Understanding of data governance frameworks and data cataloging tools (e.g., Purview).
- Familiarity with Power BI or other visualization tools.
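The quality checks and curation step this role describes can be illustrated with a minimal sketch. This is plain stdlib Python rather than PySpark (so it runs without a Spark cluster), and the record fields (`customer_id`, `order_date`, `amount`) are hypothetical, chosen only to show the pattern of validating rows before loading a curated dataset into the EDW.

```python
# Illustrative sketch only: a row-level quality check of the kind a pipeline
# stage might apply before loading curated data. Field names are hypothetical.

def validate_row(row, required=("customer_id", "order_date", "amount")):
    """Return a list of quality issues for one record; empty means clean."""
    issues = []
    for field in required:
        if row.get(field) in (None, ""):
            issues.append(f"missing {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        issues.append("negative amount")
    return issues

rows = [
    {"customer_id": "C1", "order_date": "2024-01-05", "amount": 120.0},
    {"customer_id": "", "order_date": "2024-01-06", "amount": -5.0},
]
clean = [r for r in rows if not validate_row(r)]
rejected = [r for r in rows if validate_row(r)]
```

In a real Synapse or Spark pipeline, the same rule set would typically be expressed as DataFrame filters, with rejected rows routed to a quarantine location for review.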

Posted 17 hours ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Key Skills: Azure Synapse Pipelines (SQL/Apache Spark), Data Lake, and EDW. Good-to-have skills: Power BI, Azure Data Factory (ADF), and Azure Databricks; knowledge of CI/CD practices using GitHub, Azure DevOps, or similar tools.

Summary: Skilled Azure Data Engineer with hands-on experience in Azure Synapse Pipelines (SQL/Apache Spark), Data Lake, and Enterprise Data Warehouse (EDW) architecture. The role involves designing, building, and maintaining scalable data pipelines and analytical solutions in a modern Azure data ecosystem.

Key Responsibilities:
- Design and develop data ingestion and transformation pipelines using Azure Synapse Pipelines, Apache Spark, and SQL.
- Integrate data from diverse sources into Azure Data Lake Storage (ADLS Gen2) and EDW environments.
- Develop and optimize data models and dataflows for enterprise reporting and analytics.
- Ensure scalable, high-performance ETL/ELT processing using Synapse and Spark pools.
- Collaborate with business analysts, data scientists, and BI developers to deliver clean, curated datasets.
- Monitor pipeline execution, troubleshoot failures, and manage performance tuning.
- Implement data governance, quality checks, and security controls across data pipelines.
- Maintain documentation, version control, and CI/CD processes for data workflows.

Required Skills & Experience:
- Hands-on experience in data engineering or ETL development.
- Strong hands-on experience with Azure Synapse Analytics (Pipelines, SQL Pools, Apache Spark Pools).
- Solid understanding of Azure Data Lake (Gen2) architecture and folder structures.
- Proficiency in SQL, T-SQL, and Apache Spark (PySpark or Scala) for data transformation.
- Experience working with Enterprise Data Warehouses (EDW) and dimensional modeling.
- Familiarity with Delta Lake, Parquet, JSON, and other big data file formats.
- Experience integrating data from relational databases, APIs, and flat files into Azure environments.

Preferred Skills:
- Experience with Azure Data Factory (ADF) and Azure Databricks.
- Knowledge of CI/CD practices using GitHub, Azure DevOps, or similar tools.
- Understanding of data governance frameworks and data cataloging tools (e.g., Purview).
- Familiarity with Power BI or other visualization tools.

Posted 17 hours ago

Apply


5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Required Skills & Experience:
- 5+ years of experience working with data warehousing systems.
- 3+ years of strong hands-on programming expertise in the Databricks landscape, including Spark SQL and Workflows, for data processing and pipeline development.
- 3+ years of strong hands-on data transformation/ETL skills using Spark SQL, PySpark, and Unity Catalog within the Databricks Medallion architecture.
- 2+ years of experience with one of the cloud platforms: Azure, AWS, or GCP.
- Experience using Git version control, and well versed in CI/CD best practices for automating the deployment and management of data pipelines and infrastructure.
- Nice to have: hands-on experience building data ingestion pipelines from ERP systems (preferably Oracle Fusion) into a Databricks environment, using Fivetran or an alternative data connector.
- Experience in a fast-paced, ever-changing, and growing environment.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Must have report development experience using Power BI, SplashBI, or another enterprise reporting tool.

Roles & Responsibilities:
- Contribute to the design and development of enterprise data solutions in Databricks, from ideation to deployment, ensuring robustness and scalability.
- Work with the Data Architect to build and maintain robust, scalable data pipeline architectures on Databricks using PySpark and SQL.
- Assemble and process large, complex ERP datasets to meet diverse functional and non-functional requirements.
- Participate in continuous optimization efforts, implementing testing and tooling techniques to enhance data solution quality.
- Focus on improving the performance, reliability, and maintainability of data pipelines.
- Implement and maintain PySpark and Databricks SQL workflows for querying and analyzing large datasets.
- Participate in release management using Git and CI/CD practices.
- Develop business reports using the SplashBI reporting tool, leveraging data from the Databricks gold layer.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Finance, or equivalent experience.
- Good communication skills.
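The Medallion architecture this role references promotes data through bronze (raw), silver (cleaned), and gold (business-ready) layers. A minimal sketch of one bronze-to-silver step follows, in plain stdlib Python rather than PySpark on Databricks (so it runs anywhere); the dataset shape and column names are hypothetical, and a real silver table would normally be written as Delta.

```python
# Hedged sketch of a Medallion-style bronze -> silver step: deduplicate raw
# records and cast string fields to typed values. Columns are hypothetical.
from datetime import datetime

bronze = [
    {"id": "1", "ts": "2024-03-01T10:00:00", "qty": "3"},
    {"id": "1", "ts": "2024-03-01T10:00:00", "qty": "3"},  # ingestion duplicate
    {"id": "2", "ts": "2024-03-02T09:30:00", "qty": "5"},
]

def to_silver(rows):
    """Drop duplicate (id, ts) records and cast fields to proper types."""
    seen, out = set(), []
    for r in rows:
        key = (r["id"], r["ts"])
        if key in seen:
            continue
        seen.add(key)
        out.append({
            "id": int(r["id"]),
            "ts": datetime.fromisoformat(r["ts"]),
            "qty": int(r["qty"]),
        })
    return out

silver = to_silver(bronze)
```

On Databricks the equivalent step would be a `dropDuplicates` plus column casts on a DataFrame, with the result written to a Unity Catalog-governed Delta table for the gold layer to aggregate.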

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies