Azure Data Engineer

4.0 years

0.0 Lacs P.A.

India

Posted: 9 hours ago | Platform: LinkedIn


Skills Required

azure, data, databricks, pyspark, pipeline, jenkins, gitlab, devops, processing, analysis, code, design, automate, testing, deployment, development, integration, github, integrity, reliability, extraction, flow, collaboration, communication, engineering, etl, writing, relational, sql, datafactory

Work Mode

On-site

Job Type

Full Time

Job Description

Mandatory Skills: Azure Cloud Technologies, Azure Data Factory, Azure Databricks (Advanced Knowledge), PySpark, CI/CD Pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), Data Ingestion, SQL

Seeking a skilled Data Engineer with expertise in Azure cloud technologies, data pipelines, and big data processing. The ideal candidate will be responsible for designing, developing, and optimizing scalable data solutions.

Responsibilities

Azure Databricks and Azure Data Factory Expertise:
- Demonstrate proficiency in designing, implementing, and optimizing data workflows using Azure Databricks and Azure Data Factory.
- Provide expertise in configuring and managing data pipelines within the Azure cloud environment.

PySpark Proficiency:
- Possess a strong command of PySpark for data processing and analysis.
- Develop and optimize PySpark code to ensure efficient and scalable data transformations.

Big Data & CI/CD Experience:
- Troubleshoot and optimize data processing tasks on large datasets.
- Design and implement automated CI/CD pipelines for data workflows, using tools such as Jenkins, GitLab CI/CD, or Azure DevOps to automate the building, testing, and deployment of data pipelines.

Data Pipeline Development & Deployment:
- Design, implement, and maintain end-to-end data pipelines for various data sources and destinations.
- Write unit tests for individual components, integration tests to ensure that different components work together correctly, and end-to-end tests to verify the entire pipeline's functionality.
- Familiarity with GitHub repositories for deployment of code.
- Ensure data quality, integrity, and reliability throughout the entire data pipeline.

Extraction, Ingestion, and Consumption Frameworks:
- Develop frameworks for efficient data extraction, ingestion, and consumption.
- Implement best practices for data integration and ensure seamless data flow across the organization.
Collaboration and Communication:
- Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
- Communicate effectively with stakeholders to gather and clarify data-related requirements.

Requirements
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- 4+ years of relevant hands-on experience in data engineering with Azure cloud services and advanced Databricks.
- Strong analytical and problem-solving skills in handling large-scale data pipelines.
- Experience in big data processing and working with structured and unstructured datasets.
- Expertise in designing and implementing data pipelines for ETL workflows.
- Strong proficiency in writing optimized queries and working with relational databases.
- Experience in developing data transformation scripts and managing big data processing using PySpark.

Skills: SQL, Azure, Azure Databricks, PySpark, data ingestion, Azure cloud technologies, Azure Data Factory, CI/CD pipelines (Jenkins, GitLab CI/CD, or Azure DevOps), Azure Databricks (advanced knowledge)
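The CI/CD responsibility described in this role could look something like the following Azure DevOps pipeline sketch, which runs unit tests and then deploys notebooks to a Databricks workspace. The file name, stage layout, paths, and target workspace directory are all illustrative assumptions, not details from the posting.

```yaml
# azure-pipelines.yml -- illustrative sketch, not a verbatim project file
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Test
    jobs:
      - job: UnitTests
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: '3.10'
          - script: |
              pip install -r requirements.txt
              pytest tests/unit          # hypothetical test directory
            displayName: Run unit tests

  - stage: Deploy
    dependsOn: Test
    jobs:
      - job: DeployNotebooks
        steps:
          - script: |
              pip install databricks-cli
              databricks workspace import_dir notebooks /Shared/pipelines --overwrite
            displayName: Deploy notebooks to Databricks
```

Gating the deploy stage on a passing test stage is the pattern the posting's "building, testing, and deployment" phrasing points at; the same structure carries over to Jenkins or GitLab CI/CD with different syntax.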

TestUnity
