Posted: 2 days ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Project Role:

Data Engineer

Project Role Description:

Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
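The ETL-with-data-quality workflow described above can be sketched minimally. This is an illustrative example in plain Python with hypothetical record shapes and function names, not the Azure Data Factory/Databricks tooling the role actually uses:

```python
# Minimal, illustrative ETL sketch (hypothetical names; a real pipeline for
# this role would run on ADF/Databricks rather than in-memory lists).

def extract(rows):
    """Extract: pull raw records from the source as-is."""
    return list(rows)

def transform(rows):
    """Transform: normalize fields and drop records failing quality checks."""
    clean = []
    for row in rows:
        # Data-quality gate: require a non-empty id and a numeric amount.
        if row.get("id") and isinstance(row.get("amount"), (int, float)):
            clean.append({"id": str(row["id"]).strip(),
                          "amount": float(row["amount"])})
    return clean

def load(rows, target):
    """Load: append validated records to the target store (a list here)."""
    target.extend(rows)
    return len(rows)

# Usage: run the pipeline end to end; two of three records fail quality checks.
source = [{"id": " a1 ", "amount": 10},
          {"id": "", "amount": 5},
          {"id": "b2", "amount": "bad"}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
```

In practice each stage would be a pipeline activity or notebook task, with quality rules expressed as table constraints rather than inline checks.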

Must-have skills:

Microsoft Azure Databricks

Good-to-have skills:

NA

Minimum 5 years of experience required

Educational Qualification:

15 years of full-time education

Summary:

As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Key Responsibilities:

1. Act as the Lead Data Architect for a small, simple project or proposal, or as a team lead for a medium/large project or proposal
2. Discuss Big Data architecture and related issues with the client's architects and team (in the area of expertise)
3. Analyze and assess the impact of requirements on the data and its lifecycle
4. Lead the architecture and design of medium-to-large cloud-based Big Data and analytical solutions using the Lambda architecture
5. Breadth of experience across varied client scenarios and situations
6. Experienced in Big Data architecture-based sales and delivery
7. Provide thought leadership and innovation
8. Lead the creation of new data assets and offerings
9. Experience handling OLTP and OLAP data workloads

Technical Experience:

1. Experience working in the Medallion architecture, applying Delta Lakehouse principles
2. Expert-level experience designing and architecting solutions with Azure Data Factory, Azure Databricks, Azure Data Lake, and Delta Lake implementations
3. Expert-level experience with Azure cloud technologies and languages: PySpark, ADF, Databricks, Python, Scala, and SQL
4. Experience with one or more real-time/batch ingestion approaches, including Azure Delta Live Tables and Auto Loader
5. Experience handling medium-to-large Big Data implementations
6. Strong understanding of data strategy, data quality, and Delta Lake components
7. 7-10 years of IT experience overall, including around 5 years of Big Data experience (design and build) in Databricks
8. Architect for a medium-sized client delivery project

Professional Experience:

1. Able to drive technology design meetings and propose technology designs and architecture
2. Excellent client communication skills
3. Good analytical and problem-solving skills

Educational Qualification:

1. Must have: BE/BTech/MCA
2. Good to have: ME/MTech
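The Medallion (bronze/silver/gold) layering the posting asks for can be sketched conceptually. This is a plain-Python illustration with hypothetical names; in Databricks each layer would be a Delta table populated by Spark jobs, not an in-memory list:

```python
# Conceptual Medallion-architecture sketch (hypothetical names; in a real
# Databricks lakehouse each layer is a Delta table, not a Python list).

def to_bronze(raw_events):
    """Bronze: land raw records unmodified, tagged with their source."""
    return [{"source": "events", **e} for e in raw_events]

def to_silver(bronze):
    """Silver: cleanse and conform - drop malformed rows, cast types."""
    silver = []
    for e in bronze:
        if e.get("user") and str(e.get("value", "")).isdigit():
            silver.append({"user": e["user"], "value": int(e["value"])})
    return silver

def to_gold(silver):
    """Gold: business-level aggregate - total value per user."""
    totals = {}
    for e in silver:
        totals[e["user"]] = totals.get(e["user"], 0) + e["value"]
    return totals

# Usage: malformed rows are filtered out at the silver layer.
raw = [{"user": "u1", "value": "3"}, {"user": "u1", "value": "4"},
       {"user": None, "value": "9"}, {"user": "u2", "value": "oops"}]
gold = to_gold(to_silver(to_bronze(raw)))
```

The design point the layering buys is that raw data is always preserved in bronze, so silver and gold can be rebuilt when cleansing or aggregation rules change.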


Accenture in India

Business Consulting and Services

Dublin 2 San Francisco
