Posted: 21 hours ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Data / ETL Architect

Experience: 14+ years

Skills: Databricks, ETL, PySpark, Python, Presales


Requirements

  • Minimum eight years of relevant experience as a data architect or data engineer building large-scale data solutions.
  • P&C domain experience is a must.
  • Bachelor’s degree in engineering, Information Technology, Computer Science, or a related field.
  • Experience architecting large-scale data modernization, data migration, and data warehousing initiatives, including cloud-based data platforms (like Snowflake).
  • Experience with defining and operationalizing data strategy, data governance, data lineage and quality standards.
  • Extensive knowledge of data engineering, data integration, and data management concepts (e.g., APIs, ETL, MDM, CRUD, Pub/Sub).
  • Experience with data modelling.
  • Experience with structured and hierarchical datasets (e.g., JSON, XML).
  • Engineering experience with large-scale system integration and analytics projects.
  • Consulting mindset – highly collaborative, highly communicative approach with an eye on influence, rather than control.
  • Ability to work on high-level strategy and low-level tactical integration along with stakeholders at all levels of the organization.
  • Ability to communicate complex systems and concepts through pictures.
  • Clear and concise communication skills – both written and oral.
  • Remains unbiased toward any specific technology or vendor; more interested in results.



  • Should have 15+ years of experience, with the last 4 years spent implementing cloud-native data solutions for a variety of data consumption needs, such as modern data warehousing, BI, insights, and analytics.
  • Should have experience architecting and implementing end-to-end modern data solutions using AWS and advanced data processing frameworks such as Databricks.
  • Strong knowledge of cloud-native data platform architectures, data engineering, and data management.
  • Good knowledge of popular database and data warehouse technologies from Snowflake and AWS.
  • Demonstrated knowledge of data warehouse concepts, with a strong understanding of cloud-native databases and columnar database architectures.
  • Ability to work with data engineering, data management, BI, and analytics teams in a complex IT development environment.
  • Good appreciation of, and at least one implementation experience with, data engineering processing substrates such as ETL tools, Confluent Kafka, and ELT techniques.
  • Exposure to a variety of databases, including NoSQL (at minimum key-value and/or document stores) and appliances; able to cite implementation experience, constraints, and performance challenges encountered in practice.
  • Preferable (nice to have): experience implementing analytic models using AWS SageMaker for production workloads.
  • Knowledge of designing and implementing Data Mesh architectures and data products is an added advantage.
