Data Architect

Experience

3 years

Salary

0 Lacs

Posted: 9 hours ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Main Purpose:


  • Act as the Data Architect, ensuring that all commitments are delivered on time and to a high standard.
  • Manage the data model and schema.
  • Work closely with product and engineering teams to evaluate and recommend design patterns and solutions for data platforms, with a focus on graph-based CDP, ETL, and Lambda architectures (a streaming sketch follows this list).
  • Provide technical input to agile processes, such as epic, story, and task definition, to resolve issues and remove barriers throughout the lifecycle of client engagements.
  • Work with Marcel's data, product, and engineering teams to deliver on all requests pertaining to data, features, graph schema, feeds and integrations, data science models, and other data management requirements.
  • Collaborate with feature teams to ensure features are delivered successfully end to end, that all data lifecycle events are managed correctly, and that SLAs are implemented as required.
  • Be responsible for driving the Data and Engineering team to deliver feature requirements from inception to production.
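
To ground the Lambda-architecture focus mentioned above, the following is a minimal PySpark Structured Streaming sketch of a "speed layer" that aggregates events in near-real time for the serving layer to merge with batch output; the Kafka broker, topic, event schema, and storage paths are illustrative assumptions, not details from this posting.

```python
# A minimal Lambda-style "speed layer" sketch: read events from a stream,
# aggregate them in near-real time, and land the results where the serving
# layer can merge them with batch output. The broker address, topic name,
# schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("speed-layer-sketch").getOrCreate()

event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "profile-events")              # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Windowed counts with a watermark so late data is bounded and streaming
# state can be purged; "append" mode emits a window only once it closes.
counts = (
    events
    .withWatermark("event_ts", "10 minutes")
    .groupBy(F.window("event_ts", "5 minutes"), "event_type")
    .count()
)

(counts.writeStream
    .outputMode("append")
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/speed/")  # hypothetical
    .start("/mnt/speed/event_counts/"))                       # hypothetical
```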

Key responsibilities:


The key accountabilities for this role include, but are not limited to:


  • Ensure the data models of the Marcel program are managed efficiently and that model enhancements align with data modelling principles, standards, and metadata practices.
  • Ensure that all data lifecycle events are efficiently managed by the Marcel platform, aligning technology and feature teams accordingly.
  • Ensure that data quality in production is measured, maintained, and operationally supported.
  • Work closely with feature teams to ensure that all analytics, data, and architectures align with the Data Strategy.
  • Act as a point of contact and advisor on all data-related features of Marcel and, where relevant, drive enhancements from concept through to production delivery.
  • Coach and mentor others on best practices, data principles, and performance.
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools that help analytics and data science team members build and optimize the product into an innovative industry leader.


Specific responsibilities:


  • Responsible for the overall data architecture of the platform, including Neo4j, SQL, Postgres, and any other integration points between data and application
  • Responsible for leading the team of data engineers to build data pipelines using a combination of Azure Data Factory and Databricks (a pipeline sketch follows this list)
  • Accountable for delivery of team commitments
  • Responsible for training and development of team members
  • Responsible for the design and architecture of feeds and data integrations
  • Responsible for sign-off of deliverables
  • Responsible for establishing best practices and standards
  • Write maintainable and effective data feeds and pipelines
  • Follow best practices for test-driven development and continuous integration
  • Design, develop, test, and implement end-to-end requirements
  • Contribute to all phases of the development life cycle
  • Perform unit testing and troubleshoot applications
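
As an illustration of the Azure Data Factory and Databricks pipeline work referenced in this list, here is a minimal PySpark sketch of one batch curation step; the paths, column names, and Delta target are hypothetical assumptions.

```python
# A minimal batch curation step of the kind a Databricks job orchestrated by
# Azure Data Factory might run: ingest raw JSON, apply basic quality rules,
# and write a partitioned Delta table. Paths and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-profile-events").getOrCreate()

raw = spark.read.json("/mnt/raw/profile_events/")  # hypothetical landing zone

curated = (
    raw
    .filter(F.col("user_id").isNotNull())             # basic data-quality gate
    .withColumn("event_date", F.to_date("event_ts"))  # derive partition column
    .dropDuplicates(["user_id", "event_ts"])          # keep re-runs idempotent
)

(curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/curated/profile_events/"))            # hypothetical curated zone
```

In a setup like this, Azure Data Factory would typically handle scheduling and dependency management while the transformation logic lives in the Databricks job.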

Key competencies:


Must-have skills:

  • Strong written and verbal communication skills
  • Strong experience in implementing graph database technologies (property graph)
  • Strong experience in leading data modelling activities for a production graph database solution
  • Strong experience in Cypher (or TinkerPop Gremlin) with an understanding of query tuning (a Cypher sketch follows this list)
  • Strong experience working with data integration technologies, specifically Azure services, ADF, ETL, JSON, and Hop or other ETL orchestration tools
  • Strong experience using PySpark, Scala, and Databricks
  • 3-5+ years' experience in the design and implementation of complex distributed systems architectures
  • Strong experience with Master Data Management solutions
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
  • Experience with stream-processing systems such as Storm and Spark Streaming
  • Strong knowledge of Azure-based services
  • Strong understanding of RDBMS data structures, Azure Tables, Blob Storage, and other data sources
  • Experience with GraphQL
  • Experience in high availability and disaster recovery solutions
  • Experience with test-driven development
  • Understanding of Jenkins and CI/CD processes using ADF and Databricks
  • Strong analytical skills for working with unstructured datasets
  • Strong analytical skills to triage and troubleshoot issues
  • Results-oriented and able to work across the organization as an individual contributor
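
For candidates gauging the Cypher expectation, the following minimal sketch uses the official Neo4j Python driver; the connection details, node labels, and relationship types are hypothetical assumptions, not Marcel's actual graph schema.

```python
# A minimal property-graph query sketch using the official Neo4j Python
# driver. The URI, credentials, labels, and relationship types are
# hypothetical placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

query = """
MATCH (p:Person)-[:WORKED_ON]->(proj:Project)
WHERE proj.client = $client
RETURN p.name AS name, count(proj) AS projects
ORDER BY projects DESC
LIMIT 10
"""  # prepend PROFILE in Neo4j Browser to inspect the plan when tuning

with driver.session() as session:
    # $client is passed as a parameter rather than interpolated into the
    # query string, which keeps plans cacheable and avoids injection.
    for record in session.run(query, client="Acme"):
        print(record["name"], record["projects"])

driver.close()
```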
