On-site | Full Time
• Lead engagements with internal stakeholders to understand their data needs and design solutions that ensure data quality and governance.
• Design the right pipeline architecture to handle data and support various use cases, including analytical reporting and machine learning.
• Design and evolve scalable, modular enterprise data architecture (including data lakes, data warehouses, real-time streaming, and operational data stores) to support current and future data needs.
• Lead end-to-end architecture for data platforms, including ingestion, storage, modeling, access, and integration.
• Define and enforce data modeling standards and practices (e.g., star/snowflake schema, 3NF, Data Vault), ensuring consistency, maintainability, and performance.
• Partner with cross-functional teams (Engineering, Analytics, Data Science, Product, Sales, Marketing, etc.) to translate business requirements into reliable, robust data architectures.
• Design architecture to support both batch and real-time processing needs using tools such as Kafka, dbt, and Airflow.
• Evaluate and integrate emerging technologies, open-source tools, and vendor solutions to improve architectural robustness and delivery efficiency.
We realize applying for jobs can feel daunting at times. Even if you don't check all the boxes in the job description, we encourage you to apply anyway.
• 10+ years of experience designing and implementing enterprise-scale data solutions, including data warehouses, data lakes, and real-time systems.
• Hands-on experience building and orchestrating ETL/ELT data pipelines using tools such as dbt, Apache Airflow, Glue, Airbyte, Matillion, or Stitch.
• 4+ years of experience in data modeling, schema design, performance optimization, and database architecture for both OLTP and OLAP systems.
• Strong proficiency in SQL, with a demonstrated ability to write, debug, and optimize complex queries for performance and scalability.
• Solid programming skills in Python (preferred) or Java, with experience building data transformation and integration logic.
• Deep understanding of AWS data services (such as S3, RDS, DynamoDB, Glue, Lambda, EMR, ECS) and cloud data warehouse platforms like Snowflake.
• Expertise in performance tuning, pipeline debugging, and optimizing data workflows in large-scale environments.
• Experience deploying and maintaining enterprise data platforms that support analytics, BI, and machine learning use cases at scale.
• Familiarity with distributed data processing frameworks such as Apache Spark, Hadoop, and Apache Kafka for real-time and big data processing.
• Strong foundation in software engineering best practices, including version control, CI/CD, modular design, and testing.
• Proven ability to lead and deliver complex data architecture projects in cloud-native environments, with excellent stakeholder and cross-functional collaboration skills.
• Strong analytical thinking, communication, and technical leadership abilities.
• Good understanding of software engineering principles and standards.
Dautom