Posted: 2 weeks ago
Work from Office
Full Time
We're looking for a Sr Big Data Engineer who expects more from their career. It's a chance to extend and improve dunnhumby's Data Engineering Team, and an opportunity to work with a market-leading business to explore new opportunities for us and influence global retailers.

Key Responsibilities
- Design end-to-end data solutions, including data lakes, data warehouses, ETL/ELT pipelines, APIs, and analytics platforms.
- Architect scalable, low-latency data pipelines using tools such as Apache Kafka, Flink, or Spark Streaming to handle high-velocity data streams.
- Design and orchestrate end-to-end automation using orchestration frameworks such as Apache Airflow to manage complex workflows and dependencies.
- Design intelligent systems that can detect anomalies, trigger alerts, and automatically reroute or restart processes to maintain data integrity and availability.
- Develop scalable data architecture strategies that support advanced analytics, machine learning, and real-time data processing.
- Define and implement data governance, metadata management, and data quality standards.
- Lead architectural reviews and technical design sessions to guide solution development.
- Partner with business and IT teams to translate business needs into data architecture requirements.
- Explore appropriate tools, platforms, and technologies aligned with organizational standards.
- Ensure security, compliance, and regulatory requirements are addressed in all data solutions.
- Evaluate and recommend improvements to existing data architecture and processes.
- Provide mentorship and guidance to data engineers and technical teams.

Technical Expertise
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Science, or a related field.
- 7+ years of experience in data architecture, data engineering, or a related field.
- Proficiency with data pipeline tools such as Apache Spark, Kafka, Airflow, or similar.
- Experience with data governance frameworks and tools (e.g., Collibra, Alation, OpenMetadata).
- Strong knowledge of cloud platforms (Azure or Google Cloud), especially cloud-native data services.
- Strong understanding of API design and data security best practices.
- Familiarity with data mesh, data fabric, or other emerging architectural patterns.
- Experience working in Agile or DevOps environments.
- Experience with modern data stack tools (e.g., dbt, Snowflake, Databricks).
- Extensive experience with high-level programming languages: Python, Java, and Scala.
- Experience with Hive, Oozie, Airflow, HBase, MapReduce, and Spark, along with working knowledge of Hadoop/Spark toolsets.
- Extensive experience with Git and process automation.
- In-depth understanding of relational database management systems (RDBMS) and data flow development.

Soft Skills
- Problem-Solving: Strong analytical skills to troubleshoot and resolve complex data pipeline issues.
- Communication: Ability to articulate technical concepts to non-technical stakeholders and document processes clearly.
- Collaboration: Experience working in cross-functional teams and managing stakeholder expectations.
- Adaptability: Willingness to learn new tools and technologies to stay ahead in the rapidly evolving data landscape.
Hyderabad
15.0 - 20.0 Lacs P.A.