Work from Office
Full Time
We are hiring a Staff Software Engineer to shape the future of data engineering at Gap Inc. In this role, you will design and deliver high-impact, cloud-native data solutions that power decision-making across our global retail business. You will work hands-on with cutting-edge technologies like Databricks, PySpark, and Azure/GCP cloud platforms, while mentoring peers, driving engineering best practices, and influencing architecture decisions. This role is ideal for engineers who thrive on solving complex data challenges, collaborating with diverse stakeholders, and continuously pushing the boundaries of scalability, automation, and innovation. As part of GapTech's growing Hyderabad hub, you will play a critical role in building the next generation of data platforms that fuel our iconic brands worldwide.
• Partner with Product Managers and Solution Architects to define technical requirements and translate them into scalable designs.
• Design, build, and maintain reliable cloud-based data pipelines integrating multiple business applications.
• Collaborate with business stakeholders to capture and fulfill data requirements for analytics and operations.
• Guide team members on applying design patterns, coding standards, and code quality metrics.
• Conduct peer code reviews and enforce best practices in software engineering.
• Drive continuous improvement in coding, testing, and automation practices.
• Work in a DevOps environment with CI/CD pipelines, automated releases, and application monitoring.
• Perform root cause analysis on data and processes to identify and resolve issues.
• Recommend emerging technologies and solutions that improve platform capabilities.
• Contribute technical inputs to Statements of Work for external service providers.
• 9-11 years of experience in a Data Engineering role.
• Graduate degree in Computer Science or equivalent.
• Strong analytical thinking, logical reasoning, and problem-solving skills.
• Hands-on expertise with cloud-based data engineering platforms:
  o Azure: Databricks, PySpark, Data Lake, Unity Catalog, ADF
  o Or equivalent GCP stack: BigQuery, Dataflow, Composer, etc.
• Proficiency in building scalable, reliable data pipelines across multiple business applications.
• Experience with relational and non-relational databases, data streams, and file stores.
• Proficiency in automation and scripting using Linux shell and Python.
• Strong familiarity with version control and CI/CD tools (GitHub, Jenkins).
Good-to-Have Skills
• Experience with reporting and BI tools such as Power BI or MicroStrategy.
• Familiarity with enterprise integration patterns.
• Exposure to DevOps practices including monitoring and observability.
• Knowledge of data governance practices and cataloging.
• Experience supporting business users with analytics and reporting needs.
• Strong communication skills and ability to collaborate in cross-functional teams.