Senior Data Engineer - ETL

Experience

5 years


Posted: 2 weeks ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

The Opportunity

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data team in Gurgaon. The ideal candidate will have a strong background in designing, building, and maintaining robust, scalable, and efficient data pipelines and data warehousing solutions. You will play a crucial role in transforming raw data into actionable insights, enabling data-driven decision-making across the organization.

Responsibilities

- Data Pipeline Development: Design, develop, construct, test, and maintain highly scalable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala, Java).
- Data Warehousing: Build and optimize data warehouse solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) to support reporting, analytics, and machine learning initiatives.
- Data Modeling: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and design optimal data models (dimensional, relational, etc.).
- Performance Optimization: Identify and implement solutions for data quality issues, data pipeline performance bottlenecks, and data governance challenges.
- Cloud Technologies: Work extensively with cloud-based data platforms (AWS, Azure, GCP) and their respective data services (e.g., S3, EC2, Lambda, Glue, Data Factory, Azure Synapse, GCS, Dataflow, BigQuery).
- Automation & Monitoring: Implement automation for data pipeline orchestration, monitoring, and alerting to ensure data reliability and availability.
- Mentorship: Mentor junior data engineers, provide technical guidance, and contribute to best practices and architectural decisions within the data team.
- Collaboration: Work closely with cross-functional teams, including product, engineering, and business intelligence, to deliver data solutions that meet business needs.
- Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and data processes.

Qualifications

- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
- 5+ years of professional experience in data engineering, with a strong focus on building and optimizing data pipelines and data warehousing solutions.
- Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java); Python is highly preferred.
- Extensive experience with SQL and relational databases.
- Demonstrated experience with cloud data platforms (AWS, Azure, or GCP) and their relevant data services.
- Strong understanding of data warehousing concepts (e.g., Kimball methodology, OLAP, OLTP) and experience with data modeling techniques.
- Experience with big data technologies (e.g., Apache Spark, Hadoop, Kafka).
- Familiarity with version control systems (e.g., Git).

Desired Skills

- Experience with specific data warehousing solutions like Snowflake, Redshift, or Google BigQuery.
- Knowledge of containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines for data solutions.
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker).
- Understanding of machine learning concepts and how data engineering supports ML workflows.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a collaborative team in a fast-paced environment.

(ref:hirist.tech)


Company

FiftyFive Technologies
