Experience

6.0 years

Salary

0.0 Lacs P.A.

Location

Gurugram, Haryana, India

Posted: 1 week ago | Platform: LinkedIn


Skills Required

data, design, storage, ETL, software, support, scalability, reliability, integrity, governance, AWS, GCP, Azure, resolve, architecture, engineering, Python, SQL, Scala, Java, pipeline, Apache Airflow, Redshift, relational, NoSQL, PostgreSQL, MongoDB, DynamoDB, Spark, Kafka, Hadoop

Work Mode

On-site

Job Type

Full Time

Job Description

You will be responsible for:

- Designing and developing scalable, efficient data pipelines for ingestion, transformation, and storage.
- Building and maintaining robust ETL/ELT workflows across a variety of data sources and platforms.
- Collaborating with data analysts, scientists, and software engineers to support data initiatives.
- Optimizing data systems for performance, scalability, and reliability.
- Ensuring high data quality, integrity, and governance.
- Working with cloud-based data services (AWS/GCP/Azure) and data warehousing technologies.
- Monitoring and troubleshooting production data flows and resolving issues proactively.
- Documenting data workflows, architecture, and standards.

Skills & Experience

- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum 6 years of hands-on experience as a Data Engineer.
- Strong proficiency in Python, SQL, and at least one of Scala or Java.
- Experience with data pipeline frameworks (e.g., Apache Airflow, dbt).
- Hands-on experience with cloud platforms such as AWS (S3, Redshift, Glue), GCP, or Azure.
- Solid understanding of relational and NoSQL databases (e.g., PostgreSQL, MongoDB, DynamoDB).
- Familiarity with data warehousing concepts (Snowflake, BigQuery, Redshift, etc.).
- Working knowledge of big data tools (e.g., Apache Spark, Kafka, Hadoop) is a plus.
