Posted: 3 weeks ago
Work from Office
Full Time
Key Responsibilities
Design, build, and manage scalable, reliable, and secure ETL/ELT data pipelines using tools such as Apache Spark, Apache Flink, Airflow, and Databricks.
Develop and maintain data architecture, ensuring efficient data modeling, warehousing, and data flow across systems.
Collaborate with data scientists, analysts, and business teams to understand data requirements and implement robust solutions.
Work with cloud platforms (AWS, Azure, or GCP) to build and optimize data lake and data warehouse environments (e.g., Redshift, Snowflake, BigQuery).
Implement CI/CD pipelines for data infrastructure using tools such as Jenkins, Git, Terraform, and related DevOps tools.
Apply data quality and governance best practices to ensure accuracy, completeness, and consistency of data.
Monitor data pipelines, diagnose issues, and ensure data availability and performance.
Requirements
5-8 years of proven experience in data engineering or related roles.
Strong programming skills in Python (including PySpark) and SQL.
Experience with big data technologies such as Apache Spark, Apache Flink, Hadoop, Hive, and HBase.
Proficient in building data pipelines using orchestration tools like Apache Airflow.
Hands-on experience with at least one major cloud platform (AWS, Azure, GCP), including services like S3, ADLS, Redshift, Snowflake, or BigQuery.
Experience with data modeling, data warehousing, and real-time/batch data processing.
Familiarity with CI/CD practices, Git, and Terraform or similar infrastructure-as-code tools.
Ability to design for scalability, maintainability, and high availability.
Preferred Qualifications
Bachelor's degree in Information Technology, Computer Information Systems, Computer Engineering, or Computer Science.
Certifications in cloud platforms (e.g., AWS Certified Data Analytics, Google Professional Data Engineer).
Experience with containerization tools such as Docker and orchestration with Kubernetes.
Experience with workflow automation.
Familiarity with machine learning pipelines and serving infrastructure.
Experience in implementing data governance, data lineage, and metadata management practices.
Exposure to modern data stack tools like dbt, Kafka, or Fivetran.
Razorthink
Location: Gurugram, Haryana, India
Salary: Not disclosed