Posted: 1 month ago
Work from Office
Full Time
Role & responsibilities

- Design, build, and maintain scalable and efficient data pipelines and ETL/ELT processes.
- Develop and optimize data models for analytics and operational purposes in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable datasets.
- Implement data quality checks, monitoring, and alerting for pipelines.
- Work with structured and unstructured data across various sources (APIs, databases, streaming).
- Ensure data security, compliance, and governance practices are followed.
- Write clean, efficient, and testable code using Python, SQL, or Scala.
- Support the development of data catalogs and documentation.
- Participate in code reviews and contribute to best practices in data engineering.

Preferred candidate profile

- 3-9 years of hands-on experience in data engineering or a similar role.
- Strong proficiency in SQL, Python, and PySpark.
- Experience with at least one data pipeline orchestration tool, such as Apache Airflow, Prefect, or Luigi.
- Familiarity with at least one cloud platform, such as AWS, Azure, or GCP, and its data services (e.g., S3, Lambda, Glue, BigQuery, Dataflow).
- Experience with at least one big data tool, such as Spark, Kafka, Hive, or Hadoop.
- Strong understanding of relational and non-relational databases.
- Exposure to CI/CD practices and tools (e.g., Git, Jenkins, Docker).
- Excellent problem-solving and communication skills.
Clobminds
Chennai
4.0 - 7.0 Lacs P.A.