Posted: 2 months ago | Hybrid | Full Time
Job Description: Data Engineer

Role Overview:
We are seeking a highly skilled Data Engineer to join our team. The ideal candidate will have strong experience in Java, SQL databases, and Snowflake, along with knowledge of Data Lake, AWS Glue, and other cloud-based data processing technologies. This role involves designing, developing, and optimizing data pipelines to ensure efficient data flow and accessibility.

Responsibilities:
- Design, build, and maintain scalable data pipelines for processing large datasets.
- Develop and optimize SQL queries for efficient data retrieval and transformation.
- Work with Snowflake to implement data warehousing solutions, ensuring data integrity and performance.
- Implement Data Lake architectures for structured and unstructured data storage.
- Leverage AWS Glue for ETL workflows, data transformation, and automation.
- Collaborate with data analysts, engineers, and business stakeholders to define and optimize data models.
- Ensure data security, compliance, and governance best practices in cloud-based environments.
- Monitor data pipeline performance and troubleshoot issues to maintain system reliability.
- Stay up to date with industry trends and best practices in data engineering and cloud technologies.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 3-16 years of experience in data engineering, data warehousing, or related fields.
- Strong experience in Java programming for data processing and pipeline development.
- Proficiency in SQL databases (MySQL, PostgreSQL, SQL Server, or similar).
- Hands-on experience with Snowflake for data warehousing and analytics.
- Knowledge of Data Lake architectures for managing large-scale data storage.
- Experience with AWS Glue or similar cloud-based ETL tools.
- Familiarity with big data technologies (Spark, Hadoop, or similar) is a plus.
- Strong analytical and problem-solving skills with a keen eye for optimization.
- Excellent communication and collaboration skills to work with cross-functional teams.

Preferred Skills (Nice to Have):
- Experience with Python or Scala for data processing.
- Familiarity with Apache Airflow or other workflow orchestration tools.
- Understanding of data governance, security, and compliance best practices.
- Knowledge of streaming technologies (Kafka, Kinesis) for real-time data processing.

Must Have:
- Immediate joiners preferred.
- Candidates from Bangalore are preferred.
Altimetrik