Experience: 4 - 9 years
Salary: 6 - 16 Lacs P.A.
Posted: 1 week ago
Work from Office
Full Time
Position Name: Data Engineer
Location: Coimbatore (Hybrid, 3 days per week)
Work Shift Timing: 1:30 pm to 10:30 pm (IST)
Mandatory Skills: Hadoop, Spark, Python, Databricks
Good to Have: Java/Scala

The Role:
• Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
• Constructing infrastructure for efficient ETL processes from various sources and storage systems.
• Leading the implementation of algorithms and prototypes to transform raw data into useful information.
• Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
• Creating innovative data validation methods and data analysis tools.
• Ensuring compliance with data governance and security policies.
• Interpreting data trends and patterns to establish operational alerts.
• Developing analytical tools, programs, and reporting mechanisms.
• Conducting complex data analysis and presenting results effectively.
• Preparing data for prescriptive and predictive modeling.
• Continuously exploring opportunities to enhance data quality and reliability.
• Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements:
• Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
• Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines.
• High proficiency in Scala/Java and Spark for large-scale data processing (see the illustrative Spark sketch below).
• Expertise with big data technologies, including Spark, Data Lake, and Hive.
• Solid understanding of batch and streaming data processing techniques.
• Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
• Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
• Experience with HDFS, NiFi, and Kafka.
• Experience with Apache Ozone, Delta Tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
• Familiarity with Agile methodologies.
• Strong focus on service observability, instrumentation, monitoring, and alerting.
• Knowledge of or experience with architectural best practices for building data lakes.

Interested candidates can share their resume at Neesha1@damcogroup.com.
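As a rough illustration of the Spark pipeline work described above, here is a minimal batch ETL sketch in Scala. The landing and warehouse paths, the column names (event_id, event_ts, event_type), and the job name are hypothetical placeholders, not Damco's actual pipeline code; on Databricks the output would typically be written as Delta rather than plain Parquet.

    // Illustrative Spark batch pipeline: read raw CSV events, validate,
    // aggregate per day and event type, and write a partitioned table.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyEventPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-event-pipeline")
          .getOrCreate()

        // Extract: raw events landed by an upstream ingestion job (e.g. a NiFi/Kafka sink).
        val raw = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///landing/events/")          // hypothetical landing path

        // Transform: basic validation plus a daily aggregate per event type.
        val daily = raw
          .filter(col("event_id").isNotNull && col("event_ts").isNotNull)
          .withColumn("event_date", to_date(col("event_ts")))
          .groupBy("event_date", "event_type")
          .agg(count("*").as("event_count"))

        // Load: partitioned output; swap .parquet for .format("delta").save on Databricks.
        daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("hdfs:///warehouse/daily_event_counts/")  // hypothetical warehouse path

        spark.stop()
      }
    }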
Damco Solutions