Data Engineer

Experience: 3 - 8 years

Salary: 25 - 30 Lacs

Posted: 2 months ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Join our Team

About this opportunity:

We are looking for a Senior Data Engineer with expertise in SAP HANA and Snowflake to design, develop, and manage scalable data pipelines and analytics solutions. This role involves data modelling, ETL/ELT development, access control, cloud infrastructure, and performance optimization to support real-time business intelligence and analytics.

Key Responsibilities:

1. Data Engineering & Modelling
  • Design and optimize data models in SAP HANA (Calculation Views, CDS Views) and Snowflake (star/snowflake schemas, clustering).
  • Develop ETL/ELT workflows for structured and semi-structured data (JSON, Parquet, Avro).
  • Optimize query performance, storage, and compute costs.

2. Data Integration & Pipeline Automation
  • Ingest data from SAP, APIs, databases, and cloud storage (AWS S3, ADLS, GCS).
  • Automate data processing pipelines using Apache Airflow or Snowflake Streams & Tasks.
  • Enable real-time data ingestion and transformation.

3. Security & Access Management
  • Implement Role-Based Access Control (RBAC), Row-Level Security (RLS), and column-level masking.
  • Manage IAM policies and authentication mechanisms (OAuth, SAML, LDAP).
  • Monitor audit logs and access history for compliance.

4. Cloud & Infrastructure Management
  • Deploy and manage Snowflake workloads on AWS, Azure, or GCP.
  • Automate infrastructure provisioning using Terraform (IaC).
  • Optimize warehouse scaling and auto-suspend configurations.

5. BI & Analytics Enablement
  • Support real-time dashboards in Power BI.
  • Integrate SAP HANA views with Snowflake for hybrid analytics.
  • Collaborate with data analysts, BI teams, and business stakeholders.

6. Performance Optimization & Cost Control
  • Tune queries, indexes, partitions, and caching strategies.
  • Monitor compute consumption and warehouse usage to control costs.
  • Reduce data redundancy and optimize storage layers.
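A typical piece of the ETL/ELT work described above is flattening semi-structured records (e.g. nested JSON) into tabular rows before loading them into a warehouse. A minimal Python sketch of that transformation, using only the standard library; the record shape and field names (order_id, customer, items, sku, qty) are purely illustrative, not tied to any real schema:

```python
import json

def flatten_order(record: dict) -> list[dict]:
    """Flatten one nested order record into one row per line item.

    The field names here are hypothetical examples of a
    semi-structured source feeding a warehouse table.
    """
    rows = []
    for item in record.get("items", []):
        rows.append({
            "order_id": record["order_id"],
            "customer": record.get("customer", {}).get("name"),
            "sku": item["sku"],
            "qty": item["qty"],
        })
    return rows

raw = '{"order_id": 1, "customer": {"name": "Acme"},' \
      ' "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 5}]}'
rows = flatten_order(json.loads(raw))
```

In a real pipeline this step would typically run inside an Airflow task or a Snowflake task, with the flattened rows written to staged Parquet or loaded directly into a target table.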
What you bring:
  • Strong hands-on experience with SAP HANA: data modelling, SQL scripting, and performance optimization
  • Proficiency in SAP BODS (BusinessObjects Data Services) for ETL development, data migration, and data integration
  • Expertise in Snowflake: data warehousing, schema design, performance tuning, and ELT processes
  • Working knowledge of Python for scripting and automation
  • Experience with AWS data services (S3, Glue, Redshift, Lambda, etc.)
  • Exposure to PySpark for distributed data processing and large-scale data handling
  • Strong analytical and problem-solving skills, with the ability to troubleshoot data issues
  • Excellent communication and collaboration skills for working effectively with cross-functional teams
  • Ability to translate business requirements into technical solutions
  • Proactive in identifying data quality issues and implementing robust data validation checks
  • Experience working in agile environments, including sprint planning, reviews, and retrospectives
  • Self-driven, with a continuous-learning mindset and the ability to adapt to evolving technologies
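One of the requirements above, "implementing robust data validation checks", can be illustrated with a small self-contained Python sketch. The rules and field names (required order_id and sku, non-negative qty) are hypothetical examples of pipeline-level data quality checks, not part of the role's actual stack:

```python
def validate_row(row: dict) -> list[str]:
    """Return a list of data-quality errors for one record.

    The specific checks are illustrative: required fields must be
    present and non-empty, and qty must be a non-negative number.
    """
    errors = []
    for field in ("order_id", "sku"):
        if row.get(field) in (None, ""):
            errors.append(f"missing {field}")
    qty = row.get("qty")
    if not isinstance(qty, (int, float)) or qty < 0:
        errors.append("qty must be a non-negative number")
    return errors

good = {"order_id": 1, "sku": "A1", "qty": 2}
bad = {"order_id": None, "sku": "A1", "qty": -3}
```

Validation of this kind usually runs at ingestion, routing failing records to a quarantine table rather than silently loading them.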

Cradlepoint

Networking and Telecommunications

Boise

500+ Employees

221 Jobs

Key People
  • George Mulhern, CEO
  • Jeroen S. van Kooten, Chief Financial Officer
