Posted: 2 months ago
Work from Office
Full Time
About the Role

As a Senior Data Engineer at OptIQ, you will:
- Design, build, and optimize scalable data pipelines to process petabyte-scale datasets efficiently.
- Develop and maintain ETL pipelines for structured and unstructured data from multiple cloud and on-prem sources.
- Architect, implement, and manage Big Data frameworks such as Apache Spark, Hadoop, and other distributed systems.
- Set up end-to-end ML data pipelines, enabling seamless model training and inference.
- Collaborate with Data Scientists, DevOps, and Security teams to develop robust data pipelines that power AI-driven security solutions.
- Optimize query performance and data storage strategies for efficient retrieval and processing of massive datasets.
- Lead best practices for data engineering, automation, and CI/CD integration to streamline data workflows.

This role offers the chance to work on cutting-edge technologies, drive impactful solutions, and be a pivotal part of a company that is transforming the data security market.

What You'll Bring

Experience: 1-5 years in a Data Engineering role.

Technical Expertise:
- Big Data & Distributed Systems: Strong experience with Apache Spark, Hadoop, Kafka, and Flink.
- ETL Pipelines: Expertise in building and maintaining ETL workflows to process structured and unstructured data.
- ML Pipelines: Experience in developing data pipelines to support machine learning models, including feature engineering and model inference.
- Data Pipeline Architecture: Proven track record of setting up scalable data pipelines from scratch and handling petabyte-scale data loads.
- Cloud Data Platforms: Strong knowledge of AWS (Glue, Redshift, EMR), Azure (Data Lake, Synapse), or GCP (BigQuery, Dataflow).
- Data Storage & Optimization: Experience with columnar storage formats (Parquet, ORC), data partitioning, and indexing strategies.
- Programming: Proficiency in Python for data processing and scripting.
- CI/CD & DevOps: Familiarity with Terraform, CloudFormation, Docker, Kubernetes, and CI/CD pipelines for data workflows.
- Monitoring & Logging: Experience with Prometheus, Grafana, the ELK stack, or similar tools for monitoring data pipeline health.

Mindset:
- Entrepreneurial spirit with a "get-things-done" attitude and a commitment to delivering high-quality solutions.
- Comfort working in an unstructured, fast-paced startup environment.
- Passion for solving complex problems and driving innovation in a high-growth company.

Education: B.Tech in Computer Science or equivalent.
Mumbai, Bengaluru, Gurgaon
INR 32.5 - 37.5 Lacs P.A.