Staff Engineer - Data

Experience

8 years

Salary

3 Lacs

Posted: 12 hours ago | Platform: GlassDoor


Work Mode

On-site

Job Type

Part Time

Job Description

At SAFE Security, our vision is to be the Champions of a Safer Digital Future and the Catalysts of Change. We believe in empowering individuals and teams with the freedom and responsibility to align their goals, ensuring we all move forward together.

We operate with radical transparency, autonomy, and accountability—there’s no room for brilliant jerks. We embrace a culture-first approach, offering an unlimited vacation policy, a high-trust work environment, and a commitment to continuous learning. For us, Culture is Our Strategy—check out our Culture Memo to dive deeper into what makes SAFE unique.

We’re looking for a Staff Data Engineer who thrives on solving complex data challenges at scale. You’ll be the technical force multiplier, leading the design of data platforms, pipelines, and lakehouse architectures that fuel AI-driven cyber risk quantification globally. If you’ve been waiting for a role where you can set data strategy, lead bold ideas, and shape large-scale data ecosystems—this is it.

What You’ll Do:

    • Be the Data Tech Leader: Mentor engineers, champion data engineering best practices, and raise the bar for technical excellence across the org.
    • Architect at Scale: Design and lead petabyte-scale data ingestion, processing, and analytics platforms using Snowflake, Apache Spark, Iceberg, Parquet, and AWS-native services.
    • Own the Data Flow: Build streaming and batch pipelines handling billions of events daily, orchestrated through Apache Airflow for reliability and fault tolerance.
    • Set the Standards: Define frameworks for data modeling, schema evolution, partitioning strategies, and data quality/observability for analytics and AI workloads.
    • Code Like a Pro: Stay hands-on, writing high-performance data processing jobs in Python, SQL, and Scala, and conducting deep-dive reviews when it matters most.
    • Master the Lakehouse: Architect data lakes and warehouse solutions that balance cost, performance, and scalability, leveraging AWS S3 and Snowflake.
    • Solve Complex Problems: Debug and optimize long-running jobs, data-skew hotspots, and high-volume ETL bottlenecks with elegant, efficient solutions.
    • Collaborate and Influence: Work with the Product, AI/ML, and Platform teams to ensure that data solutions directly power real-time cyber risk analytics.
    • Innovate Constantly: Evaluate and introduce emerging data technologies (e.g., Flink, Druid, Rockset) to keep SAFE at the forefront of data engineering innovation.
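As a purely illustrative flavor of the partitioning work described above, here is a minimal, stdlib-only sketch of Hive-style date partitioning, the directory layout commonly used for Parquet files in an S3-backed lakehouse. The bucket name, table name, and partition keys are hypothetical, not SAFE's actual schema:

```python
from datetime import datetime, timezone

def partition_path(bucket: str, table: str, event_ts: float) -> str:
    """Compute a Hive-style partition prefix (dt=/hour=) for an event.

    Partitioning by date and hour keeps scans pruned to the time range a
    query actually needs -- a common layout for Parquet data on S3.
    """
    ts = datetime.fromtimestamp(event_ts, tz=timezone.utc)
    return (
        f"s3://{bucket}/{table}/"
        f"dt={ts:%Y-%m-%d}/hour={ts:%H}/"
    )

# An event at 2024-01-02 03:04:05 UTC lands under dt=2024-01-02/hour=03
print(partition_path("analytics-lake", "events", 1704164645.0))
```

In practice table formats like Iceberg track partitions in metadata rather than directory names, but the pruning principle is the same.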

What We’re Looking For:

    • 8+ years of experience in data engineering, with a proven track record of designing and scaling distributed data systems.
    • Deep expertise in big data processing frameworks (Apache Spark, Flink) and workflow orchestration (Airflow).
    • Strong hands-on experience with data warehousing (Snowflake) and data lakehouse architectures (Iceberg, Parquet).
    • Proficiency in Python, SQL, Scala, and Go/Node.js, with an ability to optimize large-scale ETL/ELT workloads.
    • Expertise in real-time data ingestion pipelines using Kafka or Kinesis, handling billions of events daily.
    • Experience operating in cloud-native environments (AWS) and leveraging services like S3, Lambda, ECS, Glue, and Athena.
    • Strong understanding of data modeling, schema design, indexing, and query optimization for analytical workloads.
    • Proven leadership in mentoring engineers, driving architectural decisions, and aligning data initiatives with product goals.
    • Experience in streaming architectures, CDC pipelines, and data observability frameworks.
    • Ability to navigate ambiguous, high-scale problems and lead teams toward innovative solutions.
    • Proficient in deploying containerized applications (Docker, Kubernetes, ECS).
    • Familiarity with AI coding assistants like Cursor, Claude Code, or GitHub Copilot.
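One skew-mitigation technique implied by the requirements above can be illustrated without a cluster. This is a hedged, minimal sketch of key salting, which splits one hot join key across several synthetic variants so no single worker receives all of its rows; the key name and salt count are illustrative only:

```python
import hashlib
from collections import Counter

def salted_key(key: str, row_id: int, num_salts: int = 4) -> str:
    """Append a deterministic salt so a hot key spreads over num_salts variants.

    In a distributed join, rows sharing a key land on one worker; salting a
    skewed key (and replicating the small join side per salt) rebalances work.
    """
    return f"{key}#{row_id % num_salts}"

def bucket_for(key: str, num_buckets: int = 4) -> int:
    """Deterministic bucket assignment, standing in for a shuffle partitioner."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_buckets

# 8 rows all share the hot key "tenant-42"; unsalted, they hit one bucket.
plain = Counter(bucket_for("tenant-42") for _ in range(8))
salted = Counter(bucket_for(salted_key("tenant-42", i)) for i in range(8))
print(dict(plain))   # every row lands in the same bucket
print(dict(salted))  # the four salted variants hash independently
```

Spark exposes the same idea declaratively via skew-join hints and adaptive query execution, but understanding the manual mechanics is what lets you debug it when the automation falls short.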

Preferred Qualifications:

    • Exposure to CI/CD pipelines, automated testing, and infrastructure-as-code for data workflows.
    • Familiarity with real-time analytics engines (Druid, Pinot, Rockset) or machine learning data pipelines.
    • Contributions to open-source data projects or thought leadership in the data engineering community.
    • Prior experience in cybersecurity, risk quantification, or other high-scale SaaS domains.

If you’re passionate about cyber risk, thrive in a fast-paced environment, and want to be part of a team that’s redefining security, we want to hear from you!
