Staff / Sr. Staff Engineer, DEM

8 - 13 years

25 - 30 Lacs

Posted: 4 days ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

About the role

Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience.

The Digital Experience Management (DEM) Engineering team is responsible for building data ingestion, analytics, APIs, and AI/ML capabilities on time-series network, user, and application telemetry data generated from real user monitoring (RUM), synthetic monitoring, and endpoint monitoring on the Netskope SASE platform. We work closely with engineers and the product team to build solutions that solve real-world problems for Network Operators and IT Admins.

What's in it for you

As part of the Digital Experience Management team, you will work on state-of-the-art, cloud-scale distributed systems at the intersection of networking, cloud security, and big data. You will help design and build systems that provide critical infrastructure for global Fortune 100 companies.

What you will be doing:

1. Architecting and Building Distributed Data Systems

  • Design and implement large-scale distributed platforms, microservices, and frameworks.
  • Build data ingestion pipelines that can handle millions of telemetry events daily, both streaming and batch.
  • Ensure systems are fault-tolerant, highly available, and cost-efficient at scale.

2. Translating Complex Business Needs into Software

  • Partner closely with the product team to understand complex operational and analytical requirements.
  • Convert these into usable, performant, and maintainable technical solutions.

3. Technical Leadership

  • Serve as a technical mentor and architectural guide for senior developers.
  • Lead architecture reviews, design discussions, and code reviews.
  • Influence engineering practices and promote best-in-class observability, reliability, and security.

4. Innovation in DEM and SASE

  • Build solutions that enhance user experience monitoring by correlating data across network, endpoint, and cloud layers.
  • Integrate AI/ML models for root cause analysis, anomaly detection, and forecasting on time series telemetry data.
  • Continuously optimize data reliability, latency, and insight accuracy.

Required skills and experience

Core Technical Expertise

  • 8+ years building scalable distributed systems in cloud-native environments.
  • Expert-level ability to design and deliver complex technical solutions from architecture to production.
  • Hands-on experience with data pipelines that handle massive throughput, both streaming (Kafka, Flink, Spark) and batch (ETL frameworks).
  • Big Data Architecture expertise: data modeling, ingestion, transformation, and storage optimization (especially with systems like ClickHouse, Redis, Kafka).
  • Experience with REST / OpenAPI.

Programming and Systems Design

  • Strong in Go, Python, and Java, with advanced system design and algorithmic problem-solving skills.
  • Deep understanding of networking and security protocols: TCP/IP, TLS, IPsec, GRE, PKI, DNS, BGP, and routing.
  • Strong grasp of web performance and telemetry concepts (latency, page load, route optimization).

Cloud, Containerization, and SRE

  • Proven experience designing/deploying on AWS or other cloud providers.
  • Expertise in Docker and Kubernetes orchestration.
  • Deep understanding of SRE principles: monitoring, alerting, SLIs/SLOs, and incident management.
  • History of driving performance improvements, cost optimization, and reliability.

Leadership and Communication

  • Ability to mentor, influence, and set technical direction across teams.
  • Demonstrated ownership of a major product area.
  • Excellent communication and documentation skills for diverse audiences.
  • Proven track record of cross-functional collaboration with product, operations, and data science teams.

Good to have

  • Hands-on experience building APM, NPM, or DEM products.
  • Prior work with AI/ML for time series analytics (root cause, anomaly detection, forecasting).
  • Open source contributions related to big data, observability, or distributed systems.
  • Advanced degree (MSCS or equivalent).

What Makes This Role Unique

This is not just another backend or data role: it sits at the intersection of cloud, network, and data intelligence.
You'll be shaping the core observability and performance layer for some of the world's largest enterprise networks.

  • It's deeply technical, but also strategic and influential.
    It blends big data, cloud-native distributed systems, and AI/ML insights, all critical to Netskope's SASE vision.

Education

  • BSCS or equivalent required, MSCS or equivalent strongly preferred

#LI-JB3

NetSkope Software | Cloud Security | San Francisco