About the Position
This is a technical leadership opportunity for experienced Software Engineers to join our fast-growing Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this high-impact role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As a member of the Data Platform team, you will be responsible for designing, building, and deploying the systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, Elasticsearch, and Snowflake.
We are looking for experienced Software Engineers who can help architect, build, deploy, and optimize our streaming infrastructure, and own it end to end. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security, and to expand that core competency across the rest of engineering. You will have an outsized impact on the direction, design, and implementation of the solutions to these problems.
Job Duties and Responsibilities:
- Design, implement and own data-intensive, high-performance, scalable platform components
- Work with engineering teams, architects, and cross-functional partners to help drive the technical vision
- Conduct and participate in design reviews, code reviews, analysis, and performance tuning
- Coach and mentor engineers to help scale up the engineering organization
- Debug critical production issues across services and multiple levels of the stack
Required Knowledge, Skills, and Abilities:
- 4+ years of experience in an object-oriented language, preferably Java
- Hands-on experience with cloud-based distributed computing technologies, including:
  - Messaging systems such as Kinesis or Kafka
  - Data processing systems such as Flink, Spark, or Beam
  - Storage and compute systems such as Snowflake or Hadoop
  - Coordinators and schedulers such as those in Kubernetes, Hadoop, or Mesos
- Experience in developing and tuning highly scalable distributed systems
- Excellent grasp of software engineering principles
- In-depth understanding of multithreading, garbage collection and memory management
- Detail-oriented, with a proven ability to serve as the technical lead for a project
- Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management
Nice to Have:
- Maintained security, encryption, identity management, or authentication infrastructure
- Leveraged major public cloud providers to build mission-critical, high volume services
- Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, spanning both batch and online systems
- Contributed to the development of distributed systems, or operated one or more such systems (e.g., Kafka or Hadoop) at high volume or criticality
- Experience developing Kubernetes-based services on the AWS stack