Job Description
Data Operations Engineer
About the Role
Responsibilities
Operations Support
Monitor and triage production data pipelines, ingestion jobs, and transformation workflows (e.g., dbt, Fivetran, Snowflake tasks)
Manage and resolve data incidents and operational issues, working cross-functionally with platform, data, and analytics teams
Develop and maintain internal tools/scripts for observability, diagnostics, and automation of data workflows
Participate in on-call rotations to support platform uptime and SLAs
Data Platform Engineering Support
Help manage infrastructure-as-code configurations (e.g., Terraform for Snowflake, AWS, Airflow)
Support user onboarding, role-based access control (RBAC) permissioning, and account provisioning across data platforms
Assist with schema and pipeline changes, versioning, and documentation
Assist with setting up monitoring for new pipelines in Metaplane
Data & Analytics Engineering Support
Diagnose model failures and upstream data issues
Collaborate with analytics teams to validate data freshness, quality, and lineage
Coordinate and perform backfills, schema adjustments, and reprocessing when needed
Manage operational aspects of source ingestion (e.g., REST APIs, batch jobs, database replication, Kafka)
MLOps & Data Science Infrastructure
Collaborate with the data science team to operationalize and support ML pipelines, removing the burden of infrastructure ownership from the team
Monitor ML batch and streaming jobs (e.g., model scoring, feature engineering, data preprocessing)
Maintain and improve scheduling, resource management, and observability for ML workflows (e.g., using Airflow, SageMaker, or Kubernetes-based tools)
Help manage model artifacts, metadata, and deployment environments to ensure reproducibility and traceability
Support the transition of ad hoc or experimental pipelines into production-grade services
Qualifications
Required Qualifications
At least 4 years of experience in data engineering, DevOps, or data operations roles
Solid understanding of modern data stack components (Snowflake, dbt, Airflow, Fivetran, cloud storage)
Proficiency with SQL and comfort debugging data transformations or analytic queries
Basic scripting/programming skills (e.g., Python, Bash) for automation and tooling
Familiarity with version control (Git) and CI/CD pipelines for data projects
Strong troubleshooting and communication skills; you enjoy helping others and resolving issues
Experience with infrastructure-as-code (Terraform, CloudFormation)
Familiarity with observability tools such as Datadog
Exposure to data governance tools and concepts (e.g., data catalogs, lineage, access control)
Understanding of ELT best practices and schema evolution in distributed data systems