Experience: 3 - 8 years

Salary: 7 - 11 Lacs

Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

About the Role
Are you ready to help us build the core data products that will shape the future of research publishing?
Springer Nature is seeking a Data Engineer to join the Analytics Center of Excellence team within SN Data - Data Competence Center. You will work closely with other engineers, data scientists and analysts to build and scale the data infrastructure behind our analytics products, which enable users to make data-driven decisions that improve our products, services and platforms.
We're looking for a blend of skills and attributes that make you a great fit for this role. If you don't tick every box, don't worry - we provide tailored learning and development programs to help you grow and succeed with us.
Role Responsibilities
  • Design, implement, and optimize production data solutions, such as scalable data pipelines that create data products to meet business use cases. Existing data pipelines cover both batch and streaming data using Apache Beam (Dataflow), enabling efficient ETL/ELT of structured and unstructured data; a minimal pipeline sketch follows this list.
  • Architect and maintain end-to-end data infrastructure on Google Cloud Platform (GCP), ensuring quality checks, performance, scalability, and security.
  • Develop and manage data orchestration workflows using Apache Airflow, automating pipeline scheduling and monitoring.
  • Build and deploy containerized backend Python-based APIs backed by databases.
  • Work collaboratively with other engineers, using techniques like pair and ensemble programming, to foster collective code ownership.
  • Collaborate with Data Scientists, Analysts, and other team members as relevant to translate business requirements into scalable data solutions.
  • Maintain and optimize CI/CD pipelines, ensuring secure and automated deployments.
  • Support and enhance ML/AI solutions at scale, including deployment and monitoring of models in production environments.
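As an illustration of the pipeline work described above, here is a minimal, hypothetical Apache Beam batch pipeline sketch in Python; the runner flags, bucket, project, dataset, table, and field names are placeholders and are not taken from this posting.

# Hypothetical sketch: batch-load newline-delimited JSON events from Cloud Storage
# into an existing BigQuery table with Apache Beam. All resource names are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def to_row(line: str) -> dict:
    """Parse one JSON record and keep only the fields the target table needs."""
    record = json.loads(line)
    return {"article_id": record["id"], "views": int(record.get("views", 0))}


def run() -> None:
    # Pass --runner=DataflowRunner, --project, --region, etc. on the command line
    # to execute on Dataflow; the local DirectRunner is used by default.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")
            | "ParseAndProject" >> beam.Map(to_row)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.article_views",
                # The target table is assumed to already exist, so no schema is supplied.
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
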
Experience, Skills & Qualifications
Essential
  • Bachelor's degree in Engineering, Computer Science, or a related quantitative field.
  • 3+ years of experience in data engineering with a strong focus on a cloud platform (GCP experience preferred, including BigQuery, Dataflow, Dataform, Cloud Functions, Cloud Run, Pub/Sub, and Cloud Composer; AWS/Azure experience is also welcome).
  • Strong SQL skills and proficiency in programming languages such as Python (see the query sketch after this list).
  • Experience managing data pipelines using tools like Apache Beam, Airflow, and Docker/Kubernetes.
  • Experience with CI/CD tools, Terraform, and GitHub Actions.
  • Excellent problem-solving skills and ability to work independently in a fast-paced environment.
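To make the SQL-plus-Python expectation above concrete, here is a minimal, hypothetical sketch of running a parameterised BigQuery query from Python with the google-cloud-bigquery client; the project, dataset, table, and column names are invented for illustration.

# Hypothetical sketch: run a parameterised BigQuery query from Python.
from google.cloud import bigquery


def top_journals(min_downloads: int) -> list[dict]:
    client = bigquery.Client()  # uses application-default credentials
    query = """
        SELECT journal_id, SUM(downloads) AS total_downloads
        FROM `example-project.analytics.usage`
        GROUP BY journal_id
        HAVING total_downloads >= @min_downloads
        ORDER BY total_downloads DESC
        LIMIT 10
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("min_downloads", "INT64", min_downloads)
        ]
    )
    # Execute the query, wait for the result, and return plain dicts.
    return [dict(row) for row in client.query(query, job_config=job_config).result()]


if __name__ == "__main__":
    for row in top_journals(1000):
        print(row)
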
Desirable
  • An understanding of decentralized Data Mesh and Data Product architecture principles.
  • Experience building and testing data pipelines with DBT.
  • Exposure to Vertex AI, Dash, and other ML/AI tools.
  • Experience with DevOps practices, infrastructure as code, and secure deployment workflows.
  • Strong communication and collaboration skills.
