Experience: 2 - 7 years

Salary: 4 - 9 Lacs

Posted: 2 weeks ago | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

What You'll Do


  • Design, build, and maintain data pipelines, ETL/ELT workflows, and scalable microservices.



  • Develop complex web-scraping jobs (Playwright) and real-time pipelines (Kafka, queues, Flink).



  • Develop end-to-end microservices with backend (Java 5+, Python 5+, Spring Batch 2+) and frontend (React or any).



  • Deploy, publish, and operate services in AWS ECS/Fargate using CI/CD pipelines (Jenkins, GitOps).



  • Architect and optimize data storage models in SQL (MySQL, PostgreSQL) and NoSQL stores.



  • Implement web scraping and external data ingestion pipelines.



  • Enable Databricks and PySpark-based workflows for large-scale analytics.



  • Build advanced data search capabilities (fuzzy matching, vector similarity search, semantic retrieval).



  • Apply ML techniques (scikit-learn, classification algorithms, predictive modeling) to data-driven solutions.



  • Implement observability, debugging, monitoring, and alerting for deployed services.



  • Create Mermaid sequence diagrams, flowcharts, and dataflow diagrams to document system architecture and workflows.



  • Drive best practices in full-stack data-service development, including architecture, testing, and documentation.
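The responsibilities above include building fuzzy-matching search. As a minimal illustrative sketch only (not this team's actual stack, which may use trigram indexes or vector similarity), Python's standard-library difflib can rank candidates by string similarity:

```python
from difflib import SequenceMatcher

def fuzzy_match(query: str, candidates: list[str], threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank candidate strings by similarity to the query.

    Uses difflib's Ratcliff/Obershelp ratio; production search would
    typically use a dedicated index (trigrams, embeddings) instead.
    """
    scored = [
        (c, SequenceMatcher(None, query.lower(), c.lower()).ratio())
        for c in candidates
    ]
    return sorted(
        [(c, s) for c, s in scored if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# Example: match a truncated name against known entries
names = ["PostgreSQL", "MySQL", "Postgres", "Cassandra"]
matches = fuzzy_match("Postgre", names)
```

Here `fuzzy_match` and the sample names are invented for the example; only close matches ("Postgres", "PostgreSQL") clear the 0.6 threshold.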



What We're Looking For

Minimum Qualifications


  • 2+ years of experience as a Data Engineer or in a backend software engineering role.



  • Strong programming skills in Python, Scala, or Java.



  • Hands-on experience with HBase or similar NoSQL columnar stores.



  • Hands-on experience with distributed data systems like Spark, Kafka, or Flink.



  • Proficient in writing complex SQL and optimizing queries for performance.



  • Experience building and maintaining robust ETL/ELT pipelines in production.



  • Familiarity with workflow orchestration tools (Airflow, Dagster, or similar).



  • Understanding of data modeling techniques (star schema, dimensional modeling, etc.).



  • Familiarity with CI/CD pipelines (Jenkins or similar).



  • Ability to visualize and communicate architectures using Mermaid diagrams.
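The data-modeling and SQL items above can be illustrated with a toy star schema. The sketch below uses Python's built-in sqlite3; the tables (`fact_sales`, `dim_product`) and all columns are invented for the example:

```python
import sqlite3

# In-memory database holding a toy star schema:
# one fact table (sales events) and one dimension table (products).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
    INSERT INTO fact_sales VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 7.5);
""")

# Dimensional query: revenue per category, joining fact to dimension.
rows = conn.execute("""
    SELECT p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.category
    ORDER BY revenue DESC
""").fetchall()
```

The same join-fact-to-dimension shape scales to the MySQL/PostgreSQL work the role describes; sqlite3 is used here only so the sketch is self-contained.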



Bonus Points


  • Experience working with Databricks, dbt, Terraform, or Kubernetes.



  • Familiarity with streaming data pipelines or real-time processing.



  • Exposure to data governance frameworks and tools.



  • Experience supporting data products or ML pipelines in production.



  • Strong understanding of data privacy, security, and compliance best practices.
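As a rough illustration of the streaming items above, the sketch below mimics the shape of a consume loop using only Python's standard-library queue and threading; a real deployment would use Kafka or Flink, and the event fields here are invented:

```python
import queue
import threading

# Toy stand-in for a streaming pipeline: a producer thread emits events
# onto a bounded queue and a consumer aggregates them, mimicking a
# Kafka-style consume loop (the broker itself is omitted).
events = queue.Queue(maxsize=100)
SENTINEL = None  # end-of-stream marker
totals = {}

def producer():
    for user, amount in [("a", 1), ("b", 2), ("a", 3)]:
        events.put({"user": user, "amount": amount})
    events.put(SENTINEL)

def consumer():
    while True:
        event = events.get()
        if event is SENTINEL:
            break
        # Running per-user aggregation, as a stream processor might keep.
        totals[event["user"]] = totals.get(event["user"], 0) + event["amount"]

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
# totals now holds per-user running sums.
```

The bounded queue gives backpressure, and the sentinel plays the role of a graceful-shutdown signal, two concerns any real-time pipeline must handle.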



Why You'll Love Working Here

  • Data with purpose:


  • Modern tooling:


  • Collaborative culture:


Data Engineer, DataOps, ECS, ETL, Python, Spring Boot

Rarr Technologies

Information Technology

San Francisco
