Scala Data Engineer (Databricks, Cloud & Big Data Platform Expertise)

5 - 10 years

19 - 25 Lacs

Posted: 1 day ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Summary

Synechron is seeking a skilled Scala Engineer to join our Data & AI (CEDA) team, supporting the development and deployment of scalable big data solutions. The successful candidate will leverage expertise in Scala and Databricks to build robust, extensible data solutions that serve global stakeholders with minimal localization. This role plays a critical part in enabling data-driven decision-making, platform engineering, and cloud-native development, contributing directly to our organization's strategic data initiatives.


Software Requirements

Required Software Skills:

  • Scala: Proven experience in developing production-level applications
  • Databricks: Solid understanding of Databricks platform operations and integration
  • SQL / Spark SQL: Mastery in writing optimized queries and transformations for big data processing
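As a rough illustration of the kind of Scala and Spark SQL work the role involves, the sketch below shows the common group-and-aggregate pattern on plain Scala collections so it runs without a cluster; the case class and column names are hypothetical, and in production this logic would run on Spark DataFrames.

```scala
// Illustrative only: the groupBy-and-aggregate pattern common in Spark jobs,
// shown on in-memory Scala collections. Names ("region", "amount") are made up.

case class Sale(region: String, amount: Double)

// Total sales per region, analogous to
// df.groupBy("region").agg(sum("amount")) in Spark SQL.
def totalByRegion(sales: Seq[Sale]): Map[String, Double] =
  sales.groupBy(_.region).view.mapValues(_.map(_.amount).sum).toMap

val sales  = Seq(Sale("EMEA", 100.0), Sale("APAC", 50.0), Sale("EMEA", 25.0))
val totals = totalByRegion(sales)
// totals: Map("EMEA" -> 125.0, "APAC" -> 50.0)
```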

Preferred Software Skills:

  • Java / Python: Experience in programming with Java or Python for data engineering tasks
  • GitLab or equivalent CI/CD tools: Ability to develop and manage CI/CD pipelines
  • Containerization: Experience with Docker, Kubernetes for deployment and runtime environments

Overall Responsibilities
  • Design, develop, and optimize scalable big data pipelines using Apache Spark and Databricks
  • Implement complex data transformations and analysis solutions in Scala and Spark
  • Collaborate with cross-functional teams to understand data requirements and provide effective technical solutions
  • Develop, maintain, and enhance CI/CD pipelines to support continuous integration and deployment processes
  • Contribute to platform engineering activities on cloud platforms, particularly Microsoft Azure
  • Assist in containerizing applications and managing orchestration using Kubernetes
  • Ensure solutions are globally deployable with minimal localization, adhering to security and compliance standards
  • Participate in Agile sprint planning, stand-ups, and retrospective activities to promote iterative development
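The pipeline responsibilities above can be sketched, in simplified form, as a chain of pure transformation stages. All names here are hypothetical; in the actual role these stages would be Spark Dataset transformations rather than functions over in-memory collections.

```scala
// A minimal sketch of a staged data pipeline: parse -> clean.
// Plain functions over Seq keep the sketch self-contained and runnable.

case class RawEvent(line: String)
case class Event(user: String, value: Int)

// Parse raw "user,value" lines, dropping malformed records.
def parse(raw: Seq[RawEvent]): Seq[Event] =
  raw.flatMap { r =>
    r.line.split(",") match {
      case Array(u, v) => v.toIntOption.map(Event(u, _))
      case _           => None
    }
  }

// Keep only positive values (a stand-in for data-quality rules).
def clean(events: Seq[Event]): Seq[Event] = events.filter(_.value > 0)

// Compose the stages into one pipeline.
def pipeline(raw: Seq[RawEvent]): Seq[Event] = clean(parse(raw))

val out = pipeline(Seq(RawEvent("alice,3"), RawEvent("bob,-1"), RawEvent("garbage")))
// out: Seq(Event("alice", 3)) -- the negative and malformed records are dropped
```

Keeping each stage a pure function makes the stages individually unit-testable, which carries over directly to Spark jobs structured as `Dataset => Dataset` transformations.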

Technical Skills (By Category)

Programming Languages:

  • Required: Scala, SQL (including Spark SQL)
  • Preferred: Java, Python

Databases & Data Management:

  • Experience with relational and NoSQL databases
  • Proficient in writing and optimizing complex SQL/Spark queries and transformations

Cloud Technologies:

  • Strong experience with Microsoft Azure cloud platform (preferred)
  • Familiarity with cloud architecture best practices for scalability and security

Frameworks and Libraries:

  • Apache Spark, Databricks platform, Kubernetes (preferred)

Development Tools & Methodologies:

  • GitLab or similar version control and CI/CD systems
  • Agile development practices

Security & Compliance:

  • Knowledge of secure coding practices and data privacy standards (if applicable)

Experience Requirements
  • Minimum of 5 years in data engineering or related big data development roles
  • Proven experience in executing complex data analysis and designing scalable data pipelines
  • Demonstrated expertise in building data transformations within SQL and Spark environments
  • Hands-on experience with platform engineering on cloud providers, particularly Microsoft Azure, is advantageous
  • Experience with containerization and orchestration tools like Docker and Kubernetes

Alternative Experience Pathways:

  • Candidates with extensive data engineering experience using Scala and Spark, even if cloud experience is limited, are encouraged to apply.
  • Experience in large-scale enterprise environments, financial institutions, or data-focused industries is preferred but not mandatory.

Day-to-Day Activities
  • Developing and maintaining big data processing pipelines using Spark and Databricks
  • Collaborating with data scientists, analysts, and other stakeholders to refine data requirements
  • Implementing and optimizing SQL/Spark queries for performance and scalability
  • Building and deploying CI/CD pipelines supporting automated testing and deployment
  • Containerizing applications and managing deployments on Kubernetes clusters
  • Participating in Agile ceremonies, planning sprints, and providing estimates on deliverables
  • Conducting code reviews, ensuring best practices, and maintaining high code quality

Qualifications

Educational Requirements:

  • Bachelor's or Master's degree in Computer Science, Information Technology, Mathematics, or related fields

Certifications (Preferred):

  • Azure Data Engineer Certification or similar cloud data certifications

Training & Development:

  • Commitment to ongoing professional development in big data, cloud technologies, and engineering practices

Professional Competencies
  • Strong problem-solving and analytical capabilities in complex data environments
  • Effective communication skills, capable of building collaborative relationships with stakeholders
  • Ability to work both independently and as part of a team in an Agile setting
  • Adaptability to evolving technologies and project requirements
  • Proactive attitude towards learning and applying new tools and frameworks
  • Time management skills, with an emphasis on prioritization and meeting deadlines

Synechron

Information Technology and Services

New York
