Big Data Engineer - Python/SQL/ETL

Experience: 7 years

Salary: 0 Lacs

Posted: 18 hours ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

Key Responsibilities

  • Design, develop, and support robust ETL pipelines to extract, transform, and load data into analytical products that drive strategic organizational goals.
  • Develop and maintain data workflows on platforms like Databricks and Apache Spark using Python and Scala.
  • Create and support data visualizations using tools such as MicroStrategy, Power BI, or Tableau, with a preference for MicroStrategy.
  • Implement streaming data solutions utilizing frameworks like Kafka for real-time data processing.
  • Collaborate with cross-functional teams to gather requirements, design solutions, and ensure smooth data operations.
  • Manage data storage and processing in cloud environments, with strong experience in AWS cloud services.
  • Use knowledge of data warehousing, data modeling, and SQL to optimize data flow and accessibility.
  • Develop scripts and automation tools using Linux shell scripting and other languages as needed.
  • Ensure continuous integration and continuous delivery (CI/CD) practices are followed for data pipeline deployments using containerization and orchestration technologies.
  • Troubleshoot production issues, optimize system performance, and ensure data accuracy and integrity.
  • Work effectively within Agile development teams and contribute to sprint planning, reviews, and retrospectives.

Skills & Experience

  • 7+ years of experience in technology with a focus on application development and production support.
  • At least 5 years of experience in developing ETL pipelines and data engineering workflows.
  • At least 3 years of hands-on experience in ETL development and support using Python/Scala on Databricks/Spark platforms.
  • Strong experience with data visualization tools such as MicroStrategy, Power BI, or Tableau (MicroStrategy preferred).
  • Proficient in Python, Apache Spark, Hive, and SQL.
  • Solid understanding of data warehousing concepts, data modeling techniques, and analytics tools.
  • Experience working with streaming data frameworks such as Kafka.
  • Working knowledge of Core Java, Linux, SQL, and at least one scripting language.
  • Experience with relational databases, preferably Oracle.
  • Hands-on experience with AWS cloud platform services related to data engineering.
  • Familiarity with CI/CD pipelines, containerization, and orchestration tools (e.g., Docker, Kubernetes).
  • Exposure to Agile development methodologies.
  • Strong interpersonal, communication, and collaboration skills.
  • Ability and eagerness to quickly learn and adapt to new technologies.

Qualifications

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • Experience working in large-scale, enterprise data environments.
  • Prior experience with cloud-native big data solutions and data governance best practices.
(ref:hirist.tech)
