Sr. Data Engineer

4 - 6 years

8 - 12 Lacs

Vijay Nagar, Indore, Madhya Pradesh

Posted: 2 weeks ago | Platform: Indeed

Skills Required

Python, SQL, ETL, PySpark, Airflow, AWS (Glue, Redshift), Azure, NumPy, SQLAlchemy, PostgreSQL, MySQL, Git, Hadoop, Kafka, Tableau, Power BI, DevOps, data governance

Work Mode

On-site

Job Type

Full Time

Job Description

Role: Data Engineer
Location: Indore, Madhya Pradesh
Experience: 4-6 Years
Job Type: Full-time

Job Summary:
As a Data Engineer with a focus on Python, you will play a crucial role in designing, developing, and maintaining data pipelines and ETL processes. You will work with large-scale datasets and leverage modern tools such as PySpark, Airflow, and AWS Glue to automate and orchestrate data processes. Your work will support critical decision-making by ensuring data accuracy, accessibility, and efficiency across the organisation.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines using Python.
- Develop ETL processes for extracting, transforming, and loading data.
- Optimise SQL queries and database schemas for enhanced performance.
- Collaborate with data scientists, analysts, and stakeholders to understand data needs.
- Implement and monitor data quality checks, and resolve any issues.
- Automate data processing tasks with Python scripts and tools.
- Ensure data security, integrity, and regulatory compliance.
- Document data processes, workflows, and system designs.

Primary Skills:
- Python Proficiency: Experience with Python, including libraries such as Pandas, NumPy, and SQLAlchemy.
- PySpark: Hands-on experience in distributed data processing using PySpark.
- AWS Glue: Practical knowledge of AWS Glue for building serverless ETL pipelines.
- SQL Expertise: Advanced knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Data Pipeline Development: Proven experience in building and maintaining data pipelines and ETL processes.
- Cloud Data Platforms: Familiarity with cloud-based platforms such as AWS Redshift, Google BigQuery, or Azure Synapse.
- Data Warehousing: Knowledge of data warehousing and data modelling best practices.
- Version Control: Proficiency with Git.

Preferred Skills:
- Big Data Technologies: Experience with tools such as Hadoop or Kafka.
- Data Visualisation: Familiarity with visualisation tools (e.g., Tableau, Power BI).
- DevOps Practices: Understanding of CI/CD pipelines and DevOps practices.
- Data Governance: Knowledge of data governance and security best practices.

Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,200,000.00 per year
Schedule: Day shift
Work Location: In person
Application Deadline: 05/06/2025
Expected Start Date: 28/05/2025
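The core of the role described above is extract-transform-load work with data quality checks. As a rough illustration of that pattern, here is a minimal Python sketch using only the standard library's sqlite3 module; the source rows, the "orders" table, and all field names are hypothetical stand-ins for a real data source, and a production pipeline would use the tools named in the posting (Pandas, PySpark, AWS Glue, Airflow) instead.

```python
import sqlite3

def extract():
    # Hypothetical source data; in practice this would come from an
    # API, file, or upstream database.
    return [
        {"order_id": 1, "amount": "250.00", "region": "indore"},
        {"order_id": 2, "amount": "99.50", "region": "bhopal"},
        {"order_id": 2, "amount": "99.50", "region": "bhopal"},  # duplicate row
    ]

def transform(rows):
    # Cast types, normalise casing, and drop duplicate order_ids --
    # a simple stand-in for the "data quality checks" responsibility.
    seen, cleaned = set(), []
    for row in rows:
        if row["order_id"] in seen:
            continue
        seen.add(row["order_id"])
        cleaned.append({
            "order_id": row["order_id"],
            "amount": float(row["amount"]),
            "region": row["region"].title(),
        })
    return cleaned

def load(rows, conn):
    # Load the cleaned rows into a relational table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount, :region)", rows
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

The same three-stage shape scales up directly: `extract` becomes a PySpark or Glue read, `transform` becomes DataFrame operations, and an Airflow DAG schedules the stages.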
