4 Python Spark Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6 - 10 years

0 - 2 Lacs

Gurugram

Remote

We are seeking an experienced AWS Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience building and managing scalable data pipelines on AWS using Databricks, along with a deep understanding of the Software Development Life Cycle (SDLC). This role plays a critical part in shaping our data architecture, driving data quality, and ensuring the reliable and efficient flow of data throughout our systems.

Required Skills:
- 7+ years of comprehensive experience as a Data Engineer, with expertise in AWS services (S3, Glue, Lambda, etc.).
- In-depth knowledge of Databricks, pipeline development, and data engineering.
- 2+ years of experience working with Databricks for data processing and analytics.
- Ability to architect and design pipelines, e.g. Delta Live Tables (a minimal sketch follows below).
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Experience with ETL/ELT tools and processes in a cloud environment.
- Familiarity with Big Data processing frameworks (e.g., Apache Spark).
- Experience with data modeling, data warehousing, and building scalable architectures.
- Understanding and implementation of security aspects when consuming data from different sources.

Preferred Qualifications:
- Experience with Apache Airflow or other workflow orchestration tools; Terraform, Python, and Spark are preferred.
- AWS Certified Solutions Architect, AWS Certified Data Analytics Specialty, or similar certifications.
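For context only, a minimal sketch of what a Delta Live Tables pipeline can look like on Databricks; the S3 path and table names are hypothetical placeholders, not details from the posting:

```python
import dlt
from pyspark.sql import functions as F

# `spark` is provided by the Delta Live Tables runtime when this file is
# attached to a DLT pipeline; the S3 path below is a hypothetical placeholder.

@dlt.table(comment="Raw events ingested incrementally from S3 via Auto Loader")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("s3://example-bucket/events/")
    )

@dlt.table(comment="Cleaned events with a basic data quality expectation")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def clean_events():
    # Read the upstream table as a stream and stamp each row on ingestion.
    return dlt.read_stream("raw_events").withColumn("ingested_at", F.current_timestamp())
```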

Posted 2 weeks ago

5 - 9 years

5 - 14 Lacs

Chennai, Bengaluru, Kolkata

Work from Office

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data processing pipelines using PySpark on AWS EMR.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions.
- Develop complex SQL queries to extract insights from Hive tables and perform analysis.
- Ensure efficient data processing by optimizing Spark jobs through tuning parameters such as spark.executor.cores and spark.executor.memory (a minimal configuration sketch follows below).

Desired Candidate Profile:
- 5-9 years of experience working with PySpark, Python, Spark (Scala), SQL, and Hive.
- Strong understanding of big data technologies such as the Hadoop Distributed File System (HDFS) and the ability to work with large datasets.
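As a point of reference, a minimal sketch of how the executor tuning parameters named above might be set when building a SparkSession for an EMR job; the application name, table, and values are assumptions for illustration, not figures from the posting:

```python
from pyspark.sql import SparkSession

# Application name, Hive table, and tuning values are assumptions for illustration.
spark = (
    SparkSession.builder
    .appName("example-emr-pyspark-job")
    .enableHiveSupport()                            # allow querying Hive tables on EMR
    .config("spark.executor.cores", "4")            # cores per executor (assumed)
    .config("spark.executor.memory", "8g")          # executor heap size (assumed)
    .config("spark.sql.shuffle.partitions", "200")  # shuffle parallelism (assumed)
    .getOrCreate()
)

# A simple aggregation over a hypothetical Hive table.
summary = spark.sql(
    "SELECT customer_id, SUM(amount) AS total_amount "
    "FROM sales GROUP BY customer_id"
)
summary.show()
```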

Posted 2 months ago

4 - 9 years

6 - 16 Lacs

Bengaluru

Work from Office

Job Description

Responsibilities:
- Maintain ETL pipelines using SQL, Spark, AWS Glue, and Redshift.
- Optimize existing pipelines for performance and reliability.
- Troubleshoot and resolve UAT / PROD pipeline issues, ensuring minimal downtime.
- Implement data quality checks and monitoring to ensure data accuracy (an illustrative Glue sketch follows below).
- Collaborate with other teams and stakeholders to understand data requirements.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Extensive experience (e.g., 6+ years) designing, developing, and maintaining ETL pipelines.
- Strong proficiency in SQL and experience with relational databases (e.g., Redshift, PostgreSQL).
- Hands-on experience with Apache Spark and distributed computing frameworks.
- Solid understanding of AWS Glue and other AWS data services.
- Experience with data warehousing concepts and best practices.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Experience with version control systems (e.g., Git).
- Experience with workflow orchestration tools (e.g., Airflow).

Preferred Skills and Experience:
- Experience with other cloud platforms (e.g., Azure, GCP).
- Knowledge of data modeling and data architecture principles.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Familiarity with Agile development methodologies.
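For illustration only, a minimal sketch of an AWS Glue job that reads from the Glue Data Catalog, applies a simple data quality filter, and writes to Redshift; the database, table, connection, and S3 names are hypothetical placeholders, not details from the posting:

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read the source table registered in the Glue Data Catalog (names assumed).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="orders"
)

# Basic data quality check: drop rows that are missing the primary key.
valid_orders = orders.toDF().filter("order_id IS NOT NULL")

# Write the cleaned data to Redshift through a pre-configured Glue connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(valid_orders, glue_context, "valid_orders"),
    catalog_connection="example-redshift-connection",
    connection_options={"dbtable": "public.orders_clean", "database": "analytics"},
    redshift_tmp_dir="s3://example-bucket/tmp/",
)
```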

Posted 3 months ago

4 - 9 years

12 - 22 Lacs

Pune

Work from Office

Must-Have:
- Strong experience in Snowflake and SQL for data manipulation and engineering tasks.
- Strong expertise in SQL and Snowflake architecture.
- Experience with one or more cloud platforms, preferably Azure, AWS, or GCP.
- In-depth knowledge of data warehousing concepts and experience building and managing data warehouses.
- Knowledge of ETL concepts to manage data migration into Snowflake: extracting data from various sources, transforming it into a usable form, and loading it into the Snowflake platform (a minimal loading sketch follows below).

Good-to-Have:
- Knowledge of data architecture, data governance, data quality standards, and data security practices to ensure compliance and protection of sensitive information.
- Strong understanding of data modeling concepts and techniques to create efficient and scalable data models.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud for deploying, managing, and scaling data infrastructure and services.
- Experience with version control systems such as Git for code management and collaboration.
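As an illustration of the load step described above, a minimal sketch that bulk-loads staged files into Snowflake with the Python connector; the account, credentials, stage, and table names are hypothetical placeholders:

```python
import snowflake.connector

# Connection details are assumptions for illustration; in practice credentials
# would come from a secrets manager, not hard-coded strings.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

# COPY INTO bulk-loads files already uploaded to a named stage into a table.
copy_sql = """
    COPY INTO sales_raw
    FROM @sales_stage/2024/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    ON_ERROR = 'CONTINUE'
"""

cur = conn.cursor()
try:
    cur.execute(copy_sql)
    print(cur.fetchall())  # COPY INTO returns one result row per loaded file
finally:
    cur.close()
    conn.close()
```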

Posted 3 months ago
