
4 PySpark Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Primary Skills: PySpark, Spark, and proficiency in SQL
Education: B.Tech / BCA / B.Sc / MCA / M.Tech / M.Sc
Experience: 5+ years
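The posting pairs PySpark with SQL proficiency, and in practice the same aggregation can be written through Spark's DataFrame API or as plain SQL. As a minimal, stdlib-only illustration of the SQL side (the `orders` table and its columns are invented for this sketch; on a cluster this would be a Spark SQL table):

```python
import sqlite3

# In-memory database standing in for a Spark SQL table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (city TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("Hyderabad", 120.0), ("Chennai", 80.0), ("Hyderabad", 50.0)],
)

# The kind of aggregation a PySpark role would express as
# df.groupBy("city").sum("amount") -- here in plain SQL.
rows = conn.execute(
    "SELECT city, SUM(amount) FROM orders GROUP BY city ORDER BY city"
).fetchall()
print(rows)  # [('Chennai', 80.0), ('Hyderabad', 170.0)]
```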

Posted 3 weeks ago

Apply

3.0 - 5.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Company: PulseData Labs Pvt Ltd (captive unit for URUS, USA)

About URUS: We are the URUS family (US), a global leader in products and services for agritech.

SENIOR DATA ENGINEER

This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will have expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. The position requires a proactive, results-oriented individual with a passion for data and a strong understanding of data warehousing principles.

Responsibilities

Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes.
- Troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.

Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL.
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.

Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.

Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain the existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.

Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, fast-paced environment, working both independently and with highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.

Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.

This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.

Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 2+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with the Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
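The "data quality checks and validations within ETL processes" responsibility is concrete enough to sketch. A minimal, library-free example of the validate-then-load pattern (field names and rules are invented for illustration; on Databricks this would typically run as a PySpark transformation before the load step):

```python
def validate_rows(rows, required=("id", "amount")):
    """Split rows into (good, bad) by simple quality rules.

    Rules (illustrative): required fields present and non-None,
    and 'amount' must be a non-negative number.
    """
    good, bad = [], []
    for row in rows:
        if any(row.get(f) is None for f in required):
            bad.append((row, "missing required field"))
        elif not isinstance(row["amount"], (int, float)) or row["amount"] < 0:
            bad.append((row, "invalid amount"))
        else:
            good.append(row)
    return good, bad

rows = [
    {"id": 1, "amount": 99.5},
    {"id": 2, "amount": None},   # rejected: missing amount
    {"id": 3, "amount": -4.0},   # rejected: negative amount
]
good, bad = validate_rows(rows)
print(len(good), len(bad))  # 1 2
```

Only the `good` partition would flow on to the warehouse; the `bad` partition is typically written to a quarantine table with its rejection reason.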

Posted 1 month ago

Apply

7.0 - 10.0 years

20 - 30 Lacs

Bengaluru

Hybrid

Company: PulseData Labs Pvt Ltd (captive unit for URUS, USA)

About URUS: We are the URUS family (US), a global leader in products and services for agritech.

SENIOR DATA ENGINEER

This role is responsible for the design, development, and maintenance of data integration and reporting solutions. The ideal candidate will have expertise in Databricks, strong skills in SQL Server, SSIS, and SSRS, and experience with other modern data engineering tools such as Azure Data Factory. The position requires a proactive, results-oriented individual with a passion for data and a strong understanding of data warehousing principles.

Responsibilities

Data Integration
- Design, develop, and maintain robust and efficient ETL pipelines and processes on Databricks.
- Troubleshoot and resolve Databricks pipeline errors and performance issues.
- Maintain legacy SSIS packages for ETL processes.
- Troubleshoot and resolve SSIS package errors and performance issues.
- Optimize data flow performance and minimize data latency.
- Implement data quality checks and validations within ETL processes.

Databricks Development
- Develop and maintain Databricks pipelines and datasets using Python, Spark, and SQL.
- Migrate legacy SSIS packages to Databricks pipelines.
- Optimize Databricks jobs for performance and cost-effectiveness.
- Integrate Databricks with other data sources and systems.
- Participate in the design and implementation of data lake architectures.

Data Warehousing
- Participate in the design and implementation of data warehousing solutions.
- Support data quality initiatives and implement data cleansing procedures.

Reporting and Analytics
- Collaborate with business users to understand data requirements for department-driven reporting needs.
- Maintain the existing library of complex SSRS reports, dashboards, and visualizations.
- Troubleshoot and resolve SSRS report issues, including performance bottlenecks and data inconsistencies.

Collaboration and Communication
- Comfortable in an entrepreneurial, self-starting, fast-paced environment, working both independently and with highly skilled teams.
- Collaborate effectively with business users, data analysts, and other IT teams.
- Communicate technical information clearly and concisely, both verbally and in writing.
- Document all development work and procedures thoroughly.

Continuous Growth
- Keep abreast of the latest advancements in data integration, reporting, and data engineering technologies.
- Continuously improve skills and knowledge through training and self-learning.

This job description reflects management's assignment of essential functions; it does not prescribe or restrict the tasks that may be assigned.

Requirements
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 7+ years of experience in data integration and reporting.
- Extensive experience with Databricks, including Python, Spark, and Delta Lake.
- Strong proficiency in SQL Server, including T-SQL, stored procedures, and functions.
- Experience with SSIS (SQL Server Integration Services) development and maintenance.
- Experience with SSRS (SQL Server Reporting Services) report design and development.
- Experience with data warehousing concepts and best practices.
- Experience with the Microsoft Azure cloud platform and Microsoft Fabric desirable.
- Strong analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Ability to work independently and as part of a team.
- Experience with Agile methodologies.
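Delta Lake experience, listed in the requirements above, usually centers on MERGE-style upserts (in Delta this is expressed as `MERGE INTO ... WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT`, or via `DeltaTable.merge` in PySpark). A dictionary-based sketch of that semantics, with invented keys and values, just to show the matched/not-matched split:

```python
def upsert(target, updates, key="id"):
    """Merge semantics: update rows whose key matches, insert new ones.

    Mirrors (in spirit) Delta Lake's MERGE INTO ... WHEN MATCHED
    THEN UPDATE / WHEN NOT MATCHED THEN INSERT.
    """
    by_key = {row[key]: dict(row) for row in target}
    for row in updates:
        by_key.setdefault(row[key], {}).update(row)  # update or insert
    return sorted(by_key.values(), key=lambda r: r[key])

target = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]
updates = [{"id": 2, "amount": 25}, {"id": 3, "amount": 30}]
print(upsert(target, updates))
# [{'id': 1, 'amount': 10}, {'id': 2, 'amount': 25}, {'id': 3, 'amount': 30}]
```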

Posted 1 month ago

Apply

6.0 - 10.0 years

0 - 2 Lacs

Gurugram

Remote

We are seeking an experienced AWS Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience building and managing scalable data pipelines on AWS using Databricks, along with a deep understanding of the Software Development Life Cycle (SDLC), and will play a critical role in enabling our data architecture, driving data quality, and ensuring the reliable and efficient flow of data throughout our systems.

Required Skills
- 7+ years of comprehensive experience as a Data Engineer, with expertise in AWS services (S3, Glue, Lambda, etc.).
- In-depth knowledge of Databricks, pipeline development, and data engineering.
- 2+ years of experience with Databricks for data processing and analytics.
- Architect and design pipelines, e.g. Delta Live Tables.
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Experience with ETL/ELT tools and processes in a cloud environment.
- Familiarity with big data processing frameworks (e.g., Apache Spark).
- Experience with data modeling, data warehousing, and building scalable architectures.
- Understand and implement security aspects when consuming data from different sources.

Preferred Qualifications
- Experience with Apache Airflow or other workflow orchestration tools; Terraform, Python, and Spark preferred.
- AWS Certified Solutions Architect, AWS Certified Data Analytics - Specialty, or similar certifications.
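Pipelines that land data in S3 for Glue and Spark to consume conventionally use Hive-style partition prefixes. A small sketch of building such a prefix (the bucket and table names are invented for illustration):

```python
from datetime import date

def partition_path(bucket, table, run_date):
    """Build a Hive-style partition prefix of the kind Glue crawlers
    discover and Spark jobs write to (bucket/table names illustrative)."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={run_date:%Y}/month={run_date:%m}/day={run_date:%d}/"
    )

print(partition_path("analytics-lake", "orders", date(2024, 7, 9)))
# s3://analytics-lake/orders/year=2024/month=07/day=09/
```

Partitioning by date this way lets downstream queries prune to the days they need instead of scanning the whole table.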

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
