265 Parquet Jobs - Page 11


5.0 - 10.0 years

8 - 18 Lacs

Hyderabad

Work from Office

Position Title: Senior Software Engineer
Job Location: Hyderabad
Education: CSE, ECE, IT, EEE
Essential: Python, Vue JS, JavaScript, PostgreSQL
Desired: Deployment, GitLab, Linux
Knowledge: AWS, Docker, ETL, Keycloak
Experience: 4 to 8 years. Experience in building and deploying web applications using the Python, Vue JS, and React ecosystem. Experience in the deployment process of web servers (Django) and Vue JS / React JS using Nginx or Apache.
Summary of Work Environment and Work Performed: Develop and maintain web-based applications (including mobile web) using mainly the Python and JavaScript programming languages and the Django, Vue JS, and React JS frameworks.
Specific Duties: Full stack developer who can u...
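Purely as illustration of the stack this role names (not part of the posting), a minimal sketch of a Django class-based view returning JSON; the view name and data are hypothetical placeholders.

```python
# Hypothetical sketch of a Django JSON endpoint; not from the posting.
from django.http import JsonResponse
from django.views import View

class JobListView(View):
    """Returns a JSON list of jobs; the data here is a hard-coded placeholder."""
    def get(self, request):
        jobs = [{"id": 1, "title": "Senior Software Engineer", "location": "Hyderabad"}]
        return JsonResponse({"results": jobs})
```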

Posted 5 months ago


5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Engineer (ML & Azure Platform)
Location: Bangalore
Experience: 5-10 years
Joining Timeframe: Only candidates who can join within 1 month will be considered.
Job Description: We are seeking a skilled Senior Data Engineer to work on end-to-end data engineering and data science use cases. The ideal candidate will have strong expertise in Python or Scala, Spark (Databricks), and SQL, and experience building scalable and efficient data pipelines on Azure.
Primary Skills: Azure Data Platform (Data Factory, Databricks). Strong experience in SQL and Python or Scala. Experience with ETL/ELT pipelines and transformations. Knowledge of Spark, Delta Lake, Parquet, and Big Data techn...
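As a rough sketch of the pipeline work described above (illustrative only, not from the posting), a minimal PySpark job that reads raw Parquet and writes a partitioned Delta table; the paths and column names are hypothetical, and the delta-spark package is assumed.

```python
# Hypothetical sketch: read raw Parquet, de-duplicate, write partitioned Delta.
# Paths and column names are placeholders; requires the delta-spark package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("parquet-to-delta").getOrCreate()

raw = spark.read.parquet("/mnt/raw/events")            # hypothetical source path
cleaned = (raw
           .dropDuplicates(["event_id"])               # assumed key column
           .withColumn("ingest_date", F.current_date()))

(cleaned.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("ingest_date")
        .save("/mnt/curated/events"))                  # hypothetical sink path
```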

Posted 5 months ago


5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore
Technical Skills: 5+ years of experience as an AWS Data Engineer with AWS S3, Glue Catalog, Glue Crawler, Glue ETL, and Athena. Write Glue ETLs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3. Execute Glue crawlers to catalog S3 files, creating a catalog of S3 files for easier querying. Create SQL queries in Athena. Define data lifecycle management for S3 files. Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio. Ability to connect Glue ETLs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3. Proficien...
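The workflow above is concrete enough to sketch. Below is a hedged outline of a Glue ETL job that reads a catalogued RDS table and writes Parquet to S3; the database, table, and bucket names are hypothetical placeholders.

```python
# Hypothetical Glue ETL sketch: catalogued RDS table -> Parquet in S3.
# Database, table, and bucket names are placeholders.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Source: a Glue Catalog table previously crawled from RDS (SQL Server/Oracle)
source = glue_context.create_dynamic_frame.from_catalog(
    database="rds_catalog",          # hypothetical catalog database
    table_name="orders")             # hypothetical table

# Sink: Parquet in S3; a Glue crawler can then catalog it for Athena queries
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://my-datalake/orders/"},
    format="parquet")
```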

Posted 5 months ago


3.0 - 7.0 years

6 - 10 Lacs

Mumbai

Work from Office

Role Overview: Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based Lakehouse architecture.
Key Responsibilities: Design Kafka topics, partitions, and schema registry. Implement producer-consumer apps using Spark Structured Streaming. Set up Kafka Connect, monitoring, and alerts. Ensure secure, scalable message delivery.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise: Deep understanding of Kafka internals and ecosystem. Integration with Cloudera and NiFi. Schema evolution and serialization (Avro, Parquet). Performance tuning and fault tolerance. Prefe...
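A minimal sketch of the producer-consumer pattern named above, using Spark Structured Streaming as the Kafka consumer; the broker address and topic name are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Hypothetical consumer sketch: Spark Structured Streaming reading a Kafka topic.
# Requires the spark-sql-kafka connector; broker and topic are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
          .option("subscribe", "events")                     # hypothetical topic
          .load())

# Kafka delivers key/value as binary; cast the value for downstream parsing
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (parsed.writeStream
         .format("console")    # swap for a Delta/Parquet sink in a real pipeline
         .outputMode("append")
         .start())
query.awaitTermination()
```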

Posted 5 months ago


5.0 - 8.0 years

7 - 10 Lacs

Mumbai

Work from Office

So, what's the job? You'll lead the design, development, and optimization of scalable, maintainable, and high-performance ETL/ELT pipelines using Informatica IDMC CDI. You'll manage and optimize cloud-based storage environments, including AWS S3 buckets. You'll implement robust data integration solutions that ingest, cleanse, transform, and deliver structured and semi-structured data from diverse sources to downstream systems and data warehouses. You'll support data integration from source systems, ensuring data quality and completeness. You'll automate data loading and transformation processes using tools such as Python, SQL, and orchestration frameworks. You'll contribute to the strategic t...
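Since the role mentions automating data loading with Python alongside IDMC, here is a minimal boto3 sketch (illustrative only) of staging a cleansed file into S3; the bucket and key names are hypothetical.

```python
# Hypothetical sketch: serialize cleansed rows to CSV in memory, stage to S3.
# Bucket and key names are placeholders.
import csv
import io
import boto3

s3 = boto3.client("s3")

def stage_cleansed_rows(rows, bucket="my-staging-bucket", key="cleansed/data.csv"):
    """Write rows as CSV to an in-memory buffer and upload to S3."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    s3.put_object(Bucket=bucket, Key=key, Body=buf.getvalue().encode("utf-8"))

stage_cleansed_rows([["id", "amount"], [1, 250.0]])
```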

Posted 5 months ago


10.0 - 15.0 years

25 - 35 Lacs

Pune

Work from Office

Education and Qualifications: Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
Work Experience: Minimum 10 years of experience in the data analytics field. Minimum 6 years of experience running operations and support in a Cloud Data Lakehouse environment. Experience with Azure Databricks. Experience in building and optimizing data pipelines, architectures, and data sets. Excellent experience in Scala or Python. Ability to troubleshoot and optimize complex queries on the Spark platform. Knowledgeable in structured and unstructured data design/modeling, data access, and data storage techniques. Experience with DevOps tools and environments. Technical / Pro...
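As a small illustration of the query troubleshooting this role asks for (not from the posting), two everyday Spark techniques: inspecting the physical plan and repartitioning on a join key; the table and column names are hypothetical.

```python
# Hypothetical tuning sketch: inspect the plan, then repartition on the join key.
# Table and column names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("query-tuning").getOrCreate()

orders = spark.read.table("sales")          # hypothetical catalog tables
customers = spark.read.table("customers")
joined = orders.join(customers, "customer_id")

joined.explain(True)                        # parsed, optimized, and physical plans

# Repartitioning on the join key before a wide aggregation can reduce shuffle skew
result = (joined.repartition("customer_id")
                .groupBy("customer_id")
                .count())
```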

Posted 5 months ago


9 - 11 years

37 - 40 Lacs

Ahmedabad, Bengaluru, Mumbai (All Areas)

Work from Office

Dear Candidate, We are hiring a Scala Developer to work on high-performance distributed systems, leveraging the power of functional and object-oriented paradigms. This role is perfect for engineers passionate about clean code, concurrency, and big data pipelines. Key Responsibilities: Build scalable backend services using Scala and the Play or Akka frameworks. Write concurrent and reactive code for high-throughput applications. Integrate with Kafka, Spark, or Hadoop for data processing. Ensure code quality through unit tests and property-based testing. Work with microservices, APIs, and cloud-native deployments. Required Skills & Qualifications: Proficient in Scala, with a strong grasp o...

Posted 5 months ago


4.0 - 8.0 years

4 - 8 Lacs

Gurugram

Work from Office

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses. Your Role: Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, and Azure Databricks. Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, Semantic Model. Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams. Strong understanding of Delta Lake, Parquet, and distributed...
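Delta Lake is listed among the required skills; as an illustration only, a minimal PySpark sketch of a Delta MERGE (upsert), a common lakehouse pattern in Fabric and Databricks; the paths and key column are hypothetical, and the delta-spark package is assumed.

```python
# Hypothetical Delta MERGE (upsert) sketch; requires the delta-spark package.
# Paths and the key column are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

target = DeltaTable.forPath(spark, "/lakehouse/Tables/customers")   # hypothetical
updates = spark.read.parquet("/lakehouse/Files/customer_updates")   # hypothetical

(target.alias("t")
       .merge(updates.alias("u"), "t.customer_id = u.customer_id")
       .whenMatchedUpdateAll()      # update existing rows
       .whenNotMatchedInsertAll()   # insert new rows
       .execute())
```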

Posted Date not available


5.0 - 8.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Roles and Responsibilities: 4+ years of experience as a data developer using Python. Knowledge of Spark and PySpark preferable but not mandatory. Azure Cloud experience preferred; alternate cloud experience is fine. Preferred experience in the Azure platform, including Azure Data Lake, Databricks, and Data Factory. Working knowledge of different file formats such as JSON, Parquet, CSV, etc. Familiarity with data encryption and data masking. Database experience in SQL Server is preferable; preferred experience in NoSQL databases like MongoDB. Team player, reliable, self-motivated, and self-disciplined.
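To make the file-format and masking requirements concrete, a minimal PySpark sketch (illustrative only) reading each format and hashing a sensitive field; the paths and the email column are hypothetical.

```python
# Hypothetical sketch: read JSON/Parquet/CSV, mask a sensitive column by hashing.
# Paths and the email column are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("formats-and-masking").getOrCreate()

csv_df = spark.read.option("header", "true").csv("/data/users.csv")
json_df = spark.read.json("/data/events.json")
parquet_df = spark.read.parquet("/data/orders.parquet")

# Simple masking: replace the email column with its SHA-256 digest
masked = csv_df.withColumn("email", F.sha2(F.col("email"), 256))
masked.write.mode("overwrite").parquet("/data/users_masked.parquet")
```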

Posted Date not available


5.0 - 10.0 years

9 - 19 Lacs

Pune, Chennai, Bengaluru

Work from Office

Experience: 4-10 years
Location: Pan India
Notice: Immediate to 15 days
Required Skills & Qualifications: Proven experience with Databricks, including Unity Catalog in production environments. Strong understanding of data governance, security, and compliance frameworks. Hands-on expertise in PySpark, Parquet, and data engineering tools. Experience with Delta Lake, Delta Sharing, and data quality frameworks. Familiarity with cloud platforms (AWS, Azure, or GCP). Excellent problem-solving and communication skills. Ability to work independently and in a collaborative team environment.
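Unity Catalog is the headline skill here; as a small illustrative sketch, its three-level namespace and a grant, assuming a Databricks notebook where `spark` is predefined; the catalog, schema, table, and group names are hypothetical.

```python
# Hypothetical Unity Catalog sketch; assumes a Databricks notebook where
# `spark` is predefined. Catalog/schema/table/group names are placeholders.
df = spark.table("main.sales.orders")   # three-level namespace: catalog.schema.table
df.show(5)

# Governance is expressed as SQL grants against the same namespace
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")
```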

Posted Date not available


5.0 - 7.0 years

18 - 22 Lacs

Chennai

Work from Office

We are hiring a mid-level GCP Data Engineer with 4+ years of experience in ETL, Data Warehousing, and Data Engineering. Must have hands-on GCP expertise, strong data analysis skills, and a solid grasp of data warehousing principles.
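For a concrete flavor of the hands-on GCP work implied here (illustrative only), a minimal query with the BigQuery Python client; the project, dataset, and table names are hypothetical.

```python
# Hypothetical BigQuery sketch; project, dataset, and table are placeholders.
# Uses application default credentials.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT order_date, SUM(amount) AS total
    FROM `my-project.warehouse.orders`
    GROUP BY order_date
"""
for row in client.query(query).result():
    print(row.order_date, row.total)
```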

Posted Date not available


9.0 - 12.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit
Responsibilities: Build robust, performant, highly scalable, and flexible data pipelines with a focus on time to market with quality. Act as an active team member to ensure high code quality (unit testing, regression tests) delivered on time and within budget. Document the delivered code/solution. Participate in the implementation of releases following the change and release management processes. Provide support to the operations team in case of major incidents for which engineering knowledge is required. Participate in effort estimations. Provide solutions (bug fixes) for problem management. Additional ...
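Since the responsibilities stress unit testing for code quality, a minimal pytest sketch for a small pipeline transformation; the normalize_amount function is a hypothetical example, not taken from the posting.

```python
# Hypothetical pytest sketch for a small transform; not from the posting.
import pytest

def normalize_amount(record):
    """Hypothetical transform: coerce the amount field to a float."""
    return {**record, "amount": float(record["amount"])}

def test_normalize_amount_parses_strings():
    assert normalize_amount({"id": 1, "amount": "42.5"})["amount"] == 42.5

def test_normalize_amount_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_amount({"id": 2, "amount": "not-a-number"})
```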

Posted Date not available


1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Overview: Our team is responsible for building and maintaining the Fund ESG Ratings platform, a business-critical component that powers MSCI's ESG offerings at the fund level. We manage the end-to-end lifecycle of fund-level ESG data, from ingestion, transformation, and scoring to final delivery to internal consumers and external clients. Key responsibilities of the team include: designing scalable data pipelines using Databricks, PySpark, and BigQuery; supporting regulatory-compliant ESG scoring and reporting logic; partnering with ESG Research and Index teams to translate methodology changes into platform features; ensuring data quality, platform stability, and SLA adherence; driving automa...
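As an illustrative sketch of a Databricks-to-BigQuery hand-off like the one this team describes, a PySpark write via the spark-bigquery connector; the table name, bucket, and paths are hypothetical, and the connector is assumed to be installed on the cluster.

```python
# Hypothetical sketch: write a scored DataFrame to BigQuery from Spark.
# Assumes the spark-bigquery connector is on the cluster; names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("esg-scores").getOrCreate()

scores = spark.read.format("delta").load("/mnt/esg/fund_scores")  # hypothetical

(scores.write
       .format("bigquery")
       .option("table", "my-project.esg.fund_scores")    # hypothetical table
       .option("temporaryGcsBucket", "my-temp-bucket")   # staging bucket for loads
       .mode("overwrite")
       .save())
```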

Posted Date not available


4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Lake Implementation Specialist
Experience: 10-12+ Years
Location: Bangalore
Type: Full-time / Contract
Notice Period: Immediate
Job Summary: We are looking for a highly experienced and sharp Data Lake Implementation Specialist to lead and execute scalable data lake projects using technologies such as Apache Hudi, Hive, Python, Spark, Flink, and cloud-native tools on AWS or Azure. The ideal candidate must have deep expertise in designing and optimizing modern data lake architectures, with strong programming skills and data engineering capabilities.
Key Responsibilities: Design, develop, and implement robust data lake architectures on cloud platforms (AWS/Azure). Implement s...
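Apache Hudi is the core technology named here; as illustration only, a minimal PySpark upsert into a Hudi table; the paths, record key, and precombine field are hypothetical, and the hudi-spark bundle is assumed to be available.

```python
# Hypothetical Hudi upsert sketch; requires the hudi-spark bundle.
# Paths, record key, and precombine field are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hudi-write").getOrCreate()

df = spark.read.parquet("/raw/trips")                    # hypothetical source

hudi_options = {
    "hoodie.table.name": "trips",
    "hoodie.datasource.write.recordkey.field": "trip_id",      # assumed key
    "hoodie.datasource.write.precombine.field": "updated_at",  # assumed version column
    "hoodie.datasource.write.operation": "upsert",
}

df.write.format("hudi").options(**hudi_options).mode("append").save("/lake/trips")
```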

Posted Date not available


3.0 - 8.0 years

2 - 6 Lacs

Pune

Work from Office

ICE Mortgage Technology is driving value to every customer through our effort to automate everything that can be automated in the residential mortgage industry. Our integrated solutions touch each aspect of the loan lifecycle, from the borrower's "point of thought" through e-Close and secondary solutions. Drive real automation that reduces manual workflows, increases productivity, and decreases risk. You will be working in a dynamic product development team while collaborating with other developers, management, and customer support teams. You will have an opportunity to participate in designing and developing services utilized across product lines. The ideal candidate should possess a produc...

Posted Date not available
