
7 Ingestion Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

Salary not disclosed

Bengaluru, Karnataka, India

On-site

Greetings from ZettaMine! We are hiring a Data Engineer. Experience: 6 to 10 years. Location: Bangalore. Looking for immediate joiners only.

Skills required: Python, SQL, ETL; Azure Data Services (ADF, Synapse) and Databricks; data pipelines and ingestion (batch & streaming); data migration; strong data modelling, transformation, and performance tuning.

Interested candidates can share an updated CV at [HIDDEN TEXT]. Thanks & Regards, Afreen
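
For context only, this is not part of the posting: the stack above centers on batch ingestion with Python, Spark, and Azure data services, and a minimal sketch of that kind of job might look like the following. All paths and column names are hypothetical; a real ADF/Databricks job would receive them as pipeline parameters.

```python
# Minimal PySpark batch-ingestion sketch. Paths and column names
# are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("batch_ingest_demo").getOrCreate()

# Ingest raw CSV landed by an upstream copy activity.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("/landing/orders/2024-06-01/"))

# Transformation and performance-minded dedup before the load.
clean = (raw
         .withColumn("order_date", F.to_date("order_date"))
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0))

# Write partitioned Parquet for downstream warehouse consumption.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")
```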

Posted 2 days ago


4.0 - 10.0 years

Salary not disclosed

Chennai, Tamil Nadu

On-site

As a Developer specializing in data modeling and ingestion, you will design, develop, and manage scalable database solutions using MongoDB. Your role will involve writing robust, efficient, and scalable queries and operations for MongoDB-based applications, and integrating third-party services, tools, and APIs with MongoDB for data management and processing.

Collaboration will be a key part of the job: you will work closely with developers, data engineers, and stakeholders to ensure seamless integration of MongoDB with applications and systems. You will run unit, integration, and performance tests to guarantee the stability and functionality of MongoDB implementations, and conduct code and database reviews to ensure adherence to security, scalability, and best practices in MongoDB development. Prior experience with Snowflake is a plus.

This is a full-time, in-person role on a Monday-to-Friday schedule, with benefits including health insurance and Provident Fund. The ideal candidate has at least 4 years of experience in MongoDB, data modeling, and ingestion. Join our team and contribute to the development of cutting-edge database solutions using MongoDB!
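
As a rough illustration of the modeling, indexing, and query work this role describes (not part of the posting), here is a minimal PyMongo sketch; the connection string, database, collection, and fields are all hypothetical:

```python
# PyMongo sketch of ingestion, indexing, and aggregation.
# Connection string, database, and fields are hypothetical.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["inventory_demo"]["orders"]

# Ingest a small batch of documents.
orders.insert_many([
    {"sku": "A-100", "qty": 3, "status": "shipped"},
    {"sku": "B-200", "qty": 1, "status": "pending"},
])

# Index the fields the application filters on most often.
orders.create_index([("status", ASCENDING), ("sku", ASCENDING)])

# Aggregation: total quantity shipped per SKU.
for row in orders.aggregate([
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$sku", "total_qty": {"$sum": "$qty"}}},
]):
    print(row)
```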

Posted 5 days ago


5.0 - 9.0 years

0 - 17 Lacs

Hyderabad

Work from Office

Experience: 3-5 years of prior Product Management experience working with data warehouses, cloud platforms, or AdTech platforms. Bachelor's or Master's degree in Computer Science, Information Systems, Business, or a related field.

- Ability to lead demonstrations for technical and non-technical audiences
- Deep understanding of APIs and how they operate
- Proven ability to create product artifacts, including product requirement documents (PRDs), epics, story maps, and OKRs
- High-level understanding of the product development lifecycle (PDLC)
- Basic understanding of coding and software development
- Excellent attention to detail
- Excellent written and verbal communication skills
- Type S(tartup) personality: smart, ethical, friendly, hard-working, and proactive (no exceptions)
- Great problem solver: you thrive on finding creative solutions to poorly defined problems that impact customers

Bonus points:
- Prior experience with relational databases, data warehouses, and cloud environments
- Knowledge of data science principles

Posted 5 days ago


5.0 - 10.0 years

10 - 20 Lacs

Hyderabad, Pune, Gurugram

Hybrid

Role: Data Engineer
Experience: 5+ years
Location: Pune, Gurgaon & Bangalore (hybrid)
Shift time: 12:00 PM - 10:00 PM

Must have:
- Experience working with AWS, Redshift, and Python
- Prior exposure to data ingestion and curation work (such as working with a data lakehouse)
- SQL knowledge for data analysis and investigation
- Ability to help and support the Data Product Owner in managing and delivering on the product technical roadmap
- Ability to digest and understand what the data is, how it is derived, the meaning/context around the data itself, and how the data fits into the NFL's data model
- Working knowledge of Confluence and JIRA

Good to have:
- Master's degree in computer science, statistics, or a related discipline
- 5+ years as a data/business analyst or business intelligence developer/analytics engineer
- Proficiency and/or certification in cloud data technologies
- Hands-on experience with API integration and OneTrust
- Comfortable making decisions and leading
- Familiarity with version control and relational databases
- Superior oral and written communication skills
- Positive contributor and strong team member who loves to work with and empower others
- Time management and project management skills

Responsibilities:
- Develop data pipelines using Python and SQL on the AWS platform (see the sketch after this listing).
- Document and capture the use cases and business logic/rules for the assigned data domains, working with Data Analysts and Data Product Owners across domains to ensure alignment across the entire data platform.
- Gather and capture the technical specifications for incorporating a variety of data sources into the model, working with internal and external partners and vendors to understand and capture the integration method and pattern.
- Ensure specifications cover how to integrate the data, including any transformations/logic required for data validation, standardization, curation, and reporting to fulfil the relevant use cases.
- Work with internal stakeholders to define and capture fulfilment requirements such as outbound data deliveries, reporting, and metrics.
- Provide support during UAT and release-management tasks such as smoke testing against requirements.
- Prioritize and manage ad-hoc requests in parallel with ongoing sprints.
- Work with the team to execute sound solutions and approaches that meet business expectations efficiently.
- Work with Data Engineers, Data Architects, Data Governance, and QA to create and review pipelines and the ingestion, storage, wrangling, cataloguing, quality, and curation of various data sources.
- Work with the Data Product Owners to help manage and deliver on the product technical roadmap for the data platform.
- Use Scrum and Agile methodologies to coordinate global delivery teams, run scrum ceremonies, manage backlog items, and handle escalations.

Education: BE/B.Tech/MS/M.Tech/ME from a reputed institute.

Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply; there might be a suitable/unique role for you tomorrow!
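
As a rough sketch of the AWS/Redshift/Python pipeline step named in the responsibilities (not from the posting; the bucket, cluster, table, and IAM role are hypothetical, and credentials are assumed to come from the environment):

```python
# Sketch of one S3-to-Redshift ingestion step. All resource
# names are hypothetical.
import boto3

s3 = boto3.client("s3")
rsd = boto3.client("redshift-data")

# Stage the daily extract in S3 (assumes the local file exists).
s3.upload_file("daily_stats.csv", "demo-lake-bucket", "landing/daily_stats.csv")

# COPY the staged file into Redshift via the Data API.
rsd.execute_statement(
    ClusterIdentifier="demo-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=(
        "COPY staging.daily_stats "
        "FROM 's3://demo-lake-bucket/landing/daily_stats.csv' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/demo-redshift-copy' "
        "CSV IGNOREHEADER 1;"
    ),
)
```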

Posted 3 weeks ago


8.0 - 12.0 years

18 - 27 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Job description:
- Design, implement, and maintain data pipelines and data integration solutions using Azure Synapse.
- Develop and optimize data models and data storage solutions on Azure.
- Collaborate with data scientists and analysts to implement data processing and transformation tasks (see the notebook sketch after this listing).
- Ensure data quality and integrity through data validation and cleansing methodologies.
- Monitor and troubleshoot data pipelines to identify and resolve performance issues.
- Collaborate with cross-functional teams to understand and prioritize data requirements.
- Stay up to date with the latest trends and technologies in data engineering and Azure services.

Skills & qualifications:
- Bachelor's degree in IT, computer science, computer engineering, or similar.
- 8+ years of experience in data engineering.
- Microsoft Azure Synapse Analytics experience is essential (Azure Data Factory, Dedicated SQL Pool, Lake Database, Azure Storage).
- Hands-on experience with Spark notebooks (Python or Scala) is mandatory.
- End-to-end data warehouse experience: ingestion, ETL, big data pipelines, data architecture, message queuing, BI/reporting, and data security.
- Advanced SQL/relational database knowledge and query authoring.
- Demonstrated experience in designing and delivering data platforms for Business Intelligence and Data Warehousing.
- Strong skills in handling and analysing complex, high-volume data with excellent attention to detail.
- Knowledge of data modelling and data warehousing concepts such as Data Vault or 3NF.
- Experience with data governance (quality, lineage, data dictionary, and security).
- Familiarity with Agile methodology and an Agile working environment.
- Ability to work independently with POs, BAs, and Architects.
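
A minimal sketch of the kind of Synapse Spark notebook validation step this posting describes, assuming hypothetical ADLS paths and quality rules (not from the posting):

```python
# Sketch of a validation/cleansing cell. In a Synapse notebook
# `spark` is predefined; it is created here so the sketch also
# runs standalone. Paths and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("synapse_validation_demo").getOrCreate()

events = spark.read.parquet("abfss://raw@demolake.dfs.core.windows.net/events/")

# Quarantine rows that fail basic quality rules.
valid = events.filter(F.col("event_id").isNotNull() &
                      (F.col("ts") <= F.current_timestamp()))
invalid = events.subtract(valid)

valid.write.mode("append").parquet("abfss://curated@demolake.dfs.core.windows.net/events/")
invalid.write.mode("append").parquet("abfss://quarantine@demolake.dfs.core.windows.net/events/")
```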

Posted 1 month ago


5.0 - 10.0 years

8 - 18 Lacs

Bengaluru

Work from Office

Data Engineers/Analysts who can create the data models for Apromore ingestion.
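
For context, Apromore is a process-mining platform that ingests event logs, and its core data model is a table of case ID, activity, and timestamp. A rough pandas sketch of shaping source data into that model (all column names hypothetical, not from the posting):

```python
# Sketch: reshape raw transactions into the case/activity/timestamp
# event-log model that process-mining tools such as Apromore ingest.
# The source columns are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "order_id":  [101, 101, 102],
    "step":      ["created", "shipped", "created"],
    "step_time": ["2024-06-01 09:00", "2024-06-02 14:30", "2024-06-01 10:15"],
})

event_log = raw.rename(columns={
    "order_id": "case_id",
    "step": "activity",
    "step_time": "timestamp",
})
event_log["timestamp"] = pd.to_datetime(event_log["timestamp"])

# Apromore's CSV importer maps these columns at upload time.
event_log.sort_values(["case_id", "timestamp"]).to_csv("event_log.csv", index=False)
```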

Posted 1 month ago


5.0 - 10.0 years

12 - 22 Lacs

Bengaluru

Remote

Role & responsibilities

Technical capability:
- Foundry Certified (Data Engineering); Foundry Certified (Foundational)
- Microsoft Certified: Azure AI Fundamentals; Microsoft Certified: Azure Fundamentals; Microsoft Certified: Azure Data Engineer Associate
- Foundry tooling: Ontology Manager, Pipeline Builder, Data Lineage, Object Explorer
- SQL, Python & Scala
- Good knowledge of Azure cloud, ADF & Databricks
- Spark (PySpark & Scala Spark)
- Troubleshooting jobs and finding the root cause of issues
- Advanced ETL pipeline design for data ingestion & egress for batch data (a sketch follows below)

Experience: 4+ years

Soft skills:
- Good communication skills
- Good documentation skills for drafting problem definitions and solutions
- Ability to work independently with very little supervision, including engagement with product managers and technical/domain experts
- Ability to effectively gather requirements and propose solution designs
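
As a rough illustration of the Foundry pipeline work listed above (not from the posting), here is a minimal Python transform sketch; the dataset paths are hypothetical, and the code runs only inside a Foundry code repository where the transforms API is available:

```python
# Sketch of a Foundry Python transform for ingestion-side cleanup.
# Dataset paths are hypothetical.
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df

@transform_df(
    Output("/demo/datasets/events_clean"),
    raw=Input("/demo/datasets/events_raw"),
)
def events_clean(raw):
    # Drop rows with null keys and normalize the timestamp column.
    return (raw
            .filter(F.col("event_id").isNotNull())
            .withColumn("ts", F.to_timestamp("ts")))
```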

Posted 2 months ago


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
