Experience: 2.0 - 7.0 years
Salary: 8 - 11 Lacs
Location: Noida
Work mode: Work from Office
Responsibilities:
Data Pipeline Development: Design, build, and maintain robust, scalable data pipelines to support data processing, transformation, and analysis.
Data Modeling: Develop and implement data models and schemas to optimize data storage, retrieval, and analysis, ensuring efficiency and accuracy.
Data Integration: Integrate data from various sources and systems, including databases, APIs, and external sources, to create unified datasets for analysis and reporting.
ETL Processes: Design and implement efficient Extract, Transform, Load (ETL) processes to extract data from source systems, transform it into a usable format, and load it into data warehouses or other storage systems.
Data Quality Assurance: Implement data quality checks and validation processes to ensure accuracy, completeness, and consistency across datasets.
Performance Optimization: Optimize data pipelines and processing workflows for performance, scalability, and cost-effectiveness, leveraging cloud computing platforms and distributed computing frameworks.
Data Governance: Establish and enforce data governance policies and standards, ensuring compliance with data privacy regulations and security best practices.
Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver solutions that meet business needs.
Documentation and Maintenance: Document data pipelines, workflows, and processes, and provide ongoing maintenance and support to ensure the reliability and integrity of the data infrastructure.

Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of experience in data engineering or related roles, with a strong focus on building and maintaining data pipelines and infrastructure.
Proficiency in programming languages such as Python, SQL, and/or Scala.
Experience with data warehousing technologies (e.g., Snowflake, Redshift, BigQuery) and distributed computing frameworks (e.g., Spark).
Solid understanding of data modeling concepts, relational and non-relational databases, and data storage solutions (e.g., HDFS, S3).
Experience with data integration tools and technologies (e.g., Apache Kafka, Apache NiFi).
Strong problem-solving skills and the ability to analyze complex data challenges and propose effective solutions.
Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
A proactive attitude toward learning and professional development, with a passion for exploring new technologies and approaches in data engineering.

Preferred:
Master's degree in Computer Science, Engineering, or a related field.
Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and associated services (e.g., AWS Glue, Azure Data Factory).
Knowledge of data governance frameworks and data privacy regulations (e.g., GDPR, CCPA).
Certifications in data engineering or related fields (e.g., AWS Certified Big Data - Specialty, Google Professional Data Engineer).
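To make the extract-transform-load responsibility above concrete, here is a minimal sketch of such a pipeline using only the Python standard library. All names (the `sales` table, the CSV fields) are illustrative assumptions, not part of the role description; a production pipeline would target a real warehouse rather than in-memory SQLite.

```python
# Minimal ETL sketch: extract rows from a CSV source, transform and
# validate them, and load them into a SQLite "warehouse" table.
import csv
import io
import sqlite3

def extract(source):
    """Extract: read raw records from the source system (here, CSV text)."""
    return list(csv.DictReader(source))

def transform(rows):
    """Transform: normalize types/values and drop incomplete records
    (a basic data-quality check)."""
    clean = []
    for row in rows:
        if not row.get("amount"):
            continue  # quality check: skip rows missing a required field
        clean.append((row["id"], row["region"].strip().upper(), float(row["amount"])))
    return clean

def load(rows, conn):
    """Load: write transformed rows into the warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id TEXT, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

# Hypothetical source data; row 2 is missing "amount" and gets filtered out.
source = io.StringIO("id,region,amount\n1,north,10.5\n2,south,\n3,east,7.25\n")
conn = sqlite3.connect(":memory:")
load(transform(extract(source)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
```

The three-function split mirrors the E/T/L stages named in the posting, so each stage can be tested and optimized independently.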
Posted 20 hours ago