Experience: 2.0 - 6.0 years
Salary: 0 Lacs
Location: Chennai, Tamil Nadu
Work mode: On-site
As a GCP Data Engineer specializing in Data Migration & Transformation, you will be responsible for designing and building robust, scalable data pipelines and architectures on Google Cloud Platform (GCP), with a particular focus on BigQuery. Your primary tasks will involve migrating and transforming large-scale data systems and datasets to GCP with an emphasis on performance, scalability, and reliability. You will automate data lineage extraction and ensure data integrity across systems and platforms. Collaborating closely with architects and stakeholders, you will implement GCP-native and third-party tools for data ingestion, integration, and transformation.

Your role will also include developing and optimizing complex SQL queries in BigQuery for data analysis and transformation. You will operationalize data pipelines using tools such as Apache Airflow (Cloud Composer), Dataflow, and Pub/Sub, and enable machine learning capabilities through well-structured, ML-friendly data pipelines. Participation in Agile processes and contributions to technical design discussions, code reviews, and documentation are integral parts of the role.

Your background should include at least 5 years of experience in Data Warehousing, Data Engineering, or similar roles, with a minimum of 2 years of hands-on experience with GCP BigQuery. Proficiency in Python, SQL, Apache Airflow, and GCP services including BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Functions is essential. You should have experience in data pipeline automation, data modeling, and building reusable data products. A solid understanding of data lineage, metadata integration, and data cataloging, preferably using tools such as GCP Data Catalog and Informatica EDC, will be beneficial. A demonstrated ability to analyze complex datasets, derive actionable insights, and build and deploy analytics platforms in cloud environments, preferably GCP, is required.

Preferred skills include strong analytical and problem-solving capabilities, exposure to machine learning pipeline architecture and model deployment workflows, excellent communication skills, and the ability to collaborate effectively with cross-functional teams. Familiarity with Agile methodologies and DevOps best practices, a self-driven and innovative mindset, and experience documenting complex data engineering systems and developing test plans will be advantageous.
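To give a concrete flavour of the pipeline work described above, below is a minimal sketch of a Cloud Composer (Apache Airflow) DAG that runs a BigQuery transformation and then publishes a completion event to Pub/Sub. The project, dataset, table, and topic names are hypothetical placeholders, not details from this posting.

```python
# Minimal Cloud Composer (Airflow) DAG sketch: run a BigQuery transformation,
# then publish a completion event to Pub/Sub. Project, dataset, table, and
# topic names below are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.operators.pubsub import PubSubPublishMessageOperator

PROJECT_ID = "my-gcp-project"   # hypothetical
DATASET = "analytics"           # hypothetical
TRANSFORM_SQL = f"""
CREATE OR REPLACE TABLE `{PROJECT_ID}.{DATASET}.daily_sales_summary` AS
SELECT order_date, region, SUM(amount) AS total_amount
FROM `{PROJECT_ID}.{DATASET}.raw_orders`
GROUP BY order_date, region
"""

with DAG(
    dag_id="bq_migration_transform",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the transformation as a BigQuery query job.
    transform = BigQueryInsertJobOperator(
        task_id="transform_daily_sales",
        configuration={"query": {"query": TRANSFORM_SQL, "useLegacySql": False}},
        project_id=PROJECT_ID,
    )

    # Signal downstream consumers (e.g. ML feature builds) that fresh data is ready.
    notify = PubSubPublishMessageOperator(
        task_id="publish_completion_event",
        project_id=PROJECT_ID,
        topic="daily-sales-ready",  # hypothetical topic
        messages=[{"data": b"daily_sales_summary refreshed"}],
    )

    transform >> notify
```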
Posted 2 weeks ago
Experience: 5.0 - 9.0 years
Salary: 0 Lacs
Location: Karnataka
Work mode: On-site
As a Data Quality Integration Engineer, your primary responsibility will be to incorporate data quality capabilities into enterprise data landscapes. You will integrate advanced data quality tools such as Ataccama and Collibra with cloud data platforms like Snowflake and SQL databases, ensuring that data governance standards are met through robust, scalable, and automated data quality processes.

In this role, you will develop scalable applications using appropriate technical options and optimize application development, maintenance, and performance. You will implement integration of data quality tools with Snowflake and SQL-based platforms, develop automated pipelines and connectors for data profiling, cleansing, monitoring, and validation, and configure data quality rules aligned with governance policies and KPIs. Troubleshooting integration issues, monitoring performance, and collaborating with various teams to align solutions with business needs will also be part of your responsibilities.

You will adhere to coding standards, perform peer reviews, write optimized code, create and review design documents, templates, test cases, and checklists, and develop and review unit and integration test cases. Additionally, you will estimate effort for project deliverables, track timelines, perform defect RCA and trend analysis, propose quality improvements, and mentor team members while managing aspirations and keeping the team engaged.

To excel in this role, you must have strong experience with data quality tools such as Ataccama and Collibra, hands-on experience with Snowflake and SQL databases, proficiency in SQL scripting and data pipeline development (preferably Python or Scala), and a sound understanding of data profiling, cleansing, enrichment, and monitoring. Knowledge of REST APIs, metadata integration techniques, and cloud platforms such as AWS and Azure would be advantageous.

Soft skills such as strong analytical and problem-solving abilities, effective communication and presentation skills, and the ability to manage high-pressure environments and multiple priorities are essential. Certification in Ataccama, Collibra, Snowflake, AWS, or Azure, along with domain knowledge in enterprise data architecture and the financial services, insurance, or asset management domains, would be beneficial.
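As an illustration of the kind of automated data quality pipeline described above, here is a minimal sketch that runs rule-based profiling checks (null rate, duplicate keys) against a Snowflake table using the snowflake-connector-python package. The connection parameters, table name, and thresholds are hypothetical; in practice the results would typically be fed back to a governance tool such as Ataccama or Collibra via their REST APIs.

```python
# Minimal data quality check sketch against Snowflake: profile a table for
# null rates and duplicate keys, then report rule violations. Connection
# details, table name, and thresholds are hypothetical placeholders.
import os

import snowflake.connector

RULES = {
    "customer_id_null_rate": {
        "sql": """SELECT COUNT_IF(customer_id IS NULL) / NULLIF(COUNT(*), 0)
                  FROM analytics.public.customers""",
        "max_allowed": 0.0,  # hypothetical threshold: no nulls allowed
    },
    "customer_id_duplicate_count": {
        "sql": """SELECT COUNT(*) - COUNT(DISTINCT customer_id)
                  FROM analytics.public.customers""",
        "max_allowed": 0,    # hypothetical threshold: keys must be unique
    },
}


def run_checks() -> list[str]:
    """Execute each profiling rule and return the names of failed rules."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],  # hypothetical env vars
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="DQ_WH",
        database="ANALYTICS",
    )
    failures = []
    try:
        cur = conn.cursor()
        for name, rule in RULES.items():
            cur.execute(rule["sql"])
            measured = cur.fetchone()[0] or 0
            if measured > rule["max_allowed"]:
                failures.append(name)
                print(f"FAIL {name}: measured={measured}")
            else:
                print(f"PASS {name}: measured={measured}")
    finally:
        conn.close()
    return failures


if __name__ == "__main__":
    failed = run_checks()
    # In a real pipeline, failures would be pushed to Collibra/Ataccama
    # (e.g. via their REST APIs) and could fail the orchestration task.
    raise SystemExit(1 if failed else 0)
```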
Posted 2 months ago