Posted: 2 days ago
Platform: Remote
Full Time
🚫 Do NOT apply if:
You do not have expertise in BigQuery and GCP Cloud Functions.
Our client is a trusted global innovator of IT and business services, present in 50+ countries. They specialize in digital & IT modernization, consulting, managed services, and industry-specific solutions. With a commitment to long-term success, they empower clients and society to move confidently into the digital future.
Key Responsibilities:
● Design, develop, test, and maintain scalable ETL data pipelines using Python.
● Work extensively on Google Cloud Platform (GCP) services such as:
○ Dataflow for real-time and batch data processing
○ Cloud Functions for lightweight serverless compute
○ BigQuery for data warehousing and analytics
○ Cloud Composer for orchestration of data workflows (based on Apache Airflow); a minimal DAG sketch appears after the Key Responsibilities list below
○ Google Cloud Storage (GCS) for managing data at scale
○ IAM for access control and security
○ Cloud Run for containerized applications
● Should have experience in the following areas:
○ API framework: Python FastAPI
○ Processing engine: Apache Spark
○ Messaging and streaming data processing: Kafka
○ Storage: MongoDB, Redis/Bigtable
○ Orchestration: Airflow
● Perform data ingestion from various sources and apply transformation and
cleansing logic to ensure high-quality data delivery.
● Implement and enforce data quality checks, validation rules, and monitoring.
● Collaborate with data scientists, analysts, and other engineering teams to
understand data needs and deliver efficient data solutions.
● Manage version control using GitHub and participate in CI/CD pipeline
deployments for data projects.
● Write complex SQL queries for data extraction and validation from relational
databases such as SQL Server, Oracle, or PostgreSQL.
● Document pipeline designs, data flow diagrams, and operational support
procedures.
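To make the Cloud Composer, GCS, and BigQuery responsibilities above concrete, here is a minimal, hypothetical Airflow DAG sketch that loads CSV files from a GCS bucket into a BigQuery staging table and then runs a simple row-count check. The bucket, project, dataset, and table names are placeholders invented for this illustration, not details from the role.

# Minimal sketch of a Cloud Composer (Airflow) DAG: GCS -> BigQuery load,
# followed by a basic data-quality query. All resource names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="gcs_to_bigquery_daily",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Ingest raw CSV files landed in a GCS bucket into a staging table.
    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_events",
        bucket="example-raw-bucket",                                  # placeholder bucket
        source_objects=["events/{{ ds }}/*.csv"],
        destination_project_dataset_table="example_project.staging.events",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
    )

    # Simple data-quality step: run a row-count query over the loaded table.
    validate_rows = BigQueryInsertJobOperator(
        task_id="validate_row_count",
        configuration={
            "query": {
                "query": "SELECT COUNT(*) AS row_count FROM `example_project.staging.events`",
                "useLegacySql": False,
            }
        },
    )

    load_raw >> validate_rows

In a Composer environment, a DAG file like this would simply be uploaded to the environment's DAGs bucket; the example keeps orchestration, ingestion, and validation as separate tasks so failures surface at the right step.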
Required Skills:
● 5 years of hands-on experience in Python for backend or data engineering
projects.
● Strong understanding and working experience with GCP cloud services
(especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
● Solid understanding of data pipeline architecture, data integration, and
transformation techniques.
● Experience in working with version control systems like GitHub and knowledge of
CI/CD practices.
● Experience in Apache Spark, Kafka, Redis, FastAPI, Airflow, and GCP Cloud Composer DAGs.
● Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.); a brief illustrative validation query follows this list.
● Experience in data migrations from on-premises data sources to cloud platforms.
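As a brief, non-authoritative illustration of the SQL skill above, the sketch below runs a hypothetical duplicate-key validation query against PostgreSQL from Python. The connection string, schema, table, and column names are invented for the example.

# Sketch: SQL-based validation from Python against PostgreSQL.
# DSN, schema, table, and column names below are placeholders.
import psycopg2

VALIDATION_SQL = """
    SELECT order_id, COUNT(*) AS dup_count      -- hypothetical key column
    FROM sales.orders
    GROUP BY order_id
    HAVING COUNT(*) > 1
"""

def find_duplicate_orders(dsn: str) -> list:
    """Return (order_id, dup_count) rows that violate the uniqueness rule."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(VALIDATION_SQL)
            return cur.fetchall()

if __name__ == "__main__":
    duplicates = find_duplicate_orders("postgresql://user:pass@localhost:5432/analytics")
    if duplicates:
        raise SystemExit(f"Validation failed: {len(duplicates)} duplicate order_ids found")
    print("Validation passed: order_id is unique")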
Good to Have (Optional Skills):
● Experience working with Snowflake cloud data platform.
● Experience in deployments in GKE, Cloud Run.
● Hands-on knowledge of Databricks for big data processing and analytics.
● Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
Additional Details:
● Excellent problem-solving and analytical skills.
● Strong communication skills and ability to collaborate in a team environment.
People Prime Worldwide