We're Hiring Engineers (Informatica + GCP + SQL)

Note: Please read the post carefully and share only relevant profiles. We're specifically looking for professionals with solid experience in Informatica + GCP + SQL.

Location: Pune (Hybrid)

We are looking to hire Engineers for exciting data projects based in Pune!

Engineer Roles
Experience: 3-7 years
Skills:
- Strong hands-on experience with Informatica
- Strong exposure to Google Cloud Platform (GCP) (data pipelines, integration, etc.)
- Solid command of SQL
- Leadership in delivering data solutions

Interested or know someone who fits the bill? Please send your resume to pranita.thapa@onixnet.com with the subject "Informatica GCP".
About the Role
We are looking for a skilled GCP Data Engineer to design, build, and optimize scalable data pipelines and platforms on Google Cloud. The ideal candidate will have hands-on experience with BigQuery, Dataflow, Composer, and Cloud Storage, along with strong SQL and programming skills.

Key Responsibilities
- Design, build, and maintain ETL/ELT pipelines on GCP.
- Develop scalable data models using BigQuery and optimize query performance.
- Orchestrate workflows using Cloud Composer (Airflow) (a minimal sketch follows this posting).
- Work with both structured and unstructured data from diverse sources.
- Implement data quality checks, monitoring, and governance frameworks.
- Collaborate with Data Scientists, Analysts, and Business teams to deliver reliable datasets.
- Ensure data security, compliance, and cost optimization on GCP.
- Debug, monitor, and improve existing pipelines for reliability and efficiency.

Required Skills & Experience
- Strong experience with GCP services: BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Composer.
- Expertise in SQL (BigQuery SQL / Presto SQL) and performance tuning.
- Hands-on experience in Python/Java/Scala for data processing.
- Experience with workflow orchestration tools (Airflow, Composer, or similar).
- Familiarity with CI/CD pipelines, GitHub, and deployments.
- Knowledge of data warehouse design, dimensional modeling, and best practices.
- Strong problem-solving and analytical skills.

Nice-to-Have Skills
- Experience with other cloud platforms (AWS/Azure) is a plus.
- Exposure to Machine Learning pipelines on GCP (Vertex AI).
- Knowledge of Terraform/Infrastructure as Code.
- Understanding of real-time streaming solutions (Kafka, Pub/Sub).

Education
Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
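For illustration only (not part of the role description): a minimal sketch of the kind of Cloud Composer (Airflow) orchestration this role involves, assuming Airflow 2.x with the Google provider installed. The DAG, dataset, and table names are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Daily DAG that rebuilds a (hypothetical) reporting table in BigQuery.
with DAG(
    dag_id="daily_orders_refresh",
    schedule_interval="@daily",      # Airflow 2.x scheduling argument
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    refresh_report = BigQueryInsertJobOperator(
        task_id="refresh_orders_report",
        configuration={
            "query": {
                # Dataset and table names are placeholders, not real project objects.
                "query": """
                    CREATE OR REPLACE TABLE reporting.daily_orders AS
                    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```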
We are looking for a highly skilled Senior Data Engineer with 4 to 6 years of experience in building and managing advanced data solutions. The ideal candidate should have extensive experience with SQL, Ab Initio, Teradata, and Google Cloud Platform (GCP).

Key Responsibilities
- Be part of the team in the design, development, and optimization of large-scale data pipelines, ensuring they meet business and technical requirements.
- Implement data solutions using SQL, Ab Initio, Teradata, and GCP, ensuring scalability, reliability, and performance.
- Mentor and guide colleagues in the development and execution of ETL processes and data integration solutions.
- Take ownership of end-to-end data workflows, from data ingestion to transformation, storage, and accessibility.
- Lead performance tuning and optimization efforts for complex SQL queries and Teradata database systems.
- Design and implement data governance, quality, and security best practices to ensure data integrity and compliance.
- Manage the migration of legacy data systems to cloud-based solutions on Google Cloud Platform (GCP) (see the sketch after this posting).
- Ensure continuous improvement and automation of data pipelines and workflows.
- Troubleshoot and resolve issues related to data quality, pipeline performance, and system integration.
- Stay up to date with industry trends and emerging technologies to drive innovation and improve data engineering practices within the team.

Required Skills
- 4 to 6 years of experience in data engineering or related roles.
- Strong expertise in SQL, Teradata, and Ab Initio.
- Proficient, with strong hands-on experience using Ab Initio development products: GDE, Conduct>IT, Control Center, Continuous Flows, EME, M-HUB, etc.
- Hands-on experience with Unix/shell scripting.
- Proficient in scheduling and analyzing jobs using AutoSys (or a similar scheduler).
- In-depth experience with Google Cloud Platform (GCP), including tools like BigQuery, Cloud Storage, Dataflow, etc.
- Proven track record of leading teams and projects related to data engineering and ETL pipeline development.
- Experience with data warehousing and cloud-native storage solutions.
- Strong analytical and problem-solving skills.
- Experience in setting up and enforcing data governance, security, and compliance standards.

Preferred (Good-to-Have) Skills
- Familiarity with other cloud platforms/services (AWS, Azure).
- Familiarity with other ETL tools such as Informatica, DataStage, Alteryx, etc.
- Knowledge of Big Data technologies like Hadoop, Spark, Hive, Scala, etc.
- Strong communication skills and the ability to collaborate effectively with both technical and non-technical teams.
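Illustrative only (not part of the posting): a short sketch, assuming the google-cloud-bigquery Python client, of the kind of partitioned and clustered table design plus parameterized querying that BigQuery performance and cost tuning typically involves. The dataset, table, and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()  # project and credentials resolved from the environment

# Hypothetical table: partition by date and cluster by customer_id so queries
# that filter on these columns scan fewer bytes.
ddl = """
CREATE TABLE IF NOT EXISTS analytics.orders (
  order_id STRING,
  customer_id STRING,
  order_date DATE,
  amount NUMERIC
)
PARTITION BY order_date
CLUSTER BY customer_id
"""
client.query(ddl).result()

# Filtering on the partition column prunes partitions and limits scanned data.
sql = """
SELECT customer_id, SUM(amount) AS total
FROM analytics.orders
WHERE order_date BETWEEN @start AND @end
GROUP BY customer_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start", "DATE", "2024-01-01"),
        bigquery.ScalarQueryParameter("end", "DATE", "2024-01-31"),
    ]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.customer_id, row.total)
```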