Job Summary/Overview:
We are seeking a highly experienced and skilled Senior GCP Data Engineer to design, develop, and maintain data pipelines and data warehousing solutions on the Google Cloud Platform (GCP). This role requires a strong understanding of data engineering principles and a proven track record of building and managing large-scale data solutions. The ideal candidate will be proficient in a range of GCP services and have experience working with large datasets.

Key Responsibilities:
* Design, develop, and implement robust, scalable data pipelines using GCP services (a minimal example appears after this posting).
* Develop and maintain data warehousing solutions on GCP.
* Perform data modeling, ETL processes, and data quality assurance.
* Optimize data pipeline performance and efficiency.
* Collaborate with other engineers and stakeholders to define data requirements and solutions.
* Troubleshoot and resolve data-related issues.
* Contribute to the development and improvement of data engineering best practices.
* Participate in code reviews and ensure code quality.
* Document technical designs and processes.

Required Qualifications:
* Bachelor's degree in Computer Science, Engineering, or a related field.
* 7+ years of experience as a Data Engineer.
* Extensive experience with GCP services, including BigQuery, Dataflow, Dataproc, Cloud Storage, and Cloud Pub/Sub.
* Proven experience designing and implementing data pipelines using ETL/ELT processes.
* Experience with data warehousing concepts and best practices.
* Strong SQL and data modeling skills.
* Experience working with large datasets.

Preferred Qualifications:
* Master's degree in Computer Science, Engineering, or a related field.
* Experience with data visualization tools.
* Experience with data governance and compliance.
* Experience with containerization technologies (e.g., Docker, Kubernetes).
* Experience with Apache Kafka or similar message queuing systems.
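Illustrative only, not part of the posting: a minimal sketch of the kind of Dataflow pipeline described in the responsibilities above, written with the Apache Beam Python SDK. The project, bucket, table, and schema names are hypothetical placeholders; a production pipeline would add error handling, dead-letter outputs, and tests.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_csv(line: str) -> dict:
    """Turn one CSV line into a BigQuery row dict (hypothetical layout)."""
    user_id, event, ts = line.split(",")
    return {"user_id": user_id, "event": event, "event_ts": ts}


def run() -> None:
    options = PipelineOptions(
        runner="DataflowRunner",            # use "DirectRunner" for local testing
        project="example-project",          # placeholder project id
        region="us-central1",
        temp_location="gs://example-bucket/tmp",
    )
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            # Read raw CSV events from Cloud Storage, skipping the header row.
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv", skip_header_lines=1)
            # Convert each line into a row dictionary matching the BigQuery schema.
            | "Parse" >> beam.Map(parse_csv)
            # Append the rows to a BigQuery table, creating it if needed.
            | "Load" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                schema="user_id:STRING,event:STRING,event_ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            )
        )


if __name__ == "__main__":
    run()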
Function: CGSC Finance
Name: Global Reporting & Analytics
Role: Senior Analyst - Data Analytics and Reporting
Reporting to: Team Manager
Location: Hyderabad
Shift Timings: Business dependent

Role Overview:
The ideal candidate will combine deep knowledge of finance operations (specifically Payables and SAP FICO) with technical proficiency in the Power Platform (Power BI, Power Apps, Power Automate), SQL, Azure, SharePoint, Excel macros, MS Access database management, and Python. This role will be instrumental in driving automation, analytics, and insights to improve financial reporting, compliance, and operational efficiency.

Key Responsibilities:
· Provide strategic, analytic, and reporting support to Global Service Centers and Payables across regions.
· Deliver MIS reporting for Accounts Payable processes, including vendor payments, ageing analysis, GR/IR, payment forecast reports, and compliance metrics.
· Develop and deploy automated dashboards and reports using Power BI and SQL for internal stakeholders and auditors, bringing clarity to complex AP data.
· Automate finance workflows using Power Automate and Excel VBA/macros (e.g., reconciliation, reminders, and reporting), and explore opportunities to automate manual processes.
· Leverage SAP FICO for reporting, audit trails, and transaction analysis; identify, analyze, and interpret trends or patterns in complex data sets; transform data using Python and SQL for reporting.
· Manage data pipelines through Azure Data Services, integrating inputs from SAP, Excel, and cloud databases.
· Use Python for automation: bulk file processing, vendor statement reconciliation, and email/report workflow automation (a minimal example appears after this posting).
· Demonstrate competence in analysis and judgment, customer relationship management, BI tools, and the Microsoft suite, with sufficient Procure-to-Pay knowledge. Partner with Procurement, Supply Chain, IT, and Treasury teams to ensure data consistency and reporting alignment.
· Manage, coach, and develop team members.
· Explore and implement continuous improvement with an owner's mindset.
· Accountable for managing the Supplier Payments database for the entire organization and providing strategic, analytic, and reporting support to Global Service Centers and P2P across regions.

Key Requirements:

Education & Experience:
· Bachelor's or Master's degree in Finance, Accounting, or a related field.
· 6–10 years of relevant experience in Finance MIS or AP analytics roles.
· Strong working knowledge of SAP FICO, especially AP-related T-codes and tables.
· Knowledge of ERP systems and statistics, and experience using statistical packages for analyzing large datasets (Excel, SPSS, SAS, etc.), is preferable.

Technical Skills:
· Strong knowledge of reporting packages (Business Objects).
· Advanced Excel with hands-on experience in VBA/macros.
· Proficiency in Power BI, Power Automate, and Power Apps.
· Strong SQL scripting and experience working with relational databases.
· Exposure to Microsoft Azure (Data Factory, Synapse, or Logic Apps) is highly desirable.
· Experience in data modeling, cleansing, and performance tuning for large datasets.
· Python for data analysis and automation (e.g., pandas, matplotlib, openpyxl).

Soft Skills:
· Strong analytical mindset and attention to detail.
· Effective communication and ability to collaborate with cross-functional teams.
· Proactive problem-solver with a process improvement orientation.
· Ability to manage deadlines and prioritize in a fast-paced environment.

Preferred Certifications (Optional but a plus):
· Microsoft Certified: Power Platform Fundamentals or Data Analyst Associate
· SAP FICO Certification
· Azure Data Fundamentals
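Illustrative only, not part of the posting: a minimal pandas sketch of the vendor statement reconciliation task mentioned in the responsibilities above. The file paths and column names are hypothetical placeholders; a real reconciliation would also handle currencies, partial payments, and tolerances agreed with the business.

import pandas as pd

# Hypothetical inputs: an AP ledger export from SAP and a vendor statement,
# each with columns invoice_no and amount.
ledger = pd.read_excel("sap_ap_ledger.xlsx")
statement = pd.read_excel("vendor_statement.xlsx")

# Outer-join on invoice number so unmatched items on either side are kept.
recon = ledger.merge(
    statement,
    on="invoice_no",
    how="outer",
    suffixes=("_ledger", "_statement"),
    indicator=True,
)

# Flag items missing on one side or with an amount difference above 1 cent.
recon["difference"] = recon["amount_ledger"].fillna(0) - recon["amount_statement"].fillna(0)
exceptions = recon[(recon["_merge"] != "both") | (recon["difference"].abs() > 0.01)]

# Write the exception report for follow-up with the vendor.
exceptions.to_excel("reconciliation_exceptions.xlsx", index=False)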
We are seeking a skilled Data Engineer (minimum 4+ years of experience) with a strong foundation in Python and modern data engineering practices. You will be responsible for building robust, scalable, and efficient data pipelines that power our analytics and product data needs. This role is central to ensuring data quality, reliability, and availability across the organization. You will work closely with data scientists, analysts, and product engineers to support data-driven decision-making and product development.

Core Responsibilities:
* Design, develop, and maintain scalable ETL/ELT pipelines using Python
* Build and optimize data workflows for batch and real-time processing
* Integrate data from various internal and third-party systems
* Implement data quality checks, validation, and monitoring mechanisms (a minimal example appears after this posting)
* Collaborate with cross-functional teams to understand data requirements
* Maintain data warehouse models and schemas to support business intelligence tools
* Ensure adherence to data governance and security standards
* Support orchestration and scheduling of pipelines in production

Core Skillset & Technical Requirements:

Python (Primary Skill):
* Strong experience writing production-grade Python code in a data engineering context
* Familiarity with object-oriented design, code modularity, and clean architecture in Python
* Solid understanding of performance optimization for data processing in Python

Key Python Libraries & Tools:
* Pandas, NumPy – for data manipulation
* SQLAlchemy / PySpark – for interacting with databases or distributed computing
* Requests / HTTPX – for working with APIs and third-party data sources
* Pydantic / Marshmallow – for data validation
* Typer / Click / argparse – for CLI applications and script automation
* Airflow / Prefect / Dagster – for orchestration (Dagster experience is a plus)
* Great Expectations – for data validation (optional but desirable)

Data Integration Tools:
* Experience with data ingestion and transformation pipelines
* Familiarity with Airbyte (desirable)

Databases & Storage:
* Proficiency with SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
* Experience with cloud-native or columnar data stores such as Amazon Redshift, S3, or Parquet/Delta Lake

Cloud Platforms:
* Preferred: experience with the AWS ecosystem (S3, Lambda, Glue, Redshift, etc.)
* Knowledge of cloud security best practices and IAM principles

Other Desirable Skills:
* Containerization using Docker
* Experience with CI/CD workflows for data pipelines
* Familiarity with version control (Git) and code review processes
* Exposure to data modeling and schema design (e.g., dimensional modeling, star/snowflake schema)

Preferred Qualifications:
* 4+ years of experience in data engineering or a related field
* Bachelor's or Master's degree in Computer Science, Engineering, or a related discipline
* Strong problem-solving and communication skills
* Experience working in Agile or fast-paced startup environments
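Illustrative only, not part of the posting: a minimal sketch of the kind of data quality check described in the responsibilities above, using Pydantic (one of the listed libraries) to validate raw records before they are loaded downstream. The record schema and sample data are hypothetical.

from datetime import date
from pydantic import BaseModel, ValidationError


class Order(BaseModel):
    """Expected shape of one incoming order record (hypothetical schema)."""
    order_id: int
    customer_id: int
    amount: float
    order_date: date


def validate_records(raw_records: list[dict]) -> tuple[list[Order], list[dict]]:
    """Split raw records into validated rows and rejects for quarantine."""
    valid, rejected = [], []
    for record in raw_records:
        try:
            valid.append(Order(**record))
        except ValidationError as err:
            # Keep the failing record together with the validation errors
            # so it can be logged or written to a quarantine table.
            rejected.append({"record": record, "errors": err.errors()})
    return valid, rejected


if __name__ == "__main__":
    sample = [
        {"order_id": 1, "customer_id": 42, "amount": "19.99", "order_date": "2024-05-01"},
        {"order_id": "not-a-number", "customer_id": 42, "amount": 5.0, "order_date": "2024-05-01"},
    ]
    ok, bad = validate_records(sample)
    print(f"{len(ok)} valid, {len(bad)} rejected")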